
Maybe America is simply too big | Eli Dourado
The classic economics paper on optimal country size is by Alesina and Spolaore (1997). They advance a number of theoretical claims in the paper, but in my view the most important ones are on the relationship between political and economic integration.

Suppose that the world is full of trade barriers. Tariffs are high, and maybe also it’s just plain expensive to get goods across the ocean, so there’s not a lot of international competition. In this situation, there is a huge advantage to political integration: it buys you economic integration.

In a world of trade barriers, a giant internal free trade area is one of the most valuable public goods that a government can provide. Because many industries feature economies of scale, it’s better to live in a big market. If the only way to get a big market is to live in a big country, then megastates have a huge advantage over microstates.

On the other hand, if economic integration prevails regardless of political integration—say, tariffs are low and shipping is cheap—then political integration doesn’t buy you much. Many of the other public goods that governments provide—law and order, social insurance, etc.—don’t really benefit from large populations beyond a certain point. If you scale from a million people to 100 million people, you aren’t really better off.

As a result, if economic integration prevails, the optimal country size is small, maybe even a city-state.
econotariat  wonkish  2016-election  trump  contrarianism  politics  polisci  usa  scale  measure  convexity-curvature  government  exit-voice  polis  social-choice  diversity  putnam-like  cohesion  trade  nationalism-globalism  economics  alesina  american-nations 
5 days ago by nhaliday
Sci-Hub | The Moral Machine experiment. Nature | 10.1038/s41586-018-0637-6
Preference for inaction
Sparing pedestrians
Sparing the lawful
Sparing females
Sparing the fit
Sparing higher status
Sparing more characters
Sparing the young
Sparing humans

We selected the 130 countries with at least 100 respondents (n range 101–448,125), standardized the nine target AMCEs of each country, and conducted a hierarchical clustering on these nine scores, using Euclidean distance and Ward’s minimum variance method [20]. This analysis identified three distinct ‘moral clusters’ of countries. These are shown in Fig. 3a, and are broadly consistent with both geographical and cultural proximity according to the Inglehart–Welzel Cultural Map 2010–2014 [21].

The first cluster (which we label the Western cluster) contains North America as well as many European countries of Protestant, Catholic, and Orthodox Christian cultural groups. The internal structure within this cluster also exhibits notable face validity, with a sub-cluster containing Scandinavian countries, and a sub-cluster containing Commonwealth countries.

The second cluster (which we call the Eastern cluster) contains many far eastern countries such as Japan and Taiwan that belong to the Confucianist cultural group, and Islamic countries such as Indonesia, Pakistan and Saudi Arabia.

The third cluster (a broadly Southern cluster) consists of the Latin American countries of Central and South America, in addition to some countries that are characterized in part by French influence (for example, metropolitan France, French overseas territories, and territories that were at some point under French leadership). Latin American countries are cleanly separated in their own sub-cluster within the Southern cluster.

...

Fig. 3 | Country-level clusters.

[ed.: I actually rather like how the West's values compare w/ the global mean in this plot.]

...
Participants from individualistic cultures, which emphasize the distinctive value of each individual [23], show a stronger preference for sparing the greater number of characters (Fig. 4a). Furthermore, participants from collectivistic cultures, which emphasize the respect that is due to older members of the community [23], show a weaker preference for sparing younger characters (Fig. 4a, inset).
pdf  study  org:nat  psychology  social-psych  poll  values  data  experiment  empirical  morality  ethics  pop-diff  cultural-dynamics  tradeoffs  death  safety  ai  automation  things  world  gender  biases  status  class  egalitarianism-hierarchy  order-disorder  anarcho-tyranny  crime  age-generation  quantitative-qualitative  number  nature  piracy  exploratory  phalanges  n-factor  europe  the-great-west-whale  nordic  usa  anglo  anglosphere  sinosphere  asia  japan  china  islam  MENA  latin-america  gallic  wonkish  correlation  measure  similarity  dignity  universalism-particularism  law  leviathan  wealth  econ-metrics  institutions  demographics  religion  group-level  within-group  expression-survival  comparison  technocracy  visualization  trees  developing-world  regional-scatter-plots 
5 weeks ago by nhaliday
Measures of cultural distance - Marginal REVOLUTION
A new paper with many authors — most prominently Joseph Henrich — tries to measure the cultural gaps between different countries.  I am reproducing a few of their results (see pp.36-37 for more), noting that higher numbers represent higher gaps:

...

Overall the numbers show much greater cultural distance of other nations from China than from the United States, a significant and under-discussed problem for China. For instance, the United States is about as culturally close to Hong Kong as China is.

[ed.: Japan is closer to the US than China. Interesting. I'd like to see some data based on something other than self-reported values though.]

the study:
Beyond WEIRD Psychology: Measuring and Mapping Scales of Cultural and Psychological Distance: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3259613
We present a new tool that provides a means to measure the psychological and cultural distance between two societies and create a distance scale with any population as the point of comparison. Since psychological data is dominated by samples drawn from the United States or other WEIRD nations, this tool provides a “WEIRD scale” to assist researchers in systematically extending the existing database of psychological phenomena to more diverse and globally representative samples. As the extreme WEIRDness of the literature begins to dissolve, the tool will become more useful for designing, planning, and justifying a wide range of comparative psychological projects. We have made our code available and developed an online application for creating other scales (including the “Sino scale” also presented in this paper). We discuss regional diversity within nations showing the relative homogeneity of the United States. Finally, we use these scales to predict various psychological outcomes.
econotariat  marginal-rev  henrich  commentary  study  summary  list  data  measure  metrics  similarity  culture  cultural-dynamics  sociology  things  world  usa  anglo  anglosphere  china  asia  japan  sinosphere  russia  developing-world  canada  latin-america  MENA  europe  eastern-europe  germanic  comparison  great-powers  thucydides  foreign-policy  the-great-west-whale  generalization  anthropology  within-group  homo-hetero  moments  exploratory  phalanges  the-bones  🎩  🌞  broad-econ  cocktail  n-factor  measurement  expectancy  distribution  self-report  values  expression-survival  uniqueness 
8 weeks ago by nhaliday
The Existential Risk of Math Errors - Gwern.net
How big is this upper bound? Mathematicians have often made errors in proofs, but it’s rarer for ideas to be accepted for a long time and then rejected. We can divide errors into 2 basic cases corresponding to type I and type II errors:

1. Mistakes where the theorem is still true, but the proof was incorrect (type I)
2. Mistakes where the theorem was false, and the proof was also necessarily incorrect (type II)

Before someone comes up with a final answer, a mathematician may have many levels of intuition in formulating & working on the problem, but we’ll consider the final end-product where the mathematician feels satisfied that he has solved it. Case 1 is perhaps the most common case, with innumerable examples; this is sometimes due to mistakes in the proof that anyone would accept as a mistake, but many of these cases are due to changing standards of proof. For example, when David Hilbert discovered errors in Euclid’s proofs which no one noticed before, the theorems were still true, and the gaps were more due to Hilbert being a modern mathematician thinking in terms of formal systems (which of course Euclid did not think in). (David Hilbert himself turns out to be a useful example of the other kind of error: his famous list of 23 problems was accompanied by definite opinions on the outcome of each problem and sometimes timings, several of which were wrong or questionable [5].) Similarly, early calculus used ‘infinitesimals’ which were sometimes treated as being 0 and sometimes treated as an indefinitely small non-zero number; this was incoherent and, strictly speaking, practically all of the calculus results were wrong because they relied on an incoherent concept - but of course the results were some of the greatest mathematical work ever conducted [6], and when later mathematicians put calculus on a more rigorous footing, they immediately re-derived those results (sometimes with important qualifications), and doubtless as modern math evolves, other fields have sometimes needed to go back and clean up the foundations, and will in the future. [7]

...

Isaac Newton, incidentally, gave two proofs of the same solution to a problem in probability, one via enumeration and the other more abstract; the enumeration was correct, but the other proof was totally wrong, and this was not noticed for a long time, leading Stigler to remark:

...

TYPE I > TYPE II?
“Lefschetz was a purely intuitive mathematician. It was said of him that he had never given a completely correct proof, but had never made a wrong guess either.”
- Gian-Carlo Rota [13]

Case 2 is disturbing, since it is a case in which we wind up with false beliefs and also false beliefs about our beliefs (we no longer know that we don’t know). Case 2 could lead to extinction.

...

Except, errors do not seem to be evenly & randomly distributed between case 1 and case 2. There seem to be far more case 1s than case 2s, as already mentioned in the early calculus example: far more than 50% of the early calculus results were correct when checked more rigorously. Richard Hamming attributes to Ralph Boas a comment, from his time editing Mathematical Reviews, that “of the new results in the papers reviewed most are true but the corresponding proofs are perhaps half the time plain wrong”.

...

Gian-Carlo Rota gives us an example with Hilbert:

...

Olga labored for three years; it turned out that all mistakes could be corrected without any major changes in the statement of the theorems. There was one exception, a paper Hilbert wrote in his old age, which could not be fixed; it was a purported proof of the continuum hypothesis, you will find it in a volume of the Mathematische Annalen of the early thirties.

...

Leslie Lamport advocates for machine-checked proofs and a more rigorous style of proofs similar to natural deduction, noting a mathematician acquaintance guesses at a broad error rate of 1/3 [29] and that he routinely found mistakes in his own proofs and, worse, believed false conjectures [30].

[more on these "structured proofs":
https://academia.stackexchange.com/questions/52435/does-anyone-actually-publish-structured-proofs
https://mathoverflow.net/questions/35727/community-experiences-writing-lamports-structured-proofs
]

We can probably add software to that list: early software engineering work found that, dismayingly, bug rates seem to be simply a function of lines of code, and one would expect diseconomies of scale. So one would expect that in going from the ~4,000 lines of code of the Microsoft DOS operating system kernel to the ~50,000,000 lines of code in Windows Server 2003 (with full systems of applications and libraries being even larger: the comprehensive Debian repository in 2007 contained ~323,551,126 lines of code) that the number of active bugs at any time would be… fairly large. Mathematical software is hopefully better, but practitioners still run into issues (eg Durán et al 2014, Fonseca et al 2017) and I don’t know of any research pinning down how buggy key mathematical systems like Mathematica are or how much published mathematics may be erroneous due to bugs. This general problem led to predictions of doom and spurred much research into automated proof-checking, static analysis, and functional languages [31].

[related:
https://mathoverflow.net/questions/11517/computer-algebra-errors
I don't know any interesting bugs in symbolic algebra packages but I know a true, enlightening and entertaining story about something that looked like a bug but wasn't.

Define $\operatorname{sinc} x = (\sin x)/x$.

Someone found the following result in an algebra package: $\int_0^\infty dx\,\operatorname{sinc} x = \pi/2$
They then found the following results:

...

So of course when they got:

$\int_0^\infty dx\,\operatorname{sinc} x\,\operatorname{sinc}(x/3)\,\operatorname{sinc}(x/5)\cdots\operatorname{sinc}(x/15) = \frac{467807924713440738696537864469}{935615849440640907310521750000}\,\pi$

hmm:
Which means that nobody knows Fourier analysis nowadays. Very sad and discouraging story... – fedja Jan 29 '10 at 18:47

--

Because the most popular systems are all commercial, they tend to guard their bug database rather closely -- making them public would seriously cut their sales. For example, for the open source project Sage (which is quite young), you can get a list of all the known bugs from this page. 1582 known issues on Feb.16th 2010 (which includes feature requests, problems with documentation, etc).

That is an order of magnitude less than the commercial systems. And it's not because it is better, it is because it is younger and smaller. It might be better, but until SAGE does a lot of analysis (about 40% of CAS bugs are there) and a fancy user interface (another 40%), it is too hard to compare.

I once ran a graduate course whose core topic was studying the fundamental disconnect between the algebraic nature of CAS and the analytic nature of what it is mostly used for. There are issues of logic -- CASes work more or less in an intensional logic, while most of analysis is stated in a purely extensional fashion. There is no well-defined 'denotational semantics' for expressions-as-functions, which strongly contributes to the deeper bugs in CASes.]

...

Should such widely-believed conjectures as P≠NP or the Riemann hypothesis turn out to be false, then because they are assumed by so many existing proofs, a far larger math holocaust would ensue [38] - and our previous estimates of error rates will turn out to have been substantial underestimates. But it may be a cloud with a silver lining, if it doesn’t come at a time of danger.

https://mathoverflow.net/questions/338607/why-doesnt-mathematics-collapse-down-even-though-humans-quite-often-make-mista

more on formal methods in programming:
https://www.quantamagazine.org/formal-verification-creates-hacker-proof-code-20160920/
https://intelligence.org/2014/03/02/bob-constable/

https://softwareengineering.stackexchange.com/questions/375342/what-are-the-barriers-that-prevent-widespread-adoption-of-formal-methods
Update: measured effort
In the October 2018 issue of Communications of the ACM there is an interesting article about Formally verified software in the real world with some estimates of the effort.

Interestingly (based on OS development for military equipment), it seems that producing formally proved software requires 3.3 times more effort than with traditional engineering techniques. So it's really costly.

On the other hand, it requires 2.3 times less effort to get high security software this way than with traditionally engineered software if you add the effort to make such software certified at a high security level (EAL 7). So if you have high reliability or security requirements there is definitively a business case for going formal.

WHY DON'T PEOPLE USE FORMAL METHODS?: https://www.hillelwayne.com/post/why-dont-people-use-formal-methods/
You can see examples of how all of these look at Let’s Prove Leftpad. HOL4 and Isabelle are good examples of “independent theorem” specs, SPARK and Dafny have “embedded assertion” specs, and Coq and Agda have “dependent type” specs. [6]

If you squint a bit it looks like these three forms of code spec map to the three main domains of automated correctness checking: tests, contracts, and types. This is not a coincidence. Correctness is a spectrum, and formal verification is one extreme of that spectrum. As we reduce the rigour (and effort) of our verification we get simpler and narrower checks, whether that means limiting the explored state space, using weaker types, or pushing verification to the runtime. Any means of total specification then becomes a means of partial specification, and vice versa: many consider Cleanroom a formal verification technique, which primarily works by pushing code review far beyond what’s humanly possible.
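
[A rough illustration of the "embedded assertion" end of that spectrum (my own sketch in plain C++ with runtime asserts, not one of the SPARK/Dafny entries from Let's Prove Leftpad): the specification lives next to the code but is only checked at runtime, i.e. a partial rather than total verification.

#include <algorithm>
#include <cassert>
#include <string>

std::string leftpad(char pad, std::size_t n, const std::string& s) {
    std::string out = (s.size() >= n)
        ? s
        : std::string(n - s.size(), pad) + s;

    // Property 1: the result has length max(n, |s|).
    assert(out.size() == std::max(n, s.size()));
    // Property 2: the original string is a suffix of the result.
    assert(out.compare(out.size() - s.size(), s.size(), s) == 0);
    // Property 3: everything before that suffix is padding.
    std::size_t first_non_pad = out.find_first_not_of(pad);
    assert(first_non_pad == std::string::npos ||
           first_non_pad >= out.size() - s.size());
    return out;
}

int main() {
    assert(leftpad('!', 5, "foo") == "!!foo");
    assert(leftpad('!', 2, "foo") == "foo");
}
]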

...

The question, then: “is 90/95/99% correct significantly cheaper than 100% correct?” The answer is very yes. We all are comfortable saying that a codebase we’ve well-tested and well-typed is mostly correct modulo a few fixes in prod, and we’re even writing more than four lines of code a day. In fact, the vast… [more]
ratty  gwern  analysis  essay  realness  truth  correctness  reason  philosophy  math  proofs  formal-methods  cs  programming  engineering  worse-is-better/the-right-thing  intuition  giants  old-anglo  error  street-fighting  heuristic  zooming  risk  threat-modeling  software  lens  logic  inference  physics  differential  geometry  estimate  distribution  robust  speculation  nonlinearity  cost-benefit  convexity-curvature  measure  scale  trivia  cocktail  history  early-modern  europe  math.CA  rigor  news  org:mag  org:sci  miri-cfar  pdf  thesis  comparison  examples  org:junk  q-n-a  stackex  pragmatic  tradeoffs  cracker-prog  techtariat  invariance  DSL  chart  ecosystem  grokkability  heavyweights  CAS  static-dynamic  lower-bounds  complexity  tcs  open-problems  big-surf  ideas  certificates-recognition  proof-systems  PCP  mediterranean  SDP  meta:prediction  epistemic  questions  guessing  distributed  overflow  nibble  soft-question  track-record  big-list  hmm  frontier  state-of-art  move-fast-(and-break-things)  grokkability-clarity  technical-writing  trust 
july 2019 by nhaliday
Computer latency: 1977-2017
If we look at overall results, the fastest machines are ancient. Newer machines are all over the place. Fancy gaming rigs with unusually high refresh-rate displays are almost competitive with machines from the late 70s and early 80s, but “normal” modern computers can’t compete with thirty to forty year old machines.

...

If we exclude the game boy color, which is a different class of device than the rest, all of the quickest devices are Apple phones or tablets. The next quickest device is the blackberry q10. Although we don’t have enough data to really tell why the blackberry q10 is unusually quick for a non-Apple device, one plausible guess is that it’s helped by having actual buttons, which are easier to implement with low latency than a touchscreen. The other two devices with actual buttons are the gameboy color and the kindle 4.

After the iphones and non-kindle button devices, we have a variety of Android devices of various ages. At the bottom, we have the ancient palm pilot 1000 followed by the kindles. The palm is hamstrung by a touchscreen and display created in an era with much slower touchscreen technology and the kindles use e-ink displays, which are much slower than the displays used on modern phones, so it’s not surprising to see those devices at the bottom.

...

Almost every computer and mobile device that people buy today is slower than common models of computers from the 70s and 80s. Low-latency gaming desktops and the ipad pro can get into the same range as quick machines from thirty to forty years ago, but most off-the-shelf devices aren’t even close.

If we had to pick one root cause of latency bloat, we might say that it’s because of “complexity”. Of course, we all know that complexity is bad. If you’ve been to a non-academic non-enterprise tech conference in the past decade, there’s a good chance that there was at least one talk on how complexity is the root of all evil and we should aspire to reduce complexity.

Unfortunately, it's a lot harder to remove complexity than to give a talk saying that we should remove complexity. A lot of the complexity buys us something, either directly or indirectly. When we looked at the input of a fancy modern keyboard vs. the apple 2 keyboard, we saw that using a relatively powerful and expensive general purpose processor to handle keyboard inputs can be slower than dedicated logic for the keyboard, which would both be simpler and cheaper. However, using the processor gives people the ability to easily customize the keyboard, and also pushes the problem of “programming” the keyboard from hardware into software, which reduces the cost of making the keyboard. The more expensive chip increases the manufacturing cost, but considering how much of the cost of these small-batch artisanal keyboards is the design cost, it seems like a net win to trade manufacturing cost for ease of programming.

...

If you want a reference to compare the kindle against, a moderately quick page turn in a physical book appears to be about 200 ms.

https://twitter.com/gravislizard/status/927593460642615296
almost everything on computers is perceptually slower than it was in 1983
https://archive.is/G3D5K
https://archive.is/vhDTL
https://archive.is/a3321
https://archive.is/imG7S
techtariat  dan-luu  performance  time  hardware  consumerism  objektbuch  data  history  reflection  critique  software  roots  tainter  engineering  nitty-gritty  ui  ux  hci  ios  mobile  apple  amazon  sequential  trends  increase-decrease  measure  analysis  measurement  os  systems  IEEE  intricacy  desktop  benchmarks  rant  carmack  system-design  degrees-of-freedom  keyboard  terminal  editors  links  input-output  networking  world  s:**  multi  twitter  social  discussion  tech  programming  web  internet  speed  backup  worrydream  interface  metal-to-virtual  latency-throughput  workflow  form-design  interface-compatibility 
july 2019 by nhaliday
c++ - Which is faster: Stack allocation or Heap allocation - Stack Overflow
On my machine, using g++ 3.4.4 on Windows, I get "0 clock ticks" for both stack and heap allocation for anything less than 100000 allocations, and even then I get "0 clock ticks" for stack allocation and "15 clock ticks" for heap allocation. When I measure 10,000,000 allocations, stack allocation takes 31 clock ticks and heap allocation takes 1562 clock ticks.

so maybe around 100x difference? what does that work out to in terms of total workload?

hmm:
http://vlsiarch.eecs.harvard.edu/wp-content/uploads/2017/02/asplos17mallacc.pdf
Recent work shows that dynamic memory allocation consumes nearly 7% of all cycles in Google datacenters.

That's not too bad actually. Seems like I shouldn't worry about shifting from heap to stack/globals unless profiling says it's important, particularly for non-oly stuff.

edit: Actually, a factor of ~100x on something that takes 7% of cycles is pretty high; it could increase the constant factor by almost an order of magnitude.

edit: Well actually that's not the right math: speeding the 7% up by 100x takes you from 100% to 93% + 7%×0.01 ≈ 93.1% of total cycles, which is not much smaller than 100%, so the overall win is at most about 7%.
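
[For reference, a minimal sketch of the kind of micro-benchmark the answer describes (my own code, not the answer's; absolute numbers depend heavily on compiler, optimization level, and allocator, and an optimizer can elide unused allocations entirely, so treat the output as illustrative only):

#include <chrono>
#include <iostream>

int main() {
    constexpr int kIters = 10'000'000;
    using clock = std::chrono::steady_clock;
    volatile int sink = 0;  // keeps the work observable so it isn't optimized away

    auto t0 = clock::now();
    for (int i = 0; i < kIters; ++i) {
        char buf[64];                 // stack "allocation": just a stack-pointer bump
        buf[0] = static_cast<char>(i);
        sink = sink + buf[0];
    }
    auto t1 = clock::now();
    for (int i = 0; i < kIters; ++i) {
        char* buf = new char[64];     // heap allocation: goes through the allocator
        buf[0] = static_cast<char>(i);
        sink = sink + buf[0];
        delete[] buf;
    }
    auto t2 = clock::now();

    auto ms = [](auto d) {
        return std::chrono::duration_cast<std::chrono::milliseconds>(d).count();
    };
    std::cout << "stack: " << ms(t1 - t0) << " ms, heap: " << ms(t2 - t1)
              << " ms (sink=" << static_cast<int>(sink) << ")\n";
}
]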
q-n-a  stackex  programming  c(pp)  systems  memory-management  performance  intricacy  comparison  benchmarks  data  objektbuch  empirical  google  papers  nibble  time  measure  pro-rata  distribution  multi  pdf  oly-programming  computer-memory 
june 2019 by nhaliday
C++ Core Guidelines
This document is a set of guidelines for using C++ well. The aim of this document is to help people to use modern C++ effectively. By “modern C++” we mean effective use of the ISO C++ standard (currently C++17, but almost all of our recommendations also apply to C++14 and C++11). In other words, what would you like your code to look like in 5 years’ time, given that you can start now? In 10 years’ time?

https://isocpp.github.io/CppCoreGuidelines/
“Within C++ is a smaller, simpler, safer language struggling to get out.” – Bjarne Stroustrup

...

The guidelines are focused on relatively higher-level issues, such as interfaces, resource management, memory management, and concurrency. Such rules affect application architecture and library design. Following the rules will lead to code that is statically type safe, has no resource leaks, and catches many more programming logic errors than is common in code today. And it will run fast - you can afford to do things right.

We are less concerned with low-level issues, such as naming conventions and indentation style. However, no topic that can help a programmer is out of bounds.

Our initial set of rules emphasize safety (of various forms) and simplicity. They may very well be too strict. We expect to have to introduce more exceptions to better accommodate real-world needs. We also need more rules.

...

The rules are designed to be supported by an analysis tool. Violations of rules will be flagged with references (or links) to the relevant rule. We do not expect you to memorize all the rules before trying to write code.
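
[A small sketch in the spirit of those rules (my own illustration, not an example taken from the guidelines): express ownership in types so that resource leaks are ruled out by construction rather than by discipline.

#include <memory>
#include <string>
#include <vector>

struct Widget {
    std::string name;
};

// The style the guidelines steer away from: ownership is implicit, leaks are easy.
Widget* make_widget_raw(const std::string& name) {
    return new Widget{name};  // caller must remember to delete
}

// Guidelines-flavored style: ownership is explicit in the return type, and
// release happens automatically when the owner goes out of scope.
std::unique_ptr<Widget> make_widget(const std::string& name) {
    auto w = std::make_unique<Widget>();
    w->name = name;
    return w;
}

int main() {
    std::vector<std::unique_ptr<Widget>> widgets;
    widgets.push_back(make_widget("a"));
    widgets.push_back(make_widget("b"));
    // No explicit delete anywhere: destroying the vector releases everything.
}
]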

contrary:
https://aras-p.info/blog/2018/12/28/Modern-C-Lamentations/
This will be a long wall of text, and kinda random! My main points are:
1. C++ compile times are important,
2. Non-optimized build performance is important,
3. Cognitive load is important. I don’t expand much on this here, but if a programming language or a library makes me feel stupid, then I’m less likely to use it or like it. C++ does that a lot :)
programming  engineering  pls  best-practices  systems  c(pp)  guide  metabuch  objektbuch  reference  cheatsheet  elegance  frontier  libraries  intricacy  advanced  advice  recommendations  big-picture  novelty  lens  philosophy  state  error  types  concurrency  memory-management  performance  abstraction  plt  compilers  expert-experience  multi  checking  devtools  flux-stasis  safety  system-design  techtariat  time  measure  dotnet  comparison  examples  build-packaging  thinking  worse-is-better/the-right-thing  cost-benefit  tradeoffs  essay  commentary  oop  correctness  computer-memory  error-handling  resources-effects  latency-throughput 
june 2019 by nhaliday
c++ - Why is size_t unsigned? - Stack Overflow
size_t is unsigned for historical reasons.

On an architecture with 16 bit pointers, such as the "small" model DOS programming, it would be impractical to limit strings to 32 KB.

For this reason, the C standard requires (via required ranges) ptrdiff_t, the signed counterpart to size_t and the result type of pointer difference, to be effectively 17 bits.

Those reasons can still apply in parts of the embedded programming world.

However, they do not apply to modern 32-bit or 64-bit programming, where a much more important consideration is that the unfortunate implicit conversion rules of C and C++ make unsigned types into bug attractors, when they're used for numbers (and hence, arithmetical operations and magnitude comparisons). With 20-20 hindsight we can now see that the decision to adopt those particular conversion rules, where e.g. string( "Hi" ).length() < -3 is practically guaranteed, was rather silly and impractical. However, that decision means that in modern programming, adopting unsigned types for numbers has severe disadvantages and no advantages – except for satisfying the feelings of those who find unsigned to be a self-descriptive type name, and fail to think of typedef int MyType.

Summing up, it was not a mistake. It was a decision for then very rational, practical programming reasons. It had nothing to do with transferring expectations from bounds-checked languages like Pascal to C++ (which is a fallacy, but a very very common one, even if some of those who do it have never heard of Pascal).
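
[A minimal demo of the conversion gotcha the answer refers to (my own snippet; the behavior shown assumes the usual case where size_t is at least as wide as int, e.g. a 64-bit size_t):

#include <iostream>
#include <string>

int main() {
    std::string s = "Hi";
    // s.length() is size_t (unsigned). The usual arithmetic conversions turn -3
    // into a huge unsigned value (2^64 - 3 with a 64-bit size_t), so this is
    // 2 < 18446744073709551613, i.e. true.
    std::cout << std::boolalpha << (s.length() < -3) << '\n';  // prints: true

    // One common mitigation: do the comparison in a signed type.
    auto len = static_cast<long long>(s.length());
    std::cout << (len < -3) << '\n';                           // prints: false
}
]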
q-n-a  stackex  c(pp)  systems  embedded  hardware  measure  types  signum  gotchas  roots  explanans  pls  programming 
june 2019 by nhaliday
How Many Keystrokes Programers Type a Day?
I was quite surprised how low my own figure is. But thinking about it… it makes sense. Even though we sit in front of the computer all day, the actual typing is a small percentage of that. Most of the time, you have lunch, run errands, browse the web, read docs, chat on the phone, run to the bathroom. Perhaps only half of your work time is active coding or writing email/docs. Of that duration, perhaps the majority of the time you are digesting the info on screen.
techtariat  convexity-curvature  measure  keyboard  time  cost-benefit  data  time-use  workflow  efficiency  prioritizing  editors 
june 2019 by nhaliday
Lindy effect - Wikipedia
The Lindy effect is a theory that the future life expectancy of some non-perishable things like a technology or an idea is proportional to their current age, so that every additional period of survival implies a longer remaining life expectancy.[1] Where the Lindy effect applies, mortality rate decreases with time. In contrast, living creatures and mechanical things follow a bathtub curve where, after "childhood", the mortality rate increases with time. Because life expectancy is probabilistically derived, a thing may become extinct before its "expected" survival. In other words, one needs to gauge both the age and "health" of the thing to determine continued survival.
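
[A quick sketch (my addition, not from the article) of one standard way to make "future life expectancy proportional to current age" precise: assume a Pareto (power-law) survival curve $P(T > t) = (m/t)^\alpha$ for $t \ge m$ with tail index $\alpha > 1$. Then
$\mathbb{E}[T - t \mid T > t] = \frac{\int_t^\infty P(T > u)\,du}{P(T > t)} = \frac{m^\alpha t^{1-\alpha}/(\alpha - 1)}{m^\alpha t^{-\alpha}} = \frac{t}{\alpha - 1}$,
so expected remaining lifetime grows linearly with the age already attained; an exponential (memoryless) lifetime would give a constant instead, and a bathtub-curve lifetime a decreasing one, matching the contrast drawn above.]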
wiki  reference  concept  metabuch  ideas  street-fighting  planning  comparison  time  distribution  flux-stasis  history  measure  correlation  arrows  branches  pro-rata  manifolds  aging  stylized-facts  age-generation  robust  technology  thinking  cost-benefit  conceptual-vocab  methodology  threat-modeling  efficiency  neurons  tools  track-record  ubiquity 
june 2019 by nhaliday
Analysis of Current and Future Computer Science Needs via Advertised Faculty Searches for 2019 - CRN
Differences are also seen when analyzing results based on the type of institution. Positions related to Security have the highest percentages for all but top-100 institutions. The area of Artificial Intelligence/Data Mining/Machine Learning is of most interest for top-100 PhD institutions. Roughly 35% of positions for PhD institutions are in data-oriented areas. The results show a strong interest in data-oriented areas by public PhD and private PhD, MS, and BS institutions while public MS and BS institutions are most interested in Security.
org:edu  data  analysis  visualization  trends  recruiting  jobs  career  planning  academia  higher-ed  cs  tcs  machine-learning  systems  pro-rata  measure  long-term  🎓  uncertainty  progression  grad-school  phd  distribution  ranking  top-n  security  status  s-factor  comparison  homo-hetero  correlation  org:ngo  white-paper  cost-benefit 
june 2019 by nhaliday
quality - Is the average number of bugs per loc the same for different programming languages? - Software Engineering Stack Exchange
Contrary to intuition, the number of errors per 1000 lines of code does seem to be relatively constant, regardless of the specific language involved. Steve McConnell, author of Code Complete and Software Estimation: Demystifying the Black Art, goes over this area in some detail.

I don't have my copies readily to hand - they're sitting on my bookshelf at work - but a quick Google found a relevant quote:

Industry Average: "about 15 - 50 errors per 1000 lines of delivered code."
(Steve) further says this is usually representative of code that has some level of structured programming behind it, but probably includes a mix of coding techniques.

Quoted from Code Complete, found here: http://mayerdan.com/ruby/2012/11/11/bugs-per-line-of-code-ratio/

If memory serves correctly, Steve goes into a thorough discussion of this, showing that the figures are constant across languages (C, C++, Java, Assembly and so on) and despite difficulties (such as defining what "line of code" means).

Most importantly he has lots of citations for his sources - he's not offering unsubstantiated opinions, but has the references to back them up.

[ed.: I think this is delivered code? So after testing, debugging, etc. I'm more interested in the metric for the moment after you've gotten something to compile.

edit: cf https://pinboard.in/u:nhaliday/b:0a6eb68166e6]
q-n-a  stackex  programming  engineering  nitty-gritty  error  flux-stasis  books  recommendations  software  checking  debugging  pro-rata  pls  comparison  parsimony  measure  data  objektbuch  speculation  accuracy  density  correctness  estimate  street-fighting  multi  quality  stylized-facts  methodology 
april 2019 by nhaliday
A cross-language perspective on speech information rate
Figure 2.

English (IR_EN = 1.08) shows a higher Information Rate than Vietnamese (IR_VI = 1). On the contrary, Japanese exhibits the lowest IR_L value of the sample. Moreover, one can observe that several languages may reach very close IR_L with different encoding strategies: Spanish is characterized by a fast rate of low-density syllables while Mandarin exhibits a 34% slower syllabic rate with syllables ‘denser’ by a factor of 49%. Finally, their Information Rates differ only by 4%.

Is spoken English more efficient than other languages?: https://linguistics.stackexchange.com/questions/2550/is-spoken-english-more-efficient-than-other-languages
As a translator, I can assure you that English is no more efficient than other languages.
--
[some comments on a different answer:]
Russian, when spoken, is somewhat less efficient than English, and that is for sure. No one who has ever worked as an interpreter can deny it. You can convey somewhat more information in English than in Russian within an hour. The English language is not constrained by the rigid case and gender systems of the Russian language, which somewhat reduce the information density of the Russian language. The rules of the Russian language force the speaker to incorporate sometimes unnecessary details in his speech, which can be problematic for interpreters – user74809 Nov 12 '18 at 12:48
But in writing, though, I do think that Russian is somewhat superior. However, when it comes to common daily speech, I do not think that anyone can claim that English is less efficient than Russian. As a matter of fact, I also find Russian to be somewhat more mentally taxing than English when interpreting. I mean, anyone who has lived in the world of Russian and then moved to the world of English is certain to notice that English is somewhat more efficient in everyday life. It is not a night-and-day difference, but it is certainly noticeable. – user74809 Nov 12 '18 at 13:01
...
By the way, I am not knocking Russian. I love Russian, it is my mother tongue and the only language, in which I sound like a native speaker. I mean, I still have a pretty thick Russian accent. I am not losing it anytime soon, if ever. But like I said, living in both worlds, the Moscow world and the Washington D.C. world, I do notice that English is objectively more efficient, even if I am myself not as efficient in it as most other people. – user74809 Nov 12 '18 at 13:40

Do most languages need more space than English?: https://english.stackexchange.com/questions/2998/do-most-languages-need-more-space-than-english
Speaking as a translator, I can share a few rules of thumb that are popular in our profession:
- Hebrew texts are usually shorter than their English equivalents by approximately 1/3. To a large extent, that can be attributed to cheating, what with no vowels and all.
- Spanish, Portuguese and French (I guess we can just settle on Romance) texts are longer than their English counterparts by about 1/5 to 1/4.
- Scandinavian languages are pretty much on par with English. Swedish is a tiny bit more compact.
- Whether or not Russian (and by extension, Ukrainian and Belorussian) is more compact than English is subject to heated debate, and if you ask five people, you'll be presented with six different opinions. However, everybody seems to agree that the difference is just a couple percent, be it this way or the other.

--

A point of reference from the website I maintain. The files where we store the translations have the following sizes:

English: 200k
Portuguese: 208k
Spanish: 209k
German: 219k
And the translations are out of date. That is, there are strings in the English file that aren't yet in the other files.

For Chinese, the situation is a bit different because the character encoding comes into play. Chinese text will have shorter strings, because most words are one or two characters, but each character takes 3–4 bytes (for UTF-8 encoding), so each word is 3–12 bytes long on average. So visually the text takes less space but in terms of the information exchanged it uses more space. This Language Log post suggests that if you account for the encoding and remove redundancy in the data using compression you find that English is slightly more efficient than Chinese.
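
[A tiny check of the encoding point (my own snippet, not from the answer; it assumes the source file itself is saved as UTF-8):

#include <iostream>
#include <string>

int main() {
    std::string english = "hello";  // 5 characters, 5 bytes
    std::string chinese = "你好";    // 2 characters, but 3 bytes each in UTF-8
    std::cout << "english bytes: " << english.size() << '\n';  // 5
    std::cout << "chinese bytes: " << chinese.size() << '\n';  // 6
}

So the Chinese string is shorter by character count but not by byte count, which is why byte-level (and compressed) comparisons are needed before calling one language "more efficient".]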

Is English more efficient than Chinese after all?: https://languagelog.ldc.upenn.edu/nll/?p=93
[Executive summary: Who knows?]

This follows up on a series of earlier posts about the comparative efficiency — in terms of text size — of different languages ("One world, how many bytes?", 8/5/2005; "Comparing communication efficiency across languages", 4/4/2008; "Mailbag: comparative communication efficiency", 4/5/2008). Hinrich Schütze wrote:
pdf  study  language  foreign-lang  linguistics  pro-rata  bits  communication  efficiency  density  anglo  japan  asia  china  mediterranean  data  multi  comparison  writing  meta:reading  measure  compression  empirical  evidence-based  experiment  analysis  chart  trivia  cocktail  org:edu 
february 2019 by nhaliday
Which benchmark programs are faster? | Computer Language Benchmarks Game
old:
https://salsa.debian.org/benchmarksgame-team/archive-alioth-benchmarksgame
https://web.archive.org/web/20170331153459/http://benchmarksgame.alioth.debian.org/
includes Scala

very outdated but more languages: https://web.archive.org/web/20110401183159/http://shootout.alioth.debian.org:80/

OCaml seems to offer the best tradeoff of performance vs parsimony (Haskell not so much :/)
https://blog.chewxy.com/2019/02/20/go-is-average/
http://blog.gmarceau.qc.ca/2009/05/speed-size-and-dependability-of.html
old official: https://web.archive.org/web/20130731195711/http://benchmarksgame.alioth.debian.org/u64q/code-used-time-used-shapes.php
https://web.archive.org/web/20121125103010/http://shootout.alioth.debian.org/u64q/code-used-time-used-shapes.php
Haskell does better here

other PL benchmarks:
https://github.com/kostya/benchmarks
BF 2.0:
Kotlin, C++ (GCC), Rust < Nim, D (GDC,LDC), Go, MLton < Crystal, Go (GCC), C# (.NET Core), Scala, Java, OCaml < D (DMD) < C# Mono < Javascript V8 < F# Mono, Javascript Node, Haskell (MArray) << LuaJIT << Python PyPy < Haskell < Racket <<< Python << Python3
mandel.b:
C++ (GCC) << Crystal < Rust, D (GDC), Go (GCC) < Nim, D (LDC) << C# (.NET Core) < MLton << Kotlin << OCaml << Scala, Java << D (DMD) << Go << C# Mono << Javascript Node << Haskell (MArray) << LuaJIT < Python PyPy << F# Mono <<< Racket
https://github.com/famzah/langs-performance
C++, Rust, Java w/ custom non-stdlib code < Python PyPy < C# .Net Core < Javascript Node < Go, unoptimized C++ (no -O2) << PHP << Java << Python3 << Python
comparison  pls  programming  performance  benchmarks  list  top-n  ranking  systems  time  multi  🖥  cost-benefit  tradeoffs  data  analysis  plots  visualization  measure  intricacy  parsimony  ocaml-sml  golang  rust  jvm  javascript  c(pp)  functional  haskell  backup  scala  realness  generalization  accuracy  techtariat  crosstab  database  repo  objektbuch  static-dynamic  gnu 
december 2018 by nhaliday
Lateralization of brain function - Wikipedia
Language
Language functions such as grammar, vocabulary and literal meaning are typically lateralized to the left hemisphere, especially in right handed individuals.[3] While language production is left-lateralized in up to 90% of right-handers, it is more bilateral, or even right-lateralized, in approximately 50% of left-handers.[4]

Broca's area and Wernicke's area, two areas associated with the production of speech, are located in the left cerebral hemisphere for about 95% of right-handers, but about 70% of left-handers.[5]:69

Auditory and visual processing
The processing of visual and auditory stimuli, spatial manipulation, facial perception, and artistic ability are represented bilaterally.[4] Numerical estimation, comparison and online calculation depend on bilateral parietal regions[6][7] while exact calculation and fact retrieval are associated with left parietal regions, perhaps due to their ties to linguistic processing.[6][7]

...

Depression is linked with a hyperactive right hemisphere, with evidence of selective involvement in "processing negative emotions, pessimistic thoughts and unconstructive thinking styles", as well as vigilance, arousal and self-reflection, and a relatively hypoactive left hemisphere, "specifically involved in processing pleasurable experiences" and "relatively more involved in decision-making processes".

Chaos and Order; the right and left hemispheres: https://orthosphere.wordpress.com/2018/05/23/chaos-and-order-the-right-and-left-hemispheres/
In The Master and His Emissary, Iain McGilchrist writes that a creature like a bird needs two types of consciousness simultaneously. It needs to be able to focus on something specific, such as pecking at food, while it also needs to keep an eye out for predators which requires a more general awareness of environment.

These are quite different activities. The Left Hemisphere (LH) is adapted for a narrow focus. The Right Hemisphere (RH) for the broad. The brains of human beings have the same division of function.

The LH governs the right side of the body, the RH, the left side. With birds, the left eye (RH) looks for predators, the right eye (LH) focuses on food and specifics. Since danger can take many forms and is unpredictable, the RH has to be very open-minded.

The LH is for narrow focus, the explicit, the familiar, the literal, tools, mechanism/machines and the man-made. The broad focus of the RH is necessarily more vague and intuitive and handles the anomalous, novel, metaphorical, the living and organic. The LH is high resolution but narrow, the RH low resolution but broad.

The LH exhibits unrealistic optimism and self-belief. The RH has a tendency towards depression and is much more realistic about a person’s own abilities. LH has trouble following narratives because it has a poor sense of “wholes.” In art it favors flatness, abstract and conceptual art, black and white rather than color, simple geometric shapes and multiple perspectives all shoved together, e.g., cubism. Particularly RH paintings emphasize vistas with great depth of field and thus space and time,[1] emotion, figurative painting and scenes related to the life world. In music, LH likes simple, repetitive rhythms. The RH favors melody, harmony and complex rhythms.

...

Schizophrenia is a disease of extreme LH emphasis. Since empathy is RH and the ability to notice emotional nuance facially, vocally and bodily expressed, schizophrenics tend to be paranoid and are often convinced that the real people they know have been replaced by robotic imposters. This is at least partly because they lose the ability to intuit what other people are thinking and feeling – hence they seem robotic and suspicious.

Oswald Spengler’s The Decline of the West as well as McGilchrist characterize the West as awash in phenomena associated with an extreme LH emphasis. Spengler argues that Western civilization was originally much more RH (to use McGilchrist’s categories) and that all its most significant artistic (in the broadest sense) achievements were triumphs of RH accentuation.

The RH is where novel experiences and the anomalous are processed and where mathematical, and other, problems are solved. The RH is involved with the natural, the unfamiliar, the unique, emotions, the embodied, music, humor, understanding intonation and emotional nuance of speech, the metaphorical, nuance, and social relations. It has very little speech, but the RH is necessary for processing all the nonlinguistic aspects of speaking, including body language. Understanding what someone means by vocal inflection and facial expressions is an intuitive RH process rather than explicit.

...

RH is very much the center of lived experience; of the life world with all its depth and richness. The RH is “the master” from the title of McGilchrist’s book. The LH ought to be no more than the emissary; the valued servant of the RH. However, in the last few centuries, the LH, which has tyrannical tendencies, has tried to become the master. The LH is where the ego is predominantly located. In split brain patients where the LH and the RH are surgically divided (this is done sometimes in the case of epileptic patients) one hand will sometimes fight with the other. In one man’s case, one hand would reach out to hug his wife while the other pushed her away. One hand reached for one shirt, the other another shirt. Or a patient will be driving a car and one hand will try to turn the steering wheel in the opposite direction. In these cases, the “naughty” hand is usually the left hand (RH), while the patient tends to identify herself with the right hand governed by the LH. The two hemispheres have quite different personalities.

The connection between LH and ego can also be seen in the fact that the LH is competitive, contentious, and agonistic. It wants to win. It is the part of you that hates to lose arguments.

Using the metaphor of Chaos and Order, the RH deals with Chaos – the unknown, the unfamiliar, the implicit, the emotional, the dark, danger, mystery. The LH is connected with Order – the known, the familiar, the rule-driven, the explicit, and light of day. Learning something means to take something unfamiliar and making it familiar. Since the RH deals with the novel, it is the problem-solving part. Once understood, the results are dealt with by the LH. When learning a new piece on the piano, the RH is involved. Once mastered, the result becomes a LH affair. The muscle memory developed by repetition is processed by the LH. If errors are made, the activity returns to the RH to figure out what went wrong; the activity is repeated until the correct muscle memory is developed in which case it becomes part of the familiar LH.

Science is an attempt to find Order. It would not be necessary if people lived in an entirely orderly, explicit, known world. The lived context of science implies Chaos. Theories are reductive and simplifying and help to pick out salient features of a phenomenon. They are always partial truths, though some are more partial than others. The alternative to a certain level of reductionism or partialness would be to simply reproduce the world which of course would be both impossible and unproductive. The test for whether a theory is sufficiently non-partial is whether it is fit for purpose and whether it contributes to human flourishing.

...

Analytic philosophers pride themselves on trying to do away with vagueness. To do so, they tend to jettison context which cannot be brought into fine focus. However, in order to understand things and discern their meaning, it is necessary to have the big picture, the overview, as well as the details. There is no point in having details if the subject does not know what they are details of. Such philosophers also tend to leave themselves out of the picture even when what they are thinking about has reflexive implications. John Locke, for instance, tried to banish the RH from reality. All phenomena having to do with subjective experience he deemed unreal and once remarked about metaphors, a RH phenomenon, that they are “perfect cheats.” Analytic philosophers tend to check the logic of the words on the page and not to think about what those words might say about them. The trick is for them to recognize that they and their theories, which exist in minds, are part of reality too.

The RH test for whether someone actually believes something can be found by examining his actions. If he finds that he must regard his own actions as free, and, in order to get along with other people, must also attribute free will to them and treat them as free agents, then he effectively believes in free will – no matter his LH theoretical commitments.

...

We do not know the origin of life. We do not know how or even if consciousness can emerge from matter. We do not know the nature of 96% of the matter of the universe. Clearly all these things exist. They can provide the subject matter of theories but they continue to exist as theorizing ceases or theories change. Not knowing how something is possible is irrelevant to its actual existence. An inability to explain something is ultimately neither here nor there.

If thought begins and ends with the LH, then thinking has no content – content being provided by experience (RH), and skepticism and nihilism ensue. The LH spins its wheels self-referentially, never referring back to experience. Theory assumes such primacy that it will simply outlaw experiences and data inconsistent with it; a profoundly wrong-headed approach.

...

Gödel’s Theorem proves that not everything true can be proven to be true. This means there is an ineradicable role for faith, hope and intuition in every moderately complex human intellectual endeavor. There is no one set of consistent axioms from which all other truths can be derived.

Alan Turing’s proof of the halting problem proves that there is no effective procedure for finding effective procedures. Without a mechanical decision procedure, (LH), when it comes to … [more]
gnon  reflection  books  summary  review  neuro  neuro-nitgrit  things  thinking  metabuch  order-disorder  apollonian-dionysian  bio  examples  near-far  symmetry  homo-hetero  logic  inference  intuition  problem-solving  analytical-holistic  n-factor  europe  the-great-west-whale  occident  alien-character  detail-architecture  art  theory-practice  philosophy  being-becoming  essence-existence  language  psychology  cog-psych  egalitarianism-hierarchy  direction  reason  learning  novelty  science  anglo  anglosphere  coarse-fine  neurons  truth  contradiction  matching  empirical  volo-avolo  curiosity  uncertainty  theos  axioms  intricacy  computation  analogy  essay  rhetoric  deep-materialism  new-religion  knowledge  expert-experience  confidence  biases  optimism  pessimism  realness  whole-partial-many  theory-of-mind  values  competition  reduction  subjective-objective  communication  telos-atelos  ends-means  turing  fiction  increase-decrease  innovation  creative  thick-thin  spengler  multi  ratty  hanson  complex-systems  structure  concrete  abstraction  network-s 
september 2018 by nhaliday
Theory of Self-Reproducing Automata - John von Neumann
Fourth Lecture: THE ROLE OF HIGH AND OF EXTREMELY HIGH COMPLICATION

Comparisons between computing machines and the nervous systems. Estimates of size for computing machines, present and near future.

Estimates for size for the human central nervous system. Excursus about the “mixed” character of living organisms. Analog and digital elements. Observations about the “mixed” character of all componentry, artificial as well as natural. Interpretation of the position to be taken with respect to these.

Evaluation of the discrepancy in size between artificial and natural automata. Interpretation of this discrepancy in terms of physical factors. Nature of the materials used.

The probability of the presence of other intellectual factors. The role of complication and the theoretical penetration that it requires.

Questions of reliability and errors reconsidered. Probability of individual errors and length of procedure. Typical lengths of procedure for computing machines and for living organisms--that is, for artificial and for natural automata. Upper limits on acceptable probability of error in individual operations. Compensation by checking and self-correcting features.

Differences of principle in the way in which errors are dealt with in artificial and in natural automata. The “single error” principle in artificial automata. Crudeness of our approach in this case, due to the lack of adequate theory. More sophisticated treatment of this problem in natural automata: The role of the autonomy of parts. Connections between this autonomy and evolution.

- 10^10 neurons in brain, 10^4 vacuum tubes in largest computer at time
- machines faster: 5 ms from neuron potential to neuron potential, 10^-3 ms for vacuum tubes

https://en.wikipedia.org/wiki/John_von_Neumann#Computing
pdf  article  papers  essay  nibble  math  cs  computation  bio  neuro  neuro-nitgrit  scale  magnitude  comparison  acm  von-neumann  giants  thermo  phys-energy  speed  performance  time  density  frequency  hardware  ems  efficiency  dirty-hands  street-fighting  fermi  estimate  retention  physics  interdisciplinary  multi  wiki  links  people  🔬  atoms  duplication  iteration-recursion  turing  complexity  measure  nature  technology  complex-systems  bits  information-theory  circuits  robust  structure  composition-decomposition  evolution  mutation  axioms  analogy  thinking  input-output  hi-order-bits  coding-theory  flexibility  rigidity  automata-languages 
april 2018 by nhaliday
Moral Transposition – neocolonial
- Every morality inherently has a doctrine on that which is morally beneficial and that which is morally harmful.
- Under the traditional, absolute, eucivic moral code of Western Civilisation these were termed Good and Evil.
- Under the modern, relative, dyscivic moral code of Progressivism these are called Love and Hate.
- Good and Evil inherently reference the in-group, and seek its growth in absolute capability and glory.  Love and Hate inherently reference the out-group, and seek its relative growth in capability and privilege.
- These combinations form the basis of the Frame through which individuals aligned with those moralities view the world.  They are markedly distinct; although both Good serves the moral directive of absolutely strengthening the in-group and Hate counters the moral directive of relatively weakening the in-group, they do not map to one another. This failure to map, as well as the overloading of terms, is why it is generally (intentionally, perniciously) difficult to discern the differences between the two world views.

You Didn’t Join a Suicide Cult: http://www.righteousdominion.org/2018/04/13/you-didnt-join-a-suicide-cult/
“Thomas Aquinas discusses whether there is an order to charity. Must we love everyone in outward effects equally? Or do we demonstrate love more to our near neighbors than our distant neighbors? His answers: No to the first question, yes to the second.”

...

This is a perfect distillation of the shaming patriotic Christians with a sense of national identity face. It is a very Alinsky tactic whose fourth rule is “Make the enemy live up to their own book of rules. You can kill them with this, for they can no more obey their own rules than the Christian church can live up to Christianity.” It is a tactic that can be applied to any idealistic movement. Now to be fair, my friend is not a disciple of Alinsky, but we have been bathed in Alinsky for at least two generations. Reading the Gospels alone and in a vacuum one could be forgiven coming away with that interpretation of Christ’s teachings. Take for example Luke 6:27-30:

...

Love as Virtue and Vice
Thirdly, Love is a virtue, the greatest, but like all virtues it can be malformed with excessive zeal.

Aristotle taught that virtues were a proper balance of behavior or feeling in a specific sphere. For instance, the sphere of confidence and fear: a proper balance in this sphere would be the virtue of courage. A deficit in this sphere would be cowardice and an excess would be rashness or foolhardiness. We can apply this to the question of charity. Charity in the bible is typically a translation of the Greek word for love. We are taught by Jesus that second only to loving God we are to love our neighbor (which in the Greek means those near you). If we are to view the sphere of love in this context of excess and deficit what would it be?

Selfishness <---- LOVE ----> Enablement

Enablement here is meant in its very modern sense. If we possess this excess of love, we are so selfless and “others focused” that we prioritize the other above all else we value. The pathologies of the target of our enablement are not considered; indeed, in this state of enablement they are even desired. The saying “the squeaky wheel gets the grease” is recast as: “The squeaky wheel gets the grease, BUT if I have nothing squeaking in my life I’ll make sure to find or create something squeaky to “virtuously” burden myself with”.

Also, in this state of excessive love even those natural and healthy extensions of yourself must be sacrificed to the other. There was one mother I was acquainted with who embodied this excess of love. She had two biological children and anywhere from five to six very troubled adopted/foster kids at a time. She helped many kids out of terrible situations, but in turn her natural children were constantly subject to high levels of stress, drama, and constant babysitting of very troubled children. There was real resentment. In her efforts to help troubled foster children, she sacrificed the well-being of her biological children. Needless to say, her position on the refugee crisis was predictable.
gnon  politics  ideology  morality  language  universalism-particularism  tribalism  us-them  patho-altruism  altruism  thinking  religion  christianity  n-factor  civilization  nationalism-globalism  migration  theory-of-mind  ascetic  good-evil  sociality  love-hate  janus  multi  cynicism-idealism  kinship  duty  cohesion  charity  history  medieval  big-peeps  philosophy  egalitarianism-hierarchy  absolute-relative  measure  migrant-crisis  analytical-holistic  peace-violence  the-classics  self-interest  virtu  tails  convexity-curvature  equilibrium  free-riding  lexical 
march 2018 by nhaliday
'P' Versus 'Q': Differences and Commonalities between the Two Areas of Quantitative Finance by Attilio Meucci :: SSRN
There exist two separate branches of finance that require advanced quantitative techniques: the "Q" area of derivatives pricing, whose task is to "extrapolate the present"; and the "P" area of quantitative risk and portfolio management, whose task is to "model the future."

We briefly trace the history of these two branches of quantitative finance, highlighting their different goals and challenges. Then we provide an overview of their areas of intersection: the notion of risk premium; the stochastic processes used, often under different names and assumptions in the Q and in the P world; the numerical methods utilized to simulate those processes; hedging; and statistical arbitrage.
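As a toy illustration of the P/Q split (my own sketch, not from the paper; mu, r, sigma, and the strike K are made-up parameters): the same lognormal stock model is simulated with its real-world drift for a risk-management question and with the risk-free drift for a pricing question.

```python
# Minimal sketch: one process, two measures. Under P we "model the future" (real-world
# drift mu, used for risk/portfolio questions); under Q we "extrapolate the present"
# (risk-neutral drift r, used to price derivatives as discounted expectations).
import numpy as np

def simulate_gbm(s0, drift, sigma, T=1.0, steps=252, n_paths=10_000, seed=0):
    """Geometric Brownian motion paths via the exact log-normal step."""
    rng = np.random.default_rng(seed)
    dt = T / steps
    z = rng.standard_normal((n_paths, steps))
    log_increments = (drift - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
    return s0 * np.exp(np.cumsum(log_increments, axis=1))

s0, mu, r, sigma, K = 100.0, 0.08, 0.02, 0.20, 105.0   # illustrative values only

paths_P = simulate_gbm(s0, mu, sigma)   # physical dynamics
paths_Q = simulate_gbm(s0, r, sigma)    # pricing dynamics

# P-world question: distribution of one-year returns (e.g. a 5% VaR estimate).
returns_P = paths_P[:, -1] / s0 - 1
print("P: mean return %.3f, 5%% quantile %.3f" % (returns_P.mean(), np.quantile(returns_P, 0.05)))

# Q-world question: price of a European call as a discounted expectation under Q.
payoff = np.maximum(paths_Q[:, -1] - K, 0.0)
print("Q: call price %.3f" % (np.exp(-r * 1.0) * payoff.mean()))
```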
study  essay  survey  ORFE  finance  investing  probability  measure  stochastic-processes  outcome-risk 
december 2017 by nhaliday
light - Why doesn't the moon twinkle? - Astronomy Stack Exchange
As you mention, when light enters our atmosphere, it goes through several parcels of gas with varying density, temperature, pressure, and humidity. These differences make the refractive index of the parcels different, and since they move around (the scientific term for air moving around is "wind"), the light rays take slightly different paths through the atmosphere.

Stars are point sources
…the Moon is not
nibble  q-n-a  overflow  space  physics  trivia  cocktail  navigation  sky  visuo  illusion  measure  random  electromag  signal-noise  flux-stasis  explanation  explanans  magnitude  atmosphere  roots 
december 2017 by nhaliday
galaxy - How do astronomers estimate the total mass of dust in clouds and galaxies? - Astronomy Stack Exchange
Dust absorbs stellar light (primarily in the ultraviolet), and is heated up. Subsequently it cools by emitting infrared, "thermal" radiation. Assuming a dust composition and grain size distribution, the amount of emitted IR light per unit dust mass can be calculated as a function of temperature. Observing the object at several different IR wavelengths, a Planck curve can be fitted to the data points, yielding the dust temperature. The more UV light incident on the dust, the higher the temperature.

The result is somewhat sensitive to the assumptions, and thus the uncertainties are sometimes quite large. The more IR data points obtained, the better. If only one IR point is available, the temperature cannot be calculated. Then there's a degeneracy between incident UV light and the amount of dust, and the mass can only be estimated to within some orders of magnitude (I think).
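A minimal sketch of the fitting step described above (my own illustration, not from the answer): the bands are Herschel-like, the "observed" fluxes are synthetic, and the model is a bare Planck function; real analyses fit a modified blackbody, i.e. the Planck function times a frequency-dependent dust opacity.

```python
import numpy as np
from scipy.optimize import curve_fit

h, c, k_B = 6.626e-34, 3.0e8, 1.381e-23   # SI constants

def planck_nu(nu, T):
    """Planck function B_nu(T)."""
    return (2 * h * nu**3 / c**2) / np.expm1(h * nu / (k_B * T))

def model(nu, T, scale):
    """Optically thin emission ~ scale * B_nu(T); 'scale' absorbs mass, opacity, distance."""
    return scale * planck_nu(nu, T)

wavelengths_um = np.array([70.0, 160.0, 250.0, 350.0, 500.0])   # far-IR bands
nu = c / (wavelengths_um * 1e-6)

# Synthetic "data": generated from an assumed 25 K dust temperature plus 5% noise.
rng = np.random.default_rng(1)
T_true, scale_true = 25.0, 3.0e15
flux = model(nu, T_true, scale_true) * (1 + 0.05 * rng.standard_normal(nu.size))

(T_fit, scale_fit), _ = curve_fit(model, nu, flux, p0=(15.0, 1e15))
print("fitted dust temperature: %.1f K" % T_fit)
# With T in hand, the mass follows from the fitted scale, an assumed opacity kappa_nu,
# and the distance -- which is where the big uncertainties enter.
```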
nibble  q-n-a  overflow  space  measurement  measure  estimate  physics  electromag  visuo  methodology 
december 2017 by nhaliday
How do you measure the mass of a star? (Beginner) - Curious About Astronomy? Ask an Astronomer
Measuring the mass of stars in binary systems is easy. Binary systems are sets of two or more stars in orbit about each other. By measuring the size of the orbit, the stars' orbital speeds, and their orbital periods, we can determine exactly what the masses of the stars are. We can take that knowledge and then apply it to similar stars not in multiple systems.
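A minimal sketch of the binary-orbit arithmetic (standard Kepler's third law in solar-system units; the example orbit below is made up):

```python
# Kepler's third law with a in AU and P in years gives the total mass in solar masses:
#   (M1 + M2) / M_sun = a^3 / P^2
def total_mass_solar(a_au: float, period_years: float) -> float:
    """Total mass of a binary (solar masses) from the relative semi-major axis and period."""
    return a_au**3 / period_years**2

# e.g. a (hypothetical) binary with a 20 AU relative orbit and a 50-year period:
print(total_mass_solar(20.0, 50.0))   # ~3.2 solar masses for the pair
# The individual masses then follow from the ratio of each star's distance to the
# barycenter (equivalently, the ratio of orbital speeds): M1/M2 = a2/a1 = v2/v1.
```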

We also can easily measure the luminosity and temperature of any star. A plot of luminosity versus temperature for a set of stars is called a Hertzsprung–Russell (H-R) diagram, and it turns out that most stars lie along a thin band in this diagram known as the Main Sequence. Stars arrange themselves by mass on the Main Sequence, with massive stars being hotter and brighter than their small-mass brethren. If a star falls on the Main Sequence, we therefore immediately know its mass.

In addition to these methods, we also have an excellent understanding of how stars work. Our models of stellar structure are excellent predictors of the properties and evolution of stars. As it turns out, the mass of a star determines its life history from day 1, for all times thereafter, not only when the star is on the Main Sequence. So actually, the position of a star on the H-R diagram is a good indicator of its mass, regardless of whether it's on the Main Sequence or not.
nibble  q-n-a  org:junk  org:edu  popsci  space  physics  electromag  measurement  mechanics  gravity  cycles  oscillation  temperature  visuo  plots  correlation  metrics  explanation  measure  methodology 
december 2017 by nhaliday
Hyperbolic angle - Wikipedia
A unit circle x^2 + y^2 = 1 has a circular sector with an area half of the circular angle in radians. Analogously, a unit hyperbola x^2 - y^2 = 1 has a hyperbolic sector with an area half of the hyperbolic angle.
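For concreteness, the parallel being drawn (standard facts, stated in my own notation):

```latex
% circular angle:   point (\cos\theta, \sin\theta) on x^2 + y^2 = 1, sector area = \theta/2
% hyperbolic angle: point (\cosh u,   \sinh u)    on x^2 - y^2 = 1, sector area = u/2
\[
  (\cos\theta,\ \sin\theta)\in\{x^2+y^2=1\},\qquad \text{circular sector area}=\tfrac{\theta}{2},
\]
\[
  (\cosh u,\ \sinh u)\in\{x^2-y^2=1\},\qquad \text{hyperbolic sector area}=\tfrac{u}{2}.
\]
```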
nibble  math  trivia  wiki  reference  physics  relativity  concept  atoms  geometry  ground-up  characterization  measure  definition  plots  calculation  nitty-gritty  direction  metrics  manifolds 
november 2017 by nhaliday
Genetics: CHROMOSOMAL MAPS AND MAPPING FUNCTIONS
Any particular gene has a specific location (its "locus") on a particular chromosome. For any two genes (or loci) alpha and beta, we can ask "What is the recombination frequency between them?" If the genes are on different chromosomes, the answer is 50% (independent assortment). If the two genes are on the same chromosome, the recombination frequency will be somewhere in the range from 0 to 50%. The "map unit" (1 cM) is the genetic map distance that corresponds to a recombination frequency of 1%. In large chromosomes, the cumulative map distance may be much greater than 50cM, but the maximum recombination frequency is 50%. Why? In large chromosomes, there is enough length to allow for multiple cross-overs, so we have to ask what result we expect for random multiple cross-overs.

1. How is it that random multiple cross-overs give the same result as independent assortment?

Figure 5.12 shows how the various double cross-over possibilities add up, resulting in gamete genotype percentages that are indistinguishable from independent assortment (50% parental type, 50% non-parental type). This is a very important figure. It provides the explanation for why genes that are far apart on a very large chromosome sort out in crosses just as if they were on separate chromosomes.

2. Is there a way to measure how close together two crossovers can occur involving the same two chromatids? That is, how could we measure whether there is spatial "interference"?

Figure 5.13 shows how a measurement of the gamete frequencies resulting from a "three point cross" can answer this question. If we get a "lower than expected" occurrence of recombinant genotypes aCb and AcB, it suggests that there is some hindrance to the two cross-overs occurring this close together. Crosses of this type in Drosophila have shown that, in this organism, double cross-overs do not occur at distances of less than about 10 cM between the two cross-over sites. (Textbook, page 196.)

3. How does all of this lead to the "mapping function", the mathematical (graphical) relation between the observed recombination frequency (percent non-parental gametes) and the cumulative genetic distance in map units?

Figure 5.14 shows the result for the two extremes of "complete interference" and "no interference". The situation for real chromosomes in real organisms is somewhere between these extremes, such as the curve labelled "interference decreasing with distance".
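A concrete version of the two extremes in Figure 5.14 (standard formulas, not taken from the course page; the no-interference case is Haldane's mapping function, which treats crossovers as a Poisson process along the chromosome):

```latex
% d = map distance in Morgans (1 cM = 0.01 M), r = observed recombination frequency
\[
  \text{complete interference: } r=\min\!\left(d,\ \tfrac{1}{2}\right),
  \qquad
  \text{no interference (Haldane): } r=\tfrac{1}{2}\left(1-e^{-2d}\right).
\]
% Both give r \approx d for small d (so 1 cM \approx 1% recombination), and the
% no-interference curve saturates at 1/2 as d grows -- i.e., far-apart genes on the
% same chromosome behave like independent assortment, as discussed above.
```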
org:junk  org:edu  explanation  faq  nibble  genetics  genomics  bio  ground-up  magnitude  data  flux-stasis  homo-hetero  measure  orders  metric-space  limits  measurement 
october 2017 by nhaliday
Inferior Faunas | West Hunter
I mentioned South American paleontologists defending the honor of their extinct animals, and pointed  out how stupid that is. There are many similar cases: Jefferson vs Buffon on the wimpiness of North American mammals (as a reader pointed out),  biologists defending the prowess of marsupials in Australia (a losing proposition) , etc.

So, we need to establish the relative competitive abilities of different faunas and settle this, once and for all.

Basically, the smaller and more isolated, the less competitive.  Pretty much true for both plants and animals.

Islands do poorly. Not just dodos: Hawaiian species, for example, are generally losers: everything from outside is a threat.

something hidden: https://westhunt.wordpress.com/2014/12/01/something-hidden/
I’m wondering if any of the Meridiungulata lineages did survive, unnoticed because they’re passing for insectivores or rats or whatever, just as tenrecs and golden moles did. Obviously the big ones are extinct, probably the others as well, but until we’ve looked at the DNA of every little mammal in South America, the possibility exists.
west-hunter  scitariat  rant  discussion  ideas  nature  bio  archaeology  egalitarianism-hierarchy  absolute-relative  ranking  world  correlation  scale  oceans  geography  measure  network-structure  list  lol  speculation  latin-america  usa  convergence  multi 
october 2017 by nhaliday
The Downside of Baseball’s Data Revolution—Long Games, Less Action - WSJ
After years of ‘Moneyball’-style quantitative analysis, major-league teams are setting records for inactivity
news  org:rec  trends  sports  data-science  unintended-consequences  quantitative-qualitative  modernity  time  baseball  measure 
october 2017 by nhaliday
Power of a point - Wikipedia
The power of a point P (see Figure 1) can be defined equivalently as the product of distances from the point P to the two intersection points of any ray emanating from P.
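In symbols (a standard formula, my own restatement rather than a quote from the article):

```latex
% For a circle with center O and radius r, the power of a point P is
\[
  \Pi(P) = |PO|^{2} - r^{2},
\]
% and for any line through P meeting the circle at A and B, the product of signed
% distances satisfies \overline{PA}\cdot\overline{PB} = \Pi(P):
% positive when P is outside the circle, zero on it, negative inside.
```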
nibble  math  geometry  spatial  ground-up  concept  metrics  invariance  identity  atoms  wiki  reference  measure  yoga  calculation 
september 2017 by nhaliday
How & Why Solar Eclipses Happen | Solar Eclipse Across America - August 21, 2017
Cosmic Coincidence
The Sun’s diameter is about 400 times that of the Moon. The Sun is also (on average) about 400 times farther away. As a result, the two bodies appear almost exactly the same angular size in the sky — about ½°, roughly half the width of your pinky finger seen at arm's length. This truly remarkable coincidence is what gives us total solar eclipses. If the Moon were slightly smaller or orbited a little farther away from Earth, it would never completely cover the solar disk. If the Moon were a little larger or orbited a bit closer to Earth, it would block much of the solar corona during totality, and eclipses wouldn’t be nearly as spectacular.
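A quick check of the "about ½°" figure with round numbers (my own arithmetic; small-angle approximation, angular size ≈ diameter / distance):

```python
import math

sun_diameter_km,  sun_distance_km  = 1.391e6, 1.496e8   # ~1 AU
moon_diameter_km, moon_distance_km = 3.474e3, 3.844e5   # mean Earth-Moon distance

for name, d, D in [("Sun", sun_diameter_km, sun_distance_km),
                   ("Moon", moon_diameter_km, moon_distance_km)]:
    print(f"{name}: {math.degrees(d / D):.2f} degrees")
# Sun: ~0.53 deg; Moon: ~0.52 deg -- both about half a degree, hence the near-perfect
# fit of the lunar disk over the solar disk during a total eclipse.
```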

https://blogs.scientificamerican.com/life-unbounded/the-solar-eclipse-coincidence/
nibble  org:junk  org:edu  space  physics  mechanics  spatial  visuo  data  scale  measure  volo-avolo  earth  multi  news  org:mag  org:sci  popsci  sky  cycles  pro-rata  navigation  degrees-of-freedom 
august 2017 by nhaliday
How large is the Sun compared to Earth? | Cool Cosmos
Compared to Earth, the Sun is enormous! It contains 99.86% of all of the mass of the entire Solar System. The Sun is 864,400 miles (1,391,000 kilometers) across. This is about 109 times the diameter of Earth. The Sun weighs about 333,000 times as much as Earth. It is so large that about 1,300,000 planet Earths can fit inside of it. Earth is about the size of an average sunspot!
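Quick consistency check of the quoted figures (my own arithmetic):

```latex
% Volume scales as the cube of diameter:
\[
  \left(\frac{D_\odot}{D_\oplus}\right)^{3} \approx 109^{3} \approx 1.3\times 10^{6},
\]
% which matches the "about 1,300,000 Earths fit inside" figure
% (treating both bodies as spheres and ignoring packing).
```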
nibble  org:junk  space  physics  mechanics  gravity  earth  navigation  data  objektbuch  scale  spatial  measure  org:edu  popsci  pro-rata 
august 2017 by nhaliday
The Earth-Moon system
nice way of expressing Kepler's law (scaled by AU, solar mass, year, etc.) among other things
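The scaled form being referred to, plus a worked check on the Moon itself (standard numbers, my own arithmetic):

```latex
% With the period P in years, semi-major axis a in AU, and total mass M in solar masses,
\[
  P^{2} = \frac{a^{3}}{M}.
\]
% Applied to the Earth-Moon system
% (a \approx 384{,}400\ \mathrm{km} \approx 2.57\times10^{-3}\ \mathrm{AU},
%  M \approx 3.0\times10^{-6}\,M_\odot):
\[
  P \approx \sqrt{\frac{(2.57\times10^{-3})^{3}}{3.0\times10^{-6}}}\ \mathrm{yr}
    \approx 0.075\ \mathrm{yr} \approx 27\ \text{days},
\]
% which recovers the sidereal month.
```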

1. PHYSICAL PROPERTIES OF THE MOON
2. LUNAR PHASES
3. ECLIPSES
4. TIDES
nibble  org:junk  explanation  trivia  data  objektbuch  space  mechanics  spatial  visualization  earth  visual-understanding  navigation  experiment  measure  marginal  gravity  scale  physics  nitty-gritty  tidbits  identity  cycles  time  magnitude  street-fighting  calculation  oceans  pro-rata  rhythm  flux-stasis 
august 2017 by nhaliday
How to estimate distance using your finger | Outdoor Herbivore Blog
1. Hold your right arm out directly in front of you, elbow straight, thumb upright.
2. Align your thumb with one eye closed so that it covers (or aligns) the distant object. Point marked X in the drawing.
3. Do not move your head, arm or thumb, but switch eyes, so that your open eye is now closed and the other eye is open. Observe closely where the object now appears with the other open eye. Your thumb should appear to have moved to some other point: no longer in front of the object. This new point is marked as Y in the drawing.
4. Estimate this displacement XY, by equating it to the estimated size of something you are familiar with (height of tree, building width, length of a car, power line poles, distance between nearby objects). In this case, the distant barn is estimated to be 100′ wide. It appears 5 barn widths could fit this displacement, or 500 feet. Now multiply that figure by 10 (the ratio of the length of your arm to the distance between your eyes), and you get the distance between you and the thicket of blueberry bushes — 5000 feet away (about 1 mile).
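The arithmetic behind step 4 is just similar triangles: the apparent jump XY at the target relates to your eye separation the same way the target distance relates to your arm length, so distance ≈ XY × (arm length / eye separation) ≈ XY × 10. A minimal sketch (the arm length and eye separation below are rough, assumed values):

```python
def estimate_distance(apparent_shift_ft: float,
                      arm_length_in: float = 25.0,      # rough adult arm length
                      eye_separation_in: float = 2.5):  # rough interpupillary distance
    """Thumb-parallax range estimate: shift times (arm length / eye separation), ~10x."""
    return apparent_shift_ft * (arm_length_in / eye_separation_in)

# The blog's example: the shift looks like ~5 barn-widths of ~100 ft each.
print(estimate_distance(5 * 100))   # 5000 ft, about a mile
```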

- Basically uses parallax (similar triangles) with each eye.
- When they say to compare apparent shift to known distance, won't that scale with the unknown distance? The example uses width of an object at the point whose distance is being estimated.

per here: https://www.trails.com/how_26316_estimate-distances-outdoors.html
Select a distant object whose width can be accurately determined. For example, use a large rock outcropping. Estimate the width of the rock. Use 200 feet wide as an example here.
outdoors  human-bean  embodied  embodied-pack  visuo  spatial  measurement  lifehack  howto  navigation  prepping  survival  objektbuch  multi  measure  estimate 
august 2017 by nhaliday
Scanners Live in Vain | West Hunter
Of course, finding that the pattern already exists at the age of one month seriously weakens any idea that being poor shrinks the brain: most of the environmental effects you would consider haven’t even come into play in the first four weeks, when babies drink milk, sleep, and poop. Genetics affecting both parents and their children would make more sense, if the pattern shows up so early (and I’ll bet money that, if real,  it shows up well before one month);  but Martha Farah, and the reporter from Nature, Sara Reardon, ARE TOO FUCKING DUMB to realize this.

https://westhunt.wordpress.com/2015/03/31/scanners-live-in-vain/#comment-93791
Correlation between brain volume and IQ is about 0.4 . Shows up clearly in studies with sufficient power.

“poverty affects prenatal environment a lot.” No it does not. “poverty” in this country means having plenty to eat.

The Great IQ Depression: https://westhunt.wordpress.com/2014/03/07/the-great-iq-depression/
We hear that poverty can sap brainpower, reduce frontal lobe function, induce the fantods, etc. But exactly what do we mean by ‘poverty’? If we’re talking about an absolute, rather than relative, standard of living, most of the world today must be in poverty, as well as almost everyone who lived much before the present. Most Chinese are poorer than the official US poverty level, right? The US had fairly rapid economic growth until the last generation or so, so if you go very far back in time, almost everyone was poor, by modern standards. Even those who were considered rich at the time suffered from zero prenatal care, largely useless medicine, tabletless high schools, and slow Internet connections. They had to ride horses that had lousy acceleration and pooped all over the place.

In particular, if all this poverty-gives-you-emerods stuff is true, scholastic achievement should have collapsed in the Great Depression – and with the miracle of epigenetics, most of us should still be suffering those bad effects.

But somehow none of this seems to have gone through the formality of actually happening.
west-hunter  scitariat  commentary  study  org:nat  summary  rant  critique  neuro  neuro-nitgrit  brain-scan  iq  class  correlation  compensation  pop-diff  biodet  behavioral-gen  westminster  experiment  attaq  measure  multi  discussion  ideas  history  early-modern  pre-ww2  usa  gedanken  analogy  comparison  time  china  asia  world  developing-world  economics  growth-econ  medicine  healthcare  epigenetics  troll  aphorism  cycles  obesity  poast  nutrition  hypochondria  explanans 
august 2017 by nhaliday
Distribution of Word Lengths in Various Languages - Ravi Parikh's Website
Note that this visualization isn't normalized based on usage. For example the English word 'the' is used frequently, while the word 'lugubrious' is rarely used; however both words count the same in computing the histogram and average word lengths. A great idea for a follow-up would be to use language corpuses instead of word lists in order to build these histograms.
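A minimal sketch of that follow-up (my own illustration; `corpus` is a stand-in string, and a real run would read a large text corpus instead):

```python
from collections import Counter
import re

corpus = "the quick brown fox jumps over the lazy dog the end"   # hypothetical stand-in text

tokens = re.findall(r"[a-z]+", corpus.lower())
unweighted_words = set(tokens)        # one vote per distinct word, like a word list
counts = Counter(tokens)              # one vote per occurrence, like a corpus

unweighted_avg = sum(len(w) for w in unweighted_words) / len(unweighted_words)
weighted_avg = sum(len(w) * c for w, c in counts.items()) / sum(counts.values())

print(f"word-list average length:      {unweighted_avg:.2f}")
print(f"usage-weighted average length: {weighted_avg:.2f}")   # lower: short words like 'the' dominate
```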
techtariat  data  visualization  project  anglo  language  foreign-lang  distribution  expectancy  measure  lexical 
june 2017 by nhaliday
[1705.03394] That is not dead which can eternal lie: the aestivation hypothesis for resolving Fermi's paradox
If a civilization wants to maximize computation it appears rational to aestivate until the far future in order to exploit the low temperature environment: this can produce a 10^30 multiplier of achievable computation. We hence suggest the "aestivation hypothesis": the reason we are not observing manifestations of alien civilizations is that they are currently (mostly) inactive, patiently waiting for future cosmic eras. This paper analyzes the assumptions going into the hypothesis and how physical law and observational evidence constrain the motivations of aliens compatible with the hypothesis.

http://aleph.se/andart2/space/the-aestivation-hypothesis-popular-outline-and-faq/

simpler explanation (just different math for Drake equation):
Dissolving the Fermi Paradox: http://www.jodrellbank.manchester.ac.uk/media/eps/jodrell-bank-centre-for-astrophysics/news-and-events/2017/uksrn-slides/Anders-Sandberg---Dissolving-Fermi-Paradox-UKSRN.pdf
http://marginalrevolution.com/marginalrevolution/2017/07/fermi-paradox-resolved.html
Overall the argument is that point estimates should not be shoved into a Drake equation and then multiplied by each other, as that requires excess certainty and masks much of the ambiguity of our knowledge about the distributions. Instead, a Bayesian approach should be used, after which the fate of humanity looks much better. (The post reproduces one of the slides at this point.)
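A toy version of that contrast (my own made-up ranges, not the parameters or priors from the presentation): a lumped three-factor "Drake product" evaluated once with point estimates and once by Monte Carlo over wide log-uniform distributions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

def log_uniform(lo, hi, size):
    """Sample uniformly in log10-space between lo and hi."""
    return 10 ** rng.uniform(np.log10(lo), np.log10(hi), size)

# Hypothetical, orders-of-magnitude-wide ranges for lumped Drake-style factors.
R_star = log_uniform(1, 10, n)        # star formation rate (per year)
f_path = log_uniform(1e-8, 1e-1, n)   # lumped probability a star yields a communicating civilization
L_civ  = log_uniform(1e2, 1e8, n)     # civilization lifetime (years)

N = R_star * f_path * L_civ           # expected number of detectable civilizations

point_estimate = 5 * 1e-3 * 1e4       # plugging single "reasonable" values into the product
print("point estimate:", point_estimate)
print("median of distribution: %.3g" % np.median(N))
print("P(N < 1) = %.2f" % np.mean(N < 1))   # a non-trivial chance the galaxy is effectively empty
```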

Life Versus Dark Energy: How An Advanced Civilization Could Resist the Accelerating Expansion of the Universe: https://arxiv.org/abs/1806.05203
The presence of dark energy in our universe is causing space to expand at an accelerating rate. As a result, over the next approximately 100 billion years, all stars residing beyond the Local Group will fall beyond the cosmic horizon and become not only unobservable, but entirely inaccessible, thus limiting how much energy could one day be extracted from them. Here, we consider the likely response of a highly advanced civilization to this situation. In particular, we argue that in order to maximize its access to useable energy, a sufficiently advanced civilization would chose to expand rapidly outward, build Dyson Spheres or similar structures around encountered stars, and use the energy that is harnessed to accelerate those stars away from the approaching horizon and toward the center of the civilization. We find that such efforts will be most effective for stars with masses in the range of M∼(0.2−1)M⊙, and could lead to the harvesting of stars within a region extending out to several tens of Mpc in radius, potentially increasing the total amount of energy that is available to a future civilization by a factor of several thousand. We also discuss the observable signatures of a civilization elsewhere in the universe that is currently in this state of stellar harvesting.
preprint  study  essay  article  bostrom  ratty  anthropic  philosophy  space  xenobio  computation  physics  interdisciplinary  ideas  hmm  cocktail  temperature  thermo  information-theory  bits  🔬  threat-modeling  time  scale  insight  multi  commentary  liner-notes  pdf  slides  error  probability  ML-MAP-E  composition-decomposition  econotariat  marginal-rev  fermi  risk  org:mat  questions  paradox  intricacy  multiplicative  calculation  street-fighting  methodology  distribution  expectancy  moments  bayesian  priors-posteriors  nibble  measurement  existence  technology  geoengineering  magnitude  spatial  density  spreading  civilization  energy-resources  phys-energy  measure  direction  speculation  structure 
may 2017 by nhaliday
Pearson correlation coefficient - Wikipedia
https://en.wikipedia.org/wiki/Coefficient_of_determination
what does this mean?: https://twitter.com/GarettJones/status/863546692724858880
deleted but it was about the Pearson correlation distance: 1-r
it's not quite a metric (1-r can violate the triangle inequality), though sqrt(2(1-r)) is one: the L2 distance between the standardized variables

https://en.wikipedia.org/wiki/Explained_variation

http://infoproc.blogspot.com/2014/02/correlation-and-variance.html
A less misleading way to think about the correlation R is as follows: given X,Y from a standardized bivariate distribution with correlation R, an increase in X leads to an expected increase in Y: dY = R dX. In other words, students with +1 SD SAT score have, on average, roughly +0.4 SD college GPAs. Similarly, students with +1 SD college GPAs have on average +0.4 SAT.
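A quick simulation of that reading of R (my own sketch, using R = 0.4 to echo the SAT/GPA example):

```python
import numpy as np

rng = np.random.default_rng(0)
R, n = 0.4, 1_000_000

# Standardized bivariate normal with correlation R; both x and y are ~ N(0, 1).
x = rng.standard_normal(n)
y = R * x + np.sqrt(1 - R**2) * rng.standard_normal(n)

band = (x > 0.9) & (x < 1.1)            # cases near +1 SD on X
print(y[band].mean())                    # ~0.4: they average about +0.4 SD on Y

band_y = (y > 0.9) & (y < 1.1)           # and symmetrically in the other direction
print(x[band_y].mean())                  # also ~0.4
```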

this reminds me of the breeder's equation (but it uses r instead of h^2, so it can't actually be the same)

https://www.reddit.com/r/slatestarcodex/comments/631haf/on_the_commentariat_here_and_why_i_dont_think_i/dfx4e2s/
stats  science  hypothesis-testing  correlation  metrics  plots  regression  wiki  reference  nibble  methodology  multi  twitter  social  discussion  best-practices  econotariat  garett-jones  concept  conceptual-vocab  accuracy  causation  acm  matrix-factorization  todo  explanation  yoga  hsu  street-fighting  levers  🌞  2014  scitariat  variance-components  meta:prediction  biodet  s:**  mental-math  reddit  commentary  ssc  poast  gwern  data-science  metric-space  similarity  measure  dependence-independence 
may 2017 by nhaliday
What would count as an explanation of the size of China? - Marginal REVOLUTION
econotariat  marginal-rev  discussion  speculation  links  broad-econ  history  iron-age  medieval  early-modern  economics  growth-econ  divergence  political-econ  leviathan  incentives  geopolitics  world  asia  china  roots  coordination  decentralized  stylized-facts  government  institutions  cultural-dynamics  wealth-of-nations  homo-hetero  sinosphere  list  environment  agriculture  multi  twitter  social  commentary  turchin  big-picture  deep-materialism  pdf  cliometrics  scale  orient  chart  🌞  🎩  mediterranean  the-classics  comparison  conquest-empire  the-great-west-whale  europe  microfoundations  geography  explanans  occident  competition  anthropology  hari-seldon  piracy  study  pseudoE  war  taxes  demographics  population  density  monetary-fiscal  causation  gavisti  urban-rural  maps  data  visualization  frontier  civilization  peace-violence  time-series  walter-scheidel  article  polisci  n-factor  whole-partial-many  exit-voice  polis  number  pro-rata  flux-stasis  measure  india  MENA 
may 2017 by nhaliday
Riemannian manifold - Wikipedia
In differential geometry, a (smooth) Riemannian manifold or (smooth) Riemannian space (M, g) is a real smooth manifold M equipped with an inner product g_p on the tangent space T_pM at each point p that varies smoothly from point to point, in the sense that if X and Y are vector fields on M, then p ↦ g_p(X(p), Y(p)) is a smooth function. The family (g_p) of inner products is called a Riemannian metric (tensor). These terms are named after the German mathematician Bernhard Riemann. The study of Riemannian manifolds constitutes the subject called Riemannian geometry.

A Riemannian metric (tensor) makes it possible to define various geometric notions on a Riemannian manifold, such as angles, lengths of curves, areas (or volumes), curvature, gradients of functions and divergence of vector fields.
concept  definition  math  differential  geometry  manifolds  inner-product  norms  measure  nibble 
february 2017 by nhaliday
I've heard in the Middle Ages peasants weren't allowed to travel and that it was very difficult to travel in general. But what about pilgrimages then? Who participated in them and how did they overcome the difficulties of travel? : AskHistorians
How far from home did the average medieval person travel in a lifetime?: https://www.reddit.com/r/AskHistorians/comments/1a1egs/how_far_from_home_did_the_average_medieval_person/
What was it like to travel during the middle ages?: https://www.reddit.com/r/AskHistorians/comments/32n9ji/what_was_it_like_to_travel_during_the_middle_ages/
How expensive were medieval era inns relative to the cost of travel?: https://www.reddit.com/r/AskHistorians/comments/2j3a1m/how_expensive_were_medieval_era_inns_relative_to/
Logistics of Travel in Medieval Times: https://www.reddit.com/r/AskHistorians/comments/3fc8li/logistics_of_travel_in_medieval_times/
Were people of antiquity and the Middle Ages able to travel relatively freely?: https://www.reddit.com/r/AskHistorians/comments/wy3ir/were_people_of_antiquity_and_the_middle_ages_able/
How did someone such as Ibn Battuta (practically and logistically) travel, and keep travelling?: https://www.reddit.com/r/AskHistorians/comments/1nw9mg/how_did_someone_such_as_ibn_battuta_practically/
I'm a Norseman around the year 950 C.E. Could I have been born in Iceland, raided the shores of the Caspian Sea, and walked amongst the markets of Baghdad in my lifetime? How common was extreme long distance travel?: https://www.reddit.com/r/AskHistorians/comments/2gh52r/im_a_norseman_around_the_year_950_ce_could_i_have/
Lone (inter-continental) long-distance travelers in the Middle Ages?: https://www.reddit.com/r/AskHistorians/comments/1mrraq/lone_intercontinental_longdistance_travelers_in/
q-n-a  reddit  social  discussion  travel  europe  medieval  lived-experience  multi  money  iron-age  MENA  islam  china  asia  prepping  scale  measure  navigation  history  africa  people  feudal  logistics 
february 2017 by nhaliday
Pre-industrial travel would take weeks to get anywhere. What did people do during that time? : AskHistorians
How did travellers travel the world in the 16th century? Was there visas?: https://www.reddit.com/r/AskHistorians/comments/5659ig/how_did_travellers_travel_the_world_in_the_16th/
How far from home would a typical European in the 1600s travel in their life?: https://www.reddit.com/r/AskHistorians/comments/5gsgn7/how_far_from_home_would_a_typical_europeanin_the/
I just read an article about how I can travel across country for $213 on Amtrak. How much would the trip have cost me in, say, the mid-1800s: https://www.reddit.com/r/AskHistorians/comments/3poen3/i_just_read_an_article_about_how_i_can_travel/
Ridiculously subjective but I'm curious anyways: What traveling distance was considered beyond the hopes and even imagination of a common person during your specialty?: https://www.reddit.com/r/AskHistorians/comments/13zlsg/ridiculously_subjective_but_im_curious_anyways/
How fast could you travel across the U.S. in the 1800s?: https://www.mnn.com/green-tech/transportation/stories/how-fast-could-you-travel-across-the-us-in-the-1800s
What would be the earliest known example(s) of travel that could be thought of as "tourism"?: https://www.reddit.com/r/AskHistorians/comments/2uqxk9/what_would_be_the_earliest_known_examples_of/
https://twitter.com/conradhackett/status/944382041566654464
https://archive.is/9GWdK
This map shows travel time from London in 1881
q-n-a  reddit  social  discussion  history  europe  russia  early-modern  travel  lived-experience  multi  money  transportation  prepping  world  antiquity  iron-age  medieval  MENA  islam  comparison  mediterranean  usa  trivia  magnitude  scale  pre-ww2  navigation  measure  data  visualization  maps  feudal  twitter  pic  backup  journos-pundits 
february 2017 by nhaliday
Mixing (mathematics) - Wikipedia
One way to describe this is that strong mixing implies that for any two possible states of the system (realizations of the random variable), when given a sufficient amount of time between the two states, the occurrence of the states is independent.

The (strong, or α-) mixing coefficient is
α(n) = sup{|P(A ∩ B) - P(A)P(B)| : A in σ(X_0, ..., X_{t-1}), B in σ(X_{t+n}, ...), t >= 0}
for σ(...) the sigma algebra generated by those r.v.s.

So it's a notion of distance between the true joint distribution and the product distribution — a restricted (weaker) cousin of total variation distance, since the sup runs only over single pairs of past/future events.
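Stated as a condition (standard definition, my own notation), the process is strongly mixing when

```latex
\[
  \alpha(n) \;=\; \sup_{t\ge 0}\;
    \sup_{\substack{A\in\sigma(X_0,\dots,X_{t-1})\\ B\in\sigma(X_{t+n},\dots)}}
    \bigl|\,P(A\cap B)-P(A)\,P(B)\,\bigr|
  \;\xrightarrow[\;n\to\infty\;]{}\; 0 .
\]
% Sanity check: for an i.i.d. sequence the past and future sigma-algebras are
% independent, so \alpha(n) = 0 for every n \ge 1.
```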
concept  math  acm  physics  probability  stochastic-processes  definition  mixing  iidness  wiki  reference  nibble  limits  ergodic  math.DS  measure  dependence-independence 
february 2017 by nhaliday