
The Existential Risk of Math Errors - Gwern.net
How big is this upper bound? Mathematicians have often made errors in proofs, but it's rarer for ideas to be accepted for a long time and then rejected. We can divide errors into two basic cases, corresponding to type I and type II errors:

1. Mistakes where the theorem is still true, but the proof was incorrect (type I)
2. Mistakes where the theorem was false, and the proof was also necessarily incorrect (type II)

Before someone comes up with a final answer, a mathematician may have many levels of intuition in formulating & working on the problem, but we’ll consider the final end-product where the mathematician feels satisfied that he has solved it. Case 1 is perhaps the most common, with innumerable examples; sometimes this is due to mistakes in the proof that anyone would accept as a mistake, but many of these cases are due to changing standards of proof. For example, when David Hilbert discovered errors in Euclid’s proofs which no one had noticed before, the theorems were still true, and the gaps were due more to Hilbert being a modern mathematician thinking in terms of formal systems (which of course Euclid did not think in). (Hilbert himself turns out to be a useful example of the other kind of error: his famous list of 23 problems was accompanied by definite opinions on the outcome of each problem and sometimes timings, several of which were wrong or questionable.) Similarly, early calculus used ‘infinitesimals’ which were sometimes treated as being 0 and sometimes treated as an indefinitely small non-zero number; this was incoherent, and strictly speaking practically all of the calculus results were wrong because they relied on an incoherent concept. But of course the results were some of the greatest mathematical work ever conducted, and when later mathematicians put calculus on a more rigorous footing, they immediately re-derived those results (sometimes with important qualifications); doubtless as modern math evolves, other fields have sometimes needed to go back and clean up the foundations, and will need to in the future.

...

Isaac Newton, incidentally, gave two proofs of the same solution to a problem in probability, one via enumeration and the other more abstract; the enumeration was correct, but the other proof was totally wrong, and this was not noticed for a long time, leading Stigler to remark:

...

TYPE I > TYPE II?
“Lefschetz was a purely intuitive mathematician. It was said of him that he had never given a completely correct proof, but had never made a wrong guess either.”
- Gian-Carlo Rota

Case 2 is disturbing, since it is a case in which we wind up with false beliefs and also false beliefs about our beliefs (we no longer know that we don’t know). Case 2 could lead to extinction.

...

Except, errors do not seem to be evenly & randomly distributed between case 1 and case 2. There seem to be far more case 1s than case 2s, as already mentioned in the early calculus example: far more than 50% of the early calculus results were correct when checked more rigorously. Richard Hamming attributes to Ralph Boas a comment, from his time editing Mathematical Reviews, that “of the new results in the papers reviewed most are true but the corresponding proofs are perhaps half the time plain wrong”.

...

Gian-Carlo Rota gives us an example with Hilbert:

...

Olga labored for three years; it turned out that all mistakes could be corrected without any major changes in the statement of the theorems. There was one exception, a paper Hilbert wrote in his old age, which could not be fixed; it was a purported proof of the continuum hypothesis; you will find it in a volume of the Mathematische Annalen of the early thirties.
ratty  gwern  analysis  essay  realness  truth  correctness  reason  philosophy  math  proofs  formal-methods  cs  programming  engineering  worse-is-better/the-right-thing  intuition  giants  old-anglo  error  street-fighting  heuristic  zooming  risk  threat-modeling  software  lens  logic  inference  physics  differential  geometry  estimate  distribution  robust  speculation  nonlinearity  cost-benefit  convexity-curvature  measure  scale  trivia  cocktail  history  early-modern  europe  math.CA  rigor 
12 days ago by nhaliday
Computer latency: 1977-2017
If we look at overall results, the fastest machines are ancient. Newer machines are all over the place. Fancy gaming rigs with unusually high refresh-rate displays are almost competitive with machines from the late 70s and early 80s, but “normal” modern computers can’t compete with thirty-to-forty-year-old machines.

...

If we exclude the game boy color, which is a different class of device than the rest, all of the quickest devices are Apple phones or tablets. The next quickest device is the blackberry q10. Although we don’t have enough data to really tell why the blackberry q10 is unusually quick for a non-Apple device, one plausible guess is that it’s helped by having actual buttons, which are easier to implement with low latency than a touchscreen. The other two devices with actual buttons are the gameboy color and the kindle 4.

After the iphones and non-kindle button devices, we have a variety of Android devices of various ages. At the bottom, we have the ancient palm pilot 1000, followed by the kindles. The palm is hamstrung by a touchscreen and display created in an era with much slower touchscreen technology, and the kindles use e-ink displays, which are much slower than the displays used on modern phones, so it’s not surprising to see those devices at the bottom.

...

Almost every computer and mobile device that people buy today is slower than common models of computers from the 70s and 80s. Low-latency gaming desktops and the ipad pro can get into the same range as quick machines from thirty to forty years ago, but most off-the-shelf devices aren’t even close.

If we had to pick one root cause of latency bloat, we might say that it’s because of “complexity”. Of course, we all know that complexity is bad. If you’ve been to a non-academic non-enterprise tech conference in the past decade, there’s a good chance that there was at least one talk on how complexity is the root of all evil and we should aspire to reduce complexity.

Unfortunately, it's a lot harder to remove complexity than to give a talk saying that we should remove complexity. A lot of the complexity buys us something, either directly or indirectly. When we looked at the input of a fancy modern keyboard vs. the apple 2 keyboard, we saw that using a relatively powerful and expensive general purpose processor to handle keyboard inputs can be slower than dedicated logic for the keyboard, which would be both simpler and cheaper. However, using the processor gives people the ability to easily customize the keyboard, and also pushes the problem of “programming” the keyboard from hardware into software, which reduces the cost of making the keyboard. The more expensive chip increases the manufacturing cost, but considering how much of the cost of these small-batch artisanal keyboards is the design cost, it seems like a net win to trade manufacturing cost for ease of programming.

...

If you want a reference to compare the kindle against, a moderately quick page turn in a physical book appears to be about 200 ms.
techtariat  dan-luu  performance  time  hardware  consumerism  objektbuch  data  history  reflection  critique  software  roots  tainter  engineering  nitty-gritty  ui  ux  hci  ios  mobile  apple  amazon  sequential  trends  increase-decrease  measure  analysis  measurement  os  systems  IEEE  intricacy  desktop  benchmarks  rant  carmack  system-design  degrees-of-freedom  keyboard  terminal  editors  links  input-output  networking  world  s:** 
18 days ago by nhaliday
c++ - Which is faster: Stack allocation or Heap allocation - Stack Overflow
On my machine, using g++ 3.4.4 on Windows, I get "0 clock ticks" for both stack and heap allocation for anything less than 100000 allocations, and even then I get "0 clock ticks" for stack allocation and "15 clock ticks" for heap allocation. When I measure 10,000,000 allocations, stack allocation takes 31 clock ticks and heap allocation takes 1562 clock ticks.

so maybe around 50-100x difference (31 vs 1562 clock ticks is ~50x)? what does that work out to in terms of total workload?
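
a minimal sketch of this sort of micro-benchmark, if you want to reproduce it (numbers vary with compiler and flags, and an optimizer may elide the stack loop entirely unless the result is used, hence the volatile sink):

#include <chrono>
#include <cstdio>

int main() {
    constexpr int N = 10'000'000;
    using clk = std::chrono::steady_clock;
    volatile int sink = 0;  // defeat dead-code elimination

    // stack allocation: just a stack-pointer bump per iteration
    auto t0 = clk::now();
    for (int i = 0; i < N; ++i) {
        char buf[64];
        buf[0] = static_cast<char>(i);
        sink = sink + buf[0];
    }
    auto t1 = clk::now();

    // heap allocation: one allocator round-trip per iteration
    for (int i = 0; i < N; ++i) {
        char* buf = new char[64];
        buf[0] = static_cast<char>(i);
        sink = sink + buf[0];
        delete[] buf;
    }
    auto t2 = clk::now();

    using ms = std::chrono::milliseconds;
    std::printf("stack: %lld ms, heap: %lld ms\n",
                static_cast<long long>(std::chrono::duration_cast<ms>(t1 - t0).count()),
                static_cast<long long>(std::chrono::duration_cast<ms>(t2 - t1).count()));
}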

hmm:
http://vlsiarch.eecs.harvard.edu/wp-content/uploads/2017/02/asplos17mallacc.pdf
Recent work shows that dynamic memory allocation consumes nearly 7% of all cycles in Google datacenters.

That's not too bad actually. Seems like I shouldn't worry about shifting from heap to stack/globals unless profiling says it's important, particularly for non-oly stuff.

edit: actually, a ~50-100x factor on 7% of all cycles is pretty high; allocation-heavy code could see its constant factor increase by almost an order of magnitude.
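
one way to sanity-check that with Amdahl-style arithmetic (taking the ~7% figure and a per-allocation cost ratio s at face value): overall slowdown vs. an ideal stack-allocating version is

1 / ((1 - f) + f/s)

where f is the fraction of cycles spent allocating. f = 0.07, s ~ 50 gives only ~1.07x, so the datacenter-wide stakes are modest; the slowdown only approaches 10x when allocation dominates (f ~ 0.9), which is the allocation-heavy worst case.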
q-n-a  stackex  programming  c(pp)  systems  memory-management  performance  intricacy  comparison  benchmarks  data  objektbuch  empirical  google  papers  nibble  time  measure  pro-rata  distribution  multi  pdf  oly-programming  computer-memory 
26 days ago by nhaliday
C++ Core Guidelines
This document is a set of guidelines for using C++ well. The aim of this document is to help people to use modern C++ effectively. By “modern C++” we mean effective use of the ISO C++ standard (currently C++17, but almost all of our recommendations also apply to C++14 and C++11). In other words, what would you like your code to look like in 5 years’ time, given that you can start now? In 10 years’ time?

https://isocpp.github.io/CppCoreGuidelines/
“Within C++ is a smaller, simpler, safer language struggling to get out.” – Bjarne Stroustrup

...

The guidelines are focused on relatively higher-level issues, such as interfaces, resource management, memory management, and concurrency. Such rules affect application architecture and library design. Following the rules will lead to code that is statically type safe, has no resource leaks, and catches many more programming logic errors than is common in code today. And it will run fast - you can afford to do things right.

We are less concerned with low-level issues, such as naming conventions and indentation style. However, no topic that can help a programmer is out of bounds.

Our initial set of rules emphasizes safety (of various forms) and simplicity. They may very well be too strict. We expect to have to introduce more exceptions to better accommodate real-world needs. We also need more rules.

...

The rules are designed to be supported by an analysis tool. Violations of rules will be flagged with references (or links) to the relevant rule. We do not expect you to memorize all the rules before trying to write code.
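A minimal sketch of the kind of code such a tool flags vs. accepts, in the spirit of the resource-management rules (R.1: manage resources automatically via RAII; R.11: avoid calling new and delete explicitly; rule numbers from the guidelines, example mine):

#include <memory>

struct Widget { /* ... */ };

// flagged: raw owning pointer with a manual delete; any early return
// or exception between new and delete leaks the Widget
void risky() {
    Widget* w = new Widget;
    // ... use *w ...
    delete w;
}

// conforming: an owning smart pointer destroys the Widget on every
// exit path, so no leak is possible
void safe() {
    auto w = std::make_unique<Widget>();
    // ... use *w ...
}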

contrary:
https://aras-p.info/blog/2018/12/28/Modern-C-Lamentations/
This will be a long wall of text, and kinda random! My main points are:
1. C++ compile times are important,
2. Non-optimized build performance is important,
3. Cognitive load is important. I don’t expand much on this here, but if a programming language or a library makes me feel stupid, then I’m less likely to use it or like it. C++ does that a lot :)
programming  engineering  pls  best-practices  systems  c(pp)  guide  metabuch  objektbuch  reference  cheatsheet  elegance  frontier  libraries  intricacy  advanced  advice  recommendations  big-picture  novelty  lens  philosophy  state  error  types  concurrency  memory-management  performance  abstraction  plt  compilers  expert-experience  multi  checking  devtools  flux-stasis  safety  system-design  techtariat  time  measure  dotnet  comparison  examples  build-packaging  thinking  worse-is-better/the-right-thing  cost-benefit  tradeoffs  essay  commentary  oop  correctness  computer-memory  error-handling  resources-effects 
4 weeks ago by nhaliday
c++ - Why is size_t unsigned? - Stack Overflow
size_t is unsigned for historical reasons.

On an architecture with 16-bit pointers, such as the "small" model in DOS programming, it would be impractical to limit strings to 32 KB.

For this reason, the C standard requires (via required ranges) ptrdiff_t, the signed counterpart to size_t and the result type of pointer difference, to be effectively 17 bits.

Those reasons can still apply in parts of the embedded programming world.

However, they do not apply to modern 32-bit or 64-bit programming, where a much more important consideration is that the unfortunate implicit conversion rules of C and C++ make unsigned types into bug attractors, when they're used for numbers (and hence, arithmetical operations and magnitude comparisons). With 20-20 hindsight we can now see that the decision to adopt those particular conversion rules, where e.g. string( "Hi" ).length() < -3 is practically guaranteed, was rather silly and impractical. However, that decision means that in modern programming, adopting unsigned types for numbers has severe disadvantages and no advantages – except for satisfying the feelings of those who find unsigned to be a self-descriptive type name, and fail to think of typedef int MyType.

Summing up, it was not a mistake. It was a decision for then very rational, practical programming reasons. It had nothing to do with transferring expectations from bounds-checked languages like Pascal to C++ (which is a fallacy, but a very very common one, even if some of those who do it have never heard of Pascal).
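
a minimal demonstration of the bug attractor (this compiles, with at most a sign-compare warning):

#include <iostream>
#include <string>

int main() {
    std::string s = "Hi";
    // the int -3 is implicitly converted to the unsigned size_t,
    // wrapping to a huge value (2^64 - 3 on a typical 64-bit platform),
    // so the comparison is true
    std::cout << (s.length() < -3) << "\n";  // prints 1

    // the same conversion rule makes this classic reverse loop spin
    // forever, since an unsigned i is always >= 0:
    // for (size_t i = s.length() - 1; i >= 0; --i) { ... }
}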
q-n-a  stackex  c(pp)  systems  embedded  hardware  measure  types  signum  gotchas  roots  explanans  pls  programming 
4 weeks ago by nhaliday
How Many Keystrokes Programers Type a Day?
I was quite surprised how low my own figure is. But thinking about it… it makes sense. Even though we sit in front of a computer all day, the actual typing is a small percentage of that. Most of the time, you have lunch, run errands, browse the web, read docs, chat on the phone, run to the bathroom. Perhaps only half of your work time is active coding or writing email/docs. Of that duration, perhaps the majority of the time you are digesting the info on screen.
techtariat  convexity-curvature  measure  keyboard  time  cost-benefit  data  time-use  workflow  efficiency  prioritizing  editors 
5 weeks ago by nhaliday
Lindy effect - Wikipedia
The Lindy effect is a theory that the future life expectancy of some non-perishable things like a technology or an idea is proportional to their current age, so that every additional period of survival implies a longer remaining life expectancy.[1] Where the Lindy effect applies, mortality rate decreases with time. In contrast, living creatures and mechanical things follow a bathtub curve where, after "childhood", the mortality rate increases with time. Because life expectancy is probabilistically derived, a thing may become extinct before its "expected" survival. In other words, one needs to gauge both the age and "health" of the thing to determine continued survival.
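
The quantitative version: a Pareto (power-law) survival curve is exactly the distribution with this property. For minimum age m and tail exponent alpha > 1,

Pr(T > t) = (m/t)^alpha, for t >= m
E[T - t | T > t] = t / (alpha - 1)

so expected remaining lifetime grows linearly with current age t; an exponential (memoryless) distribution would give a constant, and a bathtub curve a decreasing one.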
wiki  reference  concept  metabuch  ideas  street-fighting  planning  comparison  time  distribution  flux-stasis  history  measure  correlation  arrows  branches  pro-rata  manifolds  aging  stylized-facts  age-generation  robust  technology  thinking  cost-benefit  conceptual-vocab  methodology  threat-modeling  efficiency  neurons  tools  track-record 
5 weeks ago by nhaliday
Football Still Americans' Favorite Sport to Watch
37% say football is their favorite sport to watch, by far the most for any sport
Baseball is at its lowest point ever, with only 9% saying it is their favorite
Football has slipped in popularity from its peak of 43% in 2006 and 2007

WASHINGTON, D.C. -- American football, under attack from critics in recent years, has lost some of its popularity but is still the champion of U.S. spectator sports -- picked by 37% of U.S. adults as their favorite sport to watch. The next-most-popular sports are basketball, favored by 11%, and baseball, favored by 9%.

http://www.businessinsider.com/popularity-nfl-mlb-nba-2015-2
news  org:data  data  time-series  history  mostly-modern  poll  measure  usa  scale  sports  vulgar  trivia  org:biz  multi  comparison  ranking  human-bean  ubiquity 
5 weeks ago by nhaliday
Analysis of Current and Future Computer Science Needs via Advertised Faculty Searches for 2019 - CRN
Differences are also seen when analyzing results based on the type of institution. Positions related to Security have the highest percentages for all but top-100 institutions. The area of Artificial Intelligence/Data Mining/Machine Learning is of most interest for top-100 PhD institutions. Roughly 35% of positions for PhD institutions are in data-oriented areas. The results show a strong interest in data-oriented areas by public PhD and private PhD, MS, and BS institutions while public MS and BS institutions are most interested in Security.
org:edu  data  analysis  visualization  trends  recruiting  jobs  career  planning  academia  higher-ed  cs  tcs  machine-learning  systems  pro-rata  measure  long-term  🎓  uncertainty  progression  grad-school  phd  distribution  ranking  top-n  security  status  s-factor  comparison  homo-hetero  correlation  org:ngo  white-paper 
6 weeks ago by nhaliday
quality - Is the average number of bugs per loc the same for different programming languages? - Software Engineering Stack Exchange
Contrary to intuition, the number of errors per 1000 lines of code does seem to be relatively constant, regardless of the specific language involved. Steve McConnell, author of Code Complete and Software Estimation: Demystifying the Black Art, goes over this area in some detail.

I don't have my copies readily to hand - they're sitting on my bookshelf at work - but a quick Google found a relevant quote:

Industry Average: "about 15 - 50 errors per 1000 lines of delivered code."
(Steve) further says this is usually representative of code that has some level of structured programming behind it, but probably includes a mix of coding techniques.

Quoted from Code Complete, found here: http://mayerdan.com/ruby/2012/11/11/bugs-per-line-of-code-ratio/

If memory serves correctly, Steve goes into a thorough discussion of this, showing that the figures are constant across languages (C, C++, Java, Assembly and so on) and despite difficulties (such as defining what "line of code" means).

Most importantly he has lots of citations for his sources - he's not offering unsubstantiated opinions, but has the references to back them up.

[ed.: I think this is delivered code? So after testing, debugging, etc. I'm more interested in the metric for the moment after you've gotten something to compile.]
q-n-a  stackex  programming  engineering  nitty-gritty  error  flux-stasis  books  recommendations  software  checking  debugging  pro-rata  pls  comparison  parsimony  measure  data  objektbuch  speculation  accuracy  density  correctness  estimate  street-fighting 
april 2019 by nhaliday
A cross-language perspective on speech information rate
Figure 2.

English (IR_EN = 1.08) shows a higher Information Rate than Vietnamese (IR_VI = 1). On the contrary, Japanese exhibits the lowest IR_L value of the sample. Moreover, one can observe that several languages may reach very close IR_L with different encoding strategies: Spanish is characterized by a fast rate of low-density syllables while Mandarin exhibits a 34% slower syllabic rate with syllables ‘denser’ by a factor of 49%. Finally, their Information Rates differ only by 4%.

Is spoken English more efficient than other languages?: https://linguistics.stackexchange.com/questions/2550/is-spoken-english-more-efficient-than-other-languages
As a translator, I can assure you that English is no more efficient than other languages.
--
[some comments on a different answer:]
Russian, when spoken, is somewhat less efficient than English, and that is for sure. No one who has ever worked as an interpreter can deny it. You can convey somewhat more information in English than in Russian within an hour. The English language is not constrained by the rigid case and gender systems of the Russian language, which somewhat reduce the information density of the Russian language. The rules of the Russian language force the speaker to incorporate sometimes unnecessary details in his speech, which can be problematic for interpreters – user74809 Nov 12 '18 at 12:48
But in writing, though, I do think that Russian is somewhat superior. However, when it comes to common daily speech, I do not think that anyone can claim that English is less efficient than Russian. As a matter of fact, I also find Russian to be somewhat more mentally taxing than English when interpreting. I mean, anyone who has lived in the world of Russian and then moved to the world of English is certain to notice that English is somewhat more efficient in everyday life. It is not a night-and-day difference, but it is certainly noticeable. – user74809 Nov 12 '18 at 13:01
...
By the way, I am not knocking Russian. I love Russian, it is my mother tongue and the only language, in which I sound like a native speaker. I mean, I still have a pretty thick Russian accent. I am not losing it anytime soon, if ever. But like I said, living in both worlds, the Moscow world and the Washington D.C. world, I do notice that English is objectively more efficient, even if I am myself not as efficient in it as most other people. – user74809 Nov 12 '18 at 13:40

Do most languages need more space than English?: https://english.stackexchange.com/questions/2998/do-most-languages-need-more-space-than-english
Speaking as a translator, I can share a few rules of thumb that are popular in our profession:
- Hebrew texts are usually shorter than their English equivalents by approximately 1/3. To a large extent, that can be attributed to cheating, what with no vowels and all.
- Spanish, Portuguese and French (I guess we can just settle on Romance) texts are longer than their English counterparts by about 1/5 to 1/4.
- Scandinavian languages are pretty much on par with English. Swedish is a tiny bit more compact.
- Whether or not Russian (and by extension, Ukrainian and Belorussian) is more compact than English is subject to heated debate, and if you ask five people, you'll be presented with six different opinions. However, everybody seems to agree that the difference is just a couple percent, be it this way or the other.

--

A point of reference from the website I maintain. The files where we store the translations have the following sizes:

English: 200k
Portuguese: 208k
Spanish: 209k
German: 219k
And the translations are out of date. That is, there are strings in the English file that aren't yet in the other files.

For Chinese, the situation is a bit different because the character encoding comes into play. Chinese text will have shorter strings, because most words are one or two characters, but each character takes 3–4 bytes (for UTF-8 encoding), so each word is 3–12 bytes long on average. So visually the text takes less space but in terms of the information exchanged it uses more space. This Language Log post suggests that if you account for the encoding and remove redundancy in the data using compression you find that English is slightly more efficient than Chinese.

Is English more efficient than Chinese after all?: https://languagelog.ldc.upenn.edu/nll/?p=93
[Executive summary: Who knows?]

This follows up on a series of earlier posts about the comparative efficiency — in terms of text size — of different languages ("One world, how many bytes?", 8/5/2005; "Comparing communication efficiency across languages", 4/4/2008; "Mailbag: comparative communication efficiency", 4/5/2008). Hinrich Schütze wrote:
pdf  study  language  foreign-lang  linguistics  pro-rata  bits  communication  efficiency  density  anglo  japan  asia  china  mediterranean  data  multi  comparison  writing  meta:reading  measure  compression  empirical  evidence-based  experiment  analysis  chart  trivia  cocktail 
february 2019 by nhaliday
Lateralization of brain function - Wikipedia
Language
Language functions such as grammar, vocabulary and literal meaning are typically lateralized to the left hemisphere, especially in right handed individuals.[3] While language production is left-lateralized in up to 90% of right-handers, it is more bilateral, or even right-lateralized, in approximately 50% of left-handers.[4]

Broca's area and Wernicke's area, two areas associated with the production of speech, are located in the left cerebral hemisphere for about 95% of right-handers, but about 70% of left-handers.[5]:69

Auditory and visual processing
The processing of visual and auditory stimuli, spatial manipulation, facial perception, and artistic ability are represented bilaterally.[4] Numerical estimation, comparison and online calculation depend on bilateral parietal regions[6][7] while exact calculation and fact retrieval are associated with left parietal regions, perhaps due to their ties to linguistic processing.[6][7]

...

Depression is linked with a hyperactive right hemisphere, with evidence of selective involvement in "processing negative emotions, pessimistic thoughts and unconstructive thinking styles", as well as vigilance, arousal and self-reflection, and a relatively hypoactive left hemisphere, "specifically involved in processing pleasurable experiences" and "relatively more involved in decision-making processes".

Chaos and Order; the right and left hemispheres: https://orthosphere.wordpress.com/2018/05/23/chaos-and-order-the-right-and-left-hemispheres/
In The Master and His Emissary, Iain McGilchrist writes that a creature like a bird needs two types of consciousness simultaneously. It needs to be able to focus on something specific, such as pecking at food, while it also needs to keep an eye out for predators which requires a more general awareness of environment.

These are quite different activities. The Left Hemisphere (LH) is adapted for a narrow focus. The Right Hemisphere (RH) for the broad. The brains of human beings have the same division of function.

The LH governs the right side of the body, the RH, the left side. With birds, the left eye (RH) looks for predators, the right eye (LH) focuses on food and specifics. Since danger can take many forms and is unpredictable, the RH has to be very open-minded.

The LH is for narrow focus, the explicit, the familiar, the literal, tools, mechanism/machines and the man-made. The broad focus of the RH is necessarily more vague and intuitive and handles the anomalous, novel, metaphorical, the living and organic. The LH is high resolution but narrow, the RH low resolution but broad.

The LH exhibits unrealistic optimism and self-belief. The RH has a tendency towards depression and is much more realistic about a person’s own abilities. LH has trouble following narratives because it has a poor sense of “wholes.” In art it favors flatness, abstract and conceptual art, black and white rather than color, simple geometric shapes and multiple perspectives all shoved together, e.g., cubism. Particularly RH paintings emphasize vistas with great depth of field and thus space and time,[1] emotion, figurative painting and scenes related to the life world. In music, LH likes simple, repetitive rhythms. The RH favors melody, harmony and complex rhythms.

...

Schizophrenia is a disease of extreme LH emphasis. Since empathy is RH and the ability to notice emotional nuance facially, vocally and bodily expressed, schizophrenics tend to be paranoid and are often convinced that the real people they know have been replaced by robotic imposters. This is at least partly because they lose the ability to intuit what other people are thinking and feeling – hence they seem robotic and suspicious.

Oswald Spengler’s The Decline of the West as well as McGilchrist characterize the West as awash in phenomena associated with an extreme LH emphasis. Spengler argues that Western civilization was originally much more RH (to use McGilchrist’s categories) and that all its most significant artistic (in the broadest sense) achievements were triumphs of RH accentuation.

The RH is where novel experiences and the anomalous are processed and where mathematical, and other, problems are solved. The RH is involved with the natural, the unfamiliar, the unique, emotions, the embodied, music, humor, understanding intonation and emotional nuance of speech, the metaphorical, nuance, and social relations. It has very little speech, but the RH is necessary for processing all the nonlinguistic aspects of speaking, including body language. Understanding what someone means by vocal inflection and facial expressions is an intuitive RH process rather than explicit.

...

RH is very much the center of lived experience; of the life world with all its depth and richness. The RH is “the master” from the title of McGilchrist’s book. The LH ought to be no more than the emissary; the valued servant of the RH. However, in the last few centuries, the LH, which has tyrannical tendencies, has tried to become the master. The LH is where the ego is predominantly located. In split brain patients where the LH and the RH are surgically divided (this is done sometimes in the case of epileptic patients) one hand will sometimes fight with the other. In one man’s case, one hand would reach out to hug his wife while the other pushed her away. One hand reached for one shirt, the other another shirt. Or a patient will be driving a car and one hand will try to turn the steering wheel in the opposite direction. In these cases, the “naughty” hand is usually the left hand (RH), while the patient tends to identify herself with the right hand governed by the LH. The two hemispheres have quite different personalities.

The connection between LH and ego can also be seen in the fact that the LH is competitive, contentious, and agonistic. It wants to win. It is the part of you that hates to lose arguments.

Using the metaphor of Chaos and Order, the RH deals with Chaos – the unknown, the unfamiliar, the implicit, the emotional, the dark, danger, mystery. The LH is connected with Order – the known, the familiar, the rule-driven, the explicit, and light of day. Learning something means to take something unfamiliar and making it familiar. Since the RH deals with the novel, it is the problem-solving part. Once understood, the results are dealt with by the LH. When learning a new piece on the piano, the RH is involved. Once mastered, the result becomes a LH affair. The muscle memory developed by repetition is processed by the LH. If errors are made, the activity returns to the RH to figure out what went wrong; the activity is repeated until the correct muscle memory is developed in which case it becomes part of the familiar LH.

Science is an attempt to find Order. It would not be necessary if people lived in an entirely orderly, explicit, known world. The lived context of science implies Chaos. Theories are reductive and simplifying and help to pick out salient features of a phenomenon. They are always partial truths, though some are more partial than others. The alternative to a certain level of reductionism or partialness would be to simply reproduce the world which of course would be both impossible and unproductive. The test for whether a theory is sufficiently non-partial is whether it is fit for purpose and whether it contributes to human flourishing.

...

Analytic philosophers pride themselves on trying to do away with vagueness. To do so, they tend to jettison context which cannot be brought into fine focus. However, in order to understand things and discern their meaning, it is necessary to have the big picture, the overview, as well as the details. There is no point in having details if the subject does not know what they are details of. Such philosophers also tend to leave themselves out of the picture even when what they are thinking about has reflexive implications. John Locke, for instance, tried to banish the RH from reality. All phenomena having to do with subjective experience he deemed unreal and once remarked about metaphors, a RH phenomenon, that they are “perfect cheats.” Analytic philosophers tend to check the logic of the words on the page and not to think about what those words might say about them. The trick is for them to recognize that they and their theories, which exist in minds, are part of reality too.

The RH test for whether someone actually believes something can be found by examining his actions. If he finds that he must regard his own actions as free, and, in order to get along with other people, must also attribute free will to them and treat them as free agents, then he effectively believes in free will – no matter his LH theoretical commitments.

...

We do not know the origin of life. We do not know how or even if consciousness can emerge from matter. We do not know the nature of 96% of the matter of the universe. Clearly all these things exist. They can provide the subject matter of theories but they continue to exist as theorizing ceases or theories change. Not knowing how something is possible is irrelevant to its actual existence. An inability to explain something is ultimately neither here nor there.

If thought begins and ends with the LH, then thinking has no content – content being provided by experience (RH), and skepticism and nihilism ensue. The LH spins its wheels self-referentially, never referring back to experience. Theory assumes such primacy that it will simply outlaw experiences and data inconsistent with it; a profoundly wrong-headed approach.

...

Gödel’s Theorem proves that not everything true can be proven to be true. This means there is an ineradicable role for faith, hope and intuition in every moderately complex human intellectual endeavor. There is no one set of consistent axioms from which all other truths can be derived.

Alan Turing’s proof of the halting problem proves that there is no effective procedure for finding effective procedures. Without a mechanical decision procedure, (LH), when it comes to … [more]
gnon  reflection  books  summary  review  neuro  neuro-nitgrit  things  thinking  metabuch  order-disorder  apollonian-dionysian  bio  examples  near-far  symmetry  homo-hetero  logic  inference  intuition  problem-solving  analytical-holistic  n-factor  europe  the-great-west-whale  occident  alien-character  detail-architecture  art  theory-practice  philosophy  being-becoming  essence-existence  language  psychology  cog-psych  egalitarianism-hierarchy  direction  reason  learning  novelty  science  anglo  anglosphere  coarse-fine  neurons  truth  contradiction  matching  empirical  volo-avolo  curiosity  uncertainty  theos  axioms  intricacy  computation  analogy  essay  rhetoric  deep-materialism  new-religion  knowledge  expert-experience  confidence  biases  optimism  pessimism  realness  whole-partial-many  theory-of-mind  values  competition  reduction  subjective-objective  communication  telos-atelos  ends-means  turing  fiction  increase-decrease  innovation  creative  thick-thin  spengler  multi  ratty  hanson  complex-systems  structure  concrete  abstraction  network-s 
september 2018 by nhaliday
Theory of Self-Reproducing Automata - John von Neumann
Fourth Lecture: THE ROLE OF HIGH AND OF EXTREMELY HIGH COMPLICATION

Comparisons between computing machines and the nervous systems. Estimates of size for computing machines, present and near future.

Estimates for size for the human central nervous system. Excursus about the “mixed” character of living organisms. Analog and digital elements. Observations about the “mixed” character of all componentry, artificial as well as natural. Interpretation of the position to be taken with respect to these.

Evaluation of the discrepancy in size between artificial and natural automata. Interpretation of this discrepancy in terms of physical factors. Nature of the materials used.

The probability of the presence of other intellectual factors. The role of complication and the theoretical penetration that it requires.

Questions of reliability and errors reconsidered. Probability of individual errors and length of procedure. Typical lengths of procedure for computing machines and for living organisms--that is, for artificial and for natural automata. Upper limits on acceptable probability of error in individual operations. Compensation by checking and self-correcting features.

Differences of principle in the way in which errors are dealt with in artificial and in natural automata. The “single error” principle in artificial automata. Crudeness of our approach in this case, due to the lack of adequate theory. More sophisticated treatment of this problem in natural automata: The role of the autonomy of parts. Connections between this autonomy and evolution.

- 10^10 neurons in brain, 10^4 vacuum tubes in largest computer at time
- machines faster: 5 ms from neuron potential to neuron potential, 10^-3 ms for vacuum tubes

https://en.wikipedia.org/wiki/John_von_Neumann#Computing
pdf  article  papers  essay  nibble  math  cs  computation  bio  neuro  neuro-nitgrit  scale  magnitude  comparison  acm  von-neumann  giants  thermo  phys-energy  speed  performance  time  density  frequency  hardware  ems  efficiency  dirty-hands  street-fighting  fermi  estimate  retention  physics  interdisciplinary  multi  wiki  links  people  🔬  atoms  duplication  iteration-recursion  turing  complexity  measure  nature  technology  complex-systems  bits  information-theory  circuits  robust  structure  composition-decomposition  evolution  mutation  axioms  analogy  thinking  input-output  hi-order-bits  coding-theory  flexibility  rigidity  automata-languages 
april 2018 by nhaliday
Moral Transposition – neocolonial
- Every morality inherently has a doctrine on that which is morally beneficial and that which is morally harmful.
- Under the traditional, absolute, eucivic moral code of Western Civilisation these were termed Good and Evil.
- Under the modern, relative, dyscivic moral code of Progressivism these are called Love and Hate.
- Good and Evil inherently reference the in-group, and seek its growth in absolute capability and glory.  Love and Hate inherently reference the out-group, and seek its relative growth in capability and privilege.
- These combinations form the basis of the Frame through which individuals aligned with those moralities view the world.  They are markedly distinct; although Good serves the moral directive of absolutely strengthening the in-group and Hate counters the moral directive of relatively weakening the in-group, they do not map to one another. This failure to map, as well as the overloading of terms, is why it is generally (intentionally, perniciously) difficult to discern the differences between the two world views.

You Didn’t Join a Suicide Cult: http://www.righteousdominion.org/2018/04/13/you-didnt-join-a-suicide-cult/
“Thomas Aquinas discusses whether there is an order to charity. Must we love everyone in outward effects equally? Or do we demonstrate love more to our near neighbors than our distant neighbors? His answers: No to the first question, yes to the second.”

...

This is a perfect distillation of the shaming patriotic Christians with a sense of national identity face. It is a very Alinsky tactic whose fourth rule is “Make the enemy live up to their own book of rules. You can kill them with this, for they can no more obey their own rules than the Christian church can live up to Christianity.” It is a tactic that can be applied to any idealistic movement. Now to be fair, my friend is not a disciple of Alinsky, but we have been bathed in Alinsky for at least two generations. Reading the Gospels alone and in a vacuum one could be forgiven coming away with that interpretation of Christ’s teachings. Take for example Luke 6:27-30:

...

Love as Virtue and Vice
Thirdly, Love is a virtue, the greatest, but like all virtues it can be malformed with excessive zeal.

Aristotle taught that virtues were a proper balance of behavior or feeling in a specific sphere. For instance, the sphere of confidence and fear: a proper balance in this sphere would be the virtue of courage. A deficit in this sphere would be cowardice and an excess would be rashness or foolhardiness. We can apply this to the question of charity. Charity in the bible is typically a translation of the Greek word for love. We are taught by Jesus that second only to loving God we are to love our neighbor (which in the Greek means those near you). If we are to view the sphere of love in this context of excess and deficit what would it be?

Selfishness <—- LOVE —-> Enablement

Enablement here is meant in its very modern sense. If we possess this excess of love, we are so selfless and “others focused” that we prioritize the other above all else we value. The pathologies of the target of our enablement are not considered; indeed, in this state of enablement they are even desired. The saying “the squeaky wheel gets the grease” is recast as: “The squeaky wheel gets the grease, BUT if I have nothing squeaking in my life I’ll make sure to find or create something squeaky to “virtuously” burden myself with”.

Also, in this state of excessive love even those natural and healthy extensions of yourself must be sacrificed to the other. There was one mother I was acquainted with that embodies this excess of love. She had two biological children and anywhere from five to six very troubled adopted/foster kids at a time. She helped many kids out of terrible situations, but in turn her natural children were constantly subject to high levels of stress, drama, and constant babysitting of very troubled children. There was real resentment. In her efforts to help troubled foster children, she sacrificed the well-being of her biological children. Needless to say, her position on the refugee crisis was predictable.
gnon  politics  ideology  morality  language  universalism-particularism  tribalism  us-them  patho-altruism  altruism  thinking  religion  christianity  n-factor  civilization  nationalism-globalism  migration  theory-of-mind  ascetic  good-evil  sociality  love-hate  janus  multi  cynicism-idealism  kinship  duty  cohesion  charity  history  medieval  big-peeps  philosophy  egalitarianism-hierarchy  absolute-relative  measure  migrant-crisis  analytical-holistic  peace-violence  the-classics  self-interest  virtu  tails  convexity-curvature  equilibrium  free-riding  lexical 
march 2018 by nhaliday
'P' Versus 'Q': Differences and Commonalities between the Two Areas of Quantitative Finance by Attilio Meucci :: SSRN
There exist two separate branches of finance that require advanced quantitative techniques: the "Q" area of derivatives pricing, whose task is to "extrapolate the present"; and the "P" area of quantitative risk and portfolio management, whose task is to "model the future."

We briefly trace the history of these two branches of quantitative finance, highlighting their different goals and challenges. Then we provide an overview of their areas of intersection: the notion of risk premium; the stochastic processes used, often under different names and assumptions in the Q and in the P world; the numerical methods utilized to simulate those processes; hedging; and statistical arbitrage.
study  essay  survey  ORFE  finance  investing  probability  measure  stochastic-processes  outcome-risk 
december 2017 by nhaliday
light - Why doesn't the moon twinkle? - Astronomy Stack Exchange
As you mention, when light enters our atmosphere, it goes through several parcels of gas with varying density, temperature, pressure, and humidity. These differences make the refractive index of the parcels different, and since they move around (the scientific term for air moving around is "wind"), the light rays take slightly different paths through the atmosphere.

Stars are point sources
…the Moon is not
nibble  q-n-a  overflow  space  physics  trivia  cocktail  navigation  sky  visuo  illusion  measure  random  electromag  signal-noise  flux-stasis  explanation  explanans  magnitude  atmosphere  roots 
december 2017 by nhaliday
galaxy - How do astronomers estimate the total mass of dust in clouds and galaxies? - Astronomy Stack Exchange
Dust absorbs stellar light (primarily in the ultraviolet), and is heated up. Subsequently it cools by emitting infrared, "thermal" radiation. Assuming a dust composition and grain size distribution, the amount of emitted IR light per unit dust mass can be calculated as a function of temperature. Observing the object at several different IR wavelengths, a Planck curve can be fitted to the data points, yielding the dust temperature. The more UV light incident on the dust, the higher the temperature.

The result is somewhat sensitive to the assumptions, and thus the uncertainties are sometimes quite large. The more IR data points obtained, the better. If only one IR point is available, the temperature cannot be calculated. Then there's a degeneracy between incident UV light and the amount of dust, and the mass can only be estimated to within some orders of magnitude (I think).
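
For concreteness, the standard optically-thin relation behind this procedure (notation mine, not the answer's): the observed flux at frequency nu from dust mass M_d at distance d is

F_nu = M_d * kappa_nu * B_nu(T) / d^2

where B_nu(T) is the Planck function and kappa_nu the assumed dust opacity. Fitting several IR points pins down T; any one flux then yields M_d, with kappa_nu carrying most of the grain-composition uncertainty mentioned above.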
nibble  q-n-a  overflow  space  measurement  measure  estimate  physics  electromag  visuo  methodology 
december 2017 by nhaliday
How do you measure the mass of a star? (Beginner) - Curious About Astronomy? Ask an Astronomer
Measuring the mass of stars in binary systems is easy. Binary systems are sets of two or more stars in orbit about each other. By measuring the size of the orbit, the stars' orbital speeds, and their orbital periods, we can determine exactly what the masses of the stars are. We can take that knowledge and then apply it to similar stars not in multiple systems.

We also can easily measure the luminosity and temperature of any star. A plot of luminosity versus temperature for a set of stars is called a Hertzsprung-Russell (H-R) diagram, and it turns out that most stars lie along a thin band in this diagram known as the Main Sequence. Stars arrange themselves by mass on the Main Sequence, with massive stars being hotter and brighter than their small-mass brethren. If a star falls on the Main Sequence, we therefore immediately know its mass.

In addition to these methods, we also have an excellent understanding of how stars work. Our models of stellar structure are excellent predictors of the properties and evolution of stars. As it turns out, the mass of a star determines its life history from day 1, for all times thereafter, not only when the star is on the Main Sequence. So actually, the position of a star on the H-R diagram is a good indicator of its mass, regardless of whether it's on the Main Sequence or not.
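
[ed.: the formula doing the work in the binary case is Kepler's third law in convenient units: for period P in years and semi-major axis a in AU, the total mass in solar masses is

M_1 + M_2 = a^3 / P^2

so e.g. a = 10 AU and P = 20 yr gives 1000/400 = 2.5 solar masses.]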
nibble  q-n-a  org:junk  org:edu  popsci  space  physics  electromag  measurement  mechanics  gravity  cycles  oscillation  temperature  visuo  plots  correlation  metrics  explanation  measure  methodology 
december 2017 by nhaliday
Hyperbolic angle - Wikipedia
A unit circle x^2 + y^2 = 1 has a circular sector with an area half of the circular angle in radians. Analogously, a unit hyperbola x^2 - y^2 = 1 has a hyperbolic sector with an area half of the hyperbolic angle.
nibble  math  trivia  wiki  reference  physics  relativity  concept  atoms  geometry  ground-up  characterization  measure  definition  plots  calculation  nitty-gritty  direction  metrics  manifolds 
november 2017 by nhaliday
Genetics: CHROMOSOMAL MAPS AND MAPPING FUNCTIONS
Any particular gene has a specific location (its "locus") on a particular chromosome. For any two genes (or loci) alpha and beta, we can ask "What is the recombination frequency between them?" If the genes are on different chromosomes, the answer is 50% (independent assortment). If the two genes are on the same chromosome, the recombination frequency will be somewhere in the range from 0 to 50%. The "map unit" (1 cM) is the genetic map distance that corresponds to a recombination frequency of 1%. In large chromosomes, the cumulative map distance may be much greater than 50cM, but the maximum recombination frequency is 50%. Why? In large chromosomes, there is enough length to allow for multiple cross-overs, so we have to ask what result we expect for random multiple cross-overs.

1. How is it that random multiple cross-overs give the same result as independent assortment?

Figure 5.12 shows how the various double cross-over possibilities add up, resulting in gamete genotype percentages that are indistinguishable from independent assortment (50% parental type, 50% non-parental type). This is a very important figure. It provides the explanation for why genes that are far apart on a very large chromosome sort out in crosses just as if they were on separate chromosomes.

2. Is there a way to measure how close together two crossovers can occur involving the same two chromatids? That is, how could we measure whether there is spatial "interference"?

Figure 5.13 shows how a measurement of the gamete frequencies resulting from a "three point cross" can answer this question. If we would get a "lower than expected" occurrence of recombinant genotypes aCb and AcB, it would suggest that there is some hindrance to the two cross-overs occurring this close together. Crosses of this type in Drosophila have shown that, in this organism, double cross-overs do not occur at distances of less than about 10 cM between the two cross-over sites. (Textbook, page 196.)

3. How does all of this lead to the "mapping function", the mathematical (graphical) relation between the observed recombination frequency (percent non-parental gametes) and the cumulative genetic distance in map units?

Figure 5.14 shows the result for the two extremes of "complete interference" and "no interference". The situation for real chromosomes in real organisms is somewhere between these extremes, such as the curve labelled "interference decreasing with distance".
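
[ed.: the "no interference" curve in Figure 5.14 is the classical Haldane mapping function (a standard result, stated here for reference): for map distance d in Morgans, the observed recombination frequency is

r = (1 - e^(-2d)) / 2

which is ~d for small d and saturates at 50% as d grows, while "complete interference" gives the straight line r = d, capped at 50%.]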
org:junk  org:edu  explanation  faq  nibble  genetics  genomics  bio  ground-up  magnitude  data  flux-stasis  homo-hetero  measure  orders  metric-space  limits  measurement 
october 2017 by nhaliday
Inferior Faunas | West Hunter
I mentioned South American paleontologists defending the honor of their extinct animals, and pointed  out how stupid that is. There are many similar cases: Jefferson vs Buffon on the wimpiness of North American mammals (as a reader pointed out),  biologists defending the prowess of marsupials in Australia (a losing proposition) , etc.

So, we need to establish the relative competitive abilities of different faunas and settle this, once and for all.

Basically, the smaller and more isolated, the less competitive.  Pretty much true for both plants and animals.

Islands do poorly. Not just dodos: Hawaiian species, for example, are generally losers: everything from outside is a threat.

something hidden: https://westhunt.wordpress.com/2014/12/01/something-hidden/
I’m wondering if any of the Meridiungulata lineages survived, unnoticed because they’re passing for insectivores or rats or whatever, just as tenrecs and golden moles did. Obviously the big ones are extinct, probably the others as well, but until we’ve looked at the DNA of every little mammal in South America, the possibility exists.
west-hunter  scitariat  rant  discussion  ideas  nature  bio  archaeology  egalitarianism-hierarchy  absolute-relative  ranking  world  correlation  scale  oceans  geography  measure  network-structure  list  lol  speculation  latin-america  usa  convergence 
october 2017 by nhaliday
Power of a point - Wikipedia
The power of a point P (see Figure 1) can be defined equivalently as the product of the distances from P to the two points where any ray emanating from P intersects the circle.
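
Equivalently (standard formula, not quoted from the article): for a circle with center O and radius r,

pow(P) = |PO|^2 - r^2

negative inside the circle, zero on it, positive outside; a secant through P meeting the circle at A and B satisfies |PA| * |PB| = |pow(P)|.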
nibble  math  geometry  spatial  ground-up  concept  metrics  invariance  identity  atoms  wiki  reference  measure  yoga  calculation 
september 2017 by nhaliday
How & Why Solar Eclipses Happen | Solar Eclipse Across America - August 21, 2017
Cosmic Coincidence
The Sun’s diameter is about 400 times that of the Moon. The Sun is also (on average) about 400 times farther away. As a result, the two bodies appear almost exactly the same angular size in the sky — about ½°, roughly half the width of your pinky finger seen at arm's length. This truly remarkable coincidence is what gives us total solar eclipses. If the Moon were slightly smaller or orbited a little farther away from Earth, it would never completely cover the solar disk. If the Moon were a little larger or orbited a bit closer to Earth, it would block much of the solar corona during totality, and eclipses wouldn’t be nearly as spectacular.

https://blogs.scientificamerican.com/life-unbounded/the-solar-eclipse-coincidence/
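
The small-angle arithmetic behind the coincidence, using round figures:

angular size ~ diameter / distance
Moon: 3,474 km / 384,400 km ~ 0.0090 rad ~ 0.52°
Sun: 1,391,000 km / 149,600,000 km ~ 0.0093 rad ~ 0.53°

so the two disks agree to within a few percent, and the eccentricity of the Moon's orbit is enough to tip a given eclipse between total and annular.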
nibble  org:junk  org:edu  space  physics  mechanics  spatial  visuo  data  scale  measure  volo-avolo  earth  multi  news  org:mag  org:sci  popsci  sky  cycles  pro-rata  navigation  degrees-of-freedom 
august 2017 by nhaliday
How large is the Sun compared to Earth? | Cool Cosmos
Compared to Earth, the Sun is enormous! It contains 99.86% of all of the mass of the entire Solar System. The Sun is 864,400 miles (1,391,000 kilometers) across. This is about 109 times the diameter of Earth. The Sun weighs about 333,000 times as much as Earth. It is so large that about 1,300,000 planet Earths can fit inside of it. Earth is about the size of an average sunspot!
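
Checking the quoted numbers against each other:

volume ratio ~ 109^3 ~ 1,295,000 (hence "about 1,300,000 planet Earths")
mass ratio ~ 333,000

so the Sun's mean density is about 333,000 / 1,295,000 ~ 0.26 of Earth's (roughly 1.4 g/cm^3 vs 5.5 g/cm^3).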
nibble  org:junk  space  physics  mechanics  gravity  earth  navigation  data  objektbuch  scale  spatial  measure  org:edu  popsci  pro-rata 
august 2017 by nhaliday
The Earth-Moon system
nice way of expressing Kepler's law (scaled by AU, solar mass, year, etc.) among other things
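
i.e. with P in years, a in AU, and total mass M in solar masses,

P^2 = a^3 / M

Earth-Sun gives P = a = M = 1 by construction; for the Moon around the Earth, a ~ 0.00257 AU and M ~ 3 x 10^-6 solar masses give P ~ sqrt(a^3/M) ~ 0.075 yr, the familiar ~27-day sidereal month.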

1. PHYSICAL PROPERTIES OF THE MOON
2. LUNAR PHASES
3. ECLIPSES
4. TIDES
nibble  org:junk  explanation  trivia  data  objektbuch  space  mechanics  spatial  visualization  earth  visual-understanding  navigation  experiment  measure  marginal  gravity  scale  physics  nitty-gritty  tidbits  identity  cycles  time  magnitude  street-fighting  calculation  oceans  pro-rata  rhythm  flux-stasis 
august 2017 by nhaliday
How to estimate distance using your finger | Outdoor Herbivore Blog
1. Hold your right arm out directly in front of you, elbow straight, thumb upright.
2. Align your thumb with one eye closed so that it covers (or aligns) the distant object. Point marked X in the drawing.
3. Do not move your head, arm or thumb, but switch eyes, so that your open eye is now closed and the other eye is open. Observe closely where the object now appears with the other open eye. Your thumb should appear to have moved to some other point: no longer in front of the object. This new point is marked as Y in the drawing.
4. Estimate this displacement XY, by equating it to the estimated size of something you are familiar with (height of tree, building width, length of a car, power line poles, distance between nearby objects). In this case, the distant barn is estimated to be 100′ wide. It appears 5 barn widths could fit this displacement, or 500 feet. Now multiply that figure by 10 (the ratio of the length of your arm to the distance between your eyes), and you get the distance between you and the thicket of blueberry bushes — 5000 feet away (about 1 mile).

- Basically uses parallax (similar triangles) with each eye.
- When they say to compare apparent shift to known distance, won't that scale with the unknown distance? The example uses the width of an object at the point whose distance is being estimated. (Worked through below.)

per here: https://www.trails.com/how_26316_estimate-distances-outdoors.html
Select a distant object whose width can be accurately determined. For example, use a large rock outcropping. Estimate the width of the rock. Use 200 feet wide as an example here.
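
Working through the similar triangles answers the scaling worry above: the apparent shift of your thumb, measured against the background at the target's distance D, is

XY = D * (b / L)

where b is the separation between your eyes and L is your arm length, so

D = XY * (L / b) ~ 10 * XY

since L/b ~ 10 for most people. XY is expressed in real units at the target (barn widths here), which is exactly why it scales with D and the ratio of 10 stays fixed.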
outdoors  human-bean  embodied  embodied-pack  visuo  spatial  measurement  lifehack  howto  navigation  prepping  survival  objektbuch  multi  measure  estimate 
august 2017 by nhaliday
Scanners Live in Vain | West Hunter
Of course, finding that the pattern already exists at the age of one month seriously weakens any idea that being poor shrinks the brain: most of the environmental effects you would consider haven’t even come into play in the first four weeks, when babies drink milk, sleep, and poop. Genetics affecting both parents and their children would make more sense, if the pattern shows up so early (and I’ll bet money that, if real,  it shows up well before one month);  but Martha Farah, and the reporter from Nature, Sara Reardon, ARE TOO FUCKING DUMB to realize this.

https://westhunt.wordpress.com/2015/03/31/scanners-live-in-vain/#comment-93791
Correlation between brain volume and IQ is about 0.4 . Shows up clearly in studies with sufficient power.

“poverty affects prenatal environment a lot.” No it does not. “poverty” in this country means having plenty to eat.

The Great IQ Depression: https://westhunt.wordpress.com/2014/03/07/the-great-iq-depression/
We hear that poverty can sap brainpower, reduce frontal lobe function, induce the fantods, etc. But exactly what do we mean by ‘poverty’? If we’re talking about an absolute, rather than relative, standard of living, most of the world today must be in poverty, as well as almost everyone who lived much before the present. Most Chinese are poorer than the official US poverty level, right? The US had fairly rapid economic growth until the last generation or so, so if you go very far back in time, almost everyone was poor, by modern standards. Even those who were considered rich at the time suffered from zero prenatal care, largely useless medicine, tabletless high schools, and slow Internet connections. They had to ride horses that had lousy acceleration and pooped all over the place.

In particular, if all this poverty-gives-you-emerods stuff is true, scholastic achievement should have collapsed in the Great Depression – and with the miracle of epigenetics, most of us should still be suffering those bad effects.

But somehow none of this seems to have gone through the formality of actually happening.
west-hunter  scitariat  commentary  study  org:nat  summary  rant  critique  neuro  neuro-nitgrit  brain-scan  iq  class  correlation  compensation  pop-diff  biodet  behavioral-gen  westminster  experiment  attaq  measure  multi  discussion  ideas  history  early-modern  pre-ww2  usa  gedanken  analogy  comparison  time  china  asia  world  developing-world  economics  growth-econ  medicine  healthcare  epigenetics  troll  aphorism  cycles  obesity  poast  nutrition  hypochondria  explanans 
august 2017 by nhaliday
Distribution of Word Lengths in Various Languages - Ravi Parikh's Website
Note that this visualization isn't normalized based on usage. For example the English word 'the' is used frequently, while the word 'lugubrious' is rarely used; however both words count the same in computing the histogram and average word lengths. A great idea for a follow-up would be to use language corpuses instead of word lists in order to build these histograms.
techtariat  data  visualization  project  anglo  language  foreign-lang  distribution  expectancy  measure  lexical 
june 2017 by nhaliday
[1705.03394] That is not dead which can eternal lie: the aestivation hypothesis for resolving Fermi's paradox
If a civilization wants to maximize computation it appears rational to aestivate until the far future in order to exploit the low temperature environment: this can produce a 10^30 multiplier of achievable computation. We hence suggest the "aestivation hypothesis": the reason we are not observing manifestations of alien civilizations is that they are currently (mostly) inactive, patiently waiting for future cosmic eras. This paper analyzes the assumptions going into the hypothesis and how physical law and observational evidence constrain the motivations of aliens compatible with the hypothesis.

http://aleph.se/andart2/space/the-aestivation-hypothesis-popular-outline-and-faq/

simpler explanation (just different math for Drake equation):
Dissolving the Fermi Paradox: http://www.jodrellbank.manchester.ac.uk/media/eps/jodrell-bank-centre-for-astrophysics/news-and-events/2017/uksrn-slides/Anders-Sandberg---Dissolving-Fermi-Paradox-UKSRN.pdf
http://marginalrevolution.com/marginalrevolution/2017/07/fermi-paradox-resolved.html
Overall the argument is that point estimates should not be shoved into a Drake equation and then multiplied together, as that requires excess certainty and masks much of the ambiguity of our knowledge about the distributions. Instead, a Bayesian approach should be used, after which the fate of humanity looks much better.
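A toy version of that argument (the log-uniform ranges below are invented stand-ins, not the talk's actual distributions): multiplying representative point estimates tracks the mean of the product, while most of the probability mass can sit orders of magnitude lower.

```python
# Monte Carlo over a Drake-style product of uncertain factors.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
# each factor uncertain over orders of magnitude: log10(factor) ~ Uniform(lo, hi)
log10_ranges = [(-1, 2), (-1, 1), (-2, 0), (-3, 0), (-3, 0), (-4, 2)]
log10_N = sum(rng.uniform(lo, hi, n) for lo, hi in log10_ranges)
N = 10.0 ** log10_N  # number of civilizations, as a distribution

print(f"mean {N.mean():.2f}, median {np.median(N):.2g}, P(N<1) = {(N < 1).mean():.2f}")
# the mean (pulled up by rare optimistic draws, roughly what multiplying
# point estimates gives) sits orders of magnitude above the median, and an
# 'empty' galaxy (N < 1) comes out highly probable
```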

Life Versus Dark Energy: How An Advanced Civilization Could Resist the Accelerating Expansion of the Universe: https://arxiv.org/abs/1806.05203
The presence of dark energy in our universe is causing space to expand at an accelerating rate. As a result, over the next approximately 100 billion years, all stars residing beyond the Local Group will fall beyond the cosmic horizon and become not only unobservable, but entirely inaccessible, thus limiting how much energy could one day be extracted from them. Here, we consider the likely response of a highly advanced civilization to this situation. In particular, we argue that in order to maximize its access to useable energy, a sufficiently advanced civilization would choose to expand rapidly outward, build Dyson Spheres or similar structures around encountered stars, and use the energy that is harnessed to accelerate those stars away from the approaching horizon and toward the center of the civilization. We find that such efforts will be most effective for stars with masses in the range of M∼(0.2−1)M⊙, and could lead to the harvesting of stars within a region extending out to several tens of Mpc in radius, potentially increasing the total amount of energy that is available to a future civilization by a factor of several thousand. We also discuss the observable signatures of a civilization elsewhere in the universe that is currently in this state of stellar harvesting.
preprint  study  essay  article  bostrom  ratty  anthropic  philosophy  space  xenobio  computation  physics  interdisciplinary  ideas  hmm  cocktail  temperature  thermo  information-theory  bits  🔬  threat-modeling  time  scale  insight  multi  commentary  liner-notes  pdf  slides  error  probability  ML-MAP-E  composition-decomposition  econotariat  marginal-rev  fermi  risk  org:mat  questions  paradox  intricacy  multiplicative  calculation  street-fighting  methodology  distribution  expectancy  moments  bayesian  priors-posteriors  nibble  measurement  existence  technology  geoengineering  magnitude  spatial  density  spreading  civilization  energy-resources  phys-energy  measure  direction  speculation  structure 
may 2017 by nhaliday
Pearson correlation coefficient - Wikipedia
https://en.wikipedia.org/wiki/Coefficient_of_determination
what does this mean?: https://twitter.com/GarettJones/status/863546692724858880
deleted but it was about the Pearson correlation distance: 1-r
is 1-r a metric? not quite: for standardized vectors 1-r = ||x-y||^2/2, a squared Euclidean distance, so the triangle inequality can fail; sqrt(1-r) does give a metric

https://en.wikipedia.org/wiki/Explained_variation

http://infoproc.blogspot.com/2014/02/correlation-and-variance.html
A less misleading way to think about the correlation R is as follows: given X,Y from a standardized bivariate distribution with correlation R, an increase in X leads to an expected increase in Y: dY = R dX. In other words, students with +1 SD SAT score have, on average, roughly +0.4 SD college GPAs. Similarly, students with +1 SD college GPAs have, on average, +0.4 SD SAT scores.

this reminds me of the breeder's equation (but it uses r instead of h^2, so it can't actually be the same)

https://www.reddit.com/r/slatestarcodex/comments/631haf/on_the_commentariat_here_and_why_i_dont_think_i/dfx4e2s/
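A quick simulation of that reading of R (assumed setup: standardized bivariate normals with r = 0.4, mirroring the SAT/GPA numbers):

```python
import numpy as np

rng = np.random.default_rng(0)
r, n = 0.4, 1_000_000
x = rng.standard_normal(n)
y = r * x + np.sqrt(1 - r**2) * rng.standard_normal(n)  # corr(x, y) = r

# regression is symmetric for standardized variables: slope ≈ r both ways
print(np.cov(x, y)[0, 1] / np.var(x))  # ≈ 0.4 (y on x)
print(np.cov(x, y)[0, 1] / np.var(y))  # ≈ 0.4 (x on y)
```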
stats  science  hypothesis-testing  correlation  metrics  plots  regression  wiki  reference  nibble  methodology  multi  twitter  social  discussion  best-practices  econotariat  garett-jones  concept  conceptual-vocab  accuracy  causation  acm  matrix-factorization  todo  explanation  yoga  hsu  street-fighting  levers  🌞  2014  scitariat  variance-components  meta:prediction  biodet  s:**  mental-math  reddit  commentary  ssc  poast  gwern  data-science  metric-space  similarity  measure  dependence-independence 
may 2017 by nhaliday
Riemannian manifold - Wikipedia
In differential geometry, a (smooth) Riemannian manifold or (smooth) Riemannian space (M, g) is a real smooth manifold M equipped with an inner product g_p on the tangent space T_pM at each point p that varies smoothly from point to point, in the sense that if X and Y are vector fields on M, then p ↦ g_p(X(p), Y(p)) is a smooth function. The family g_p of inner products is called a Riemannian metric (tensor). These terms are named after the German mathematician Bernhard Riemann. The study of Riemannian manifolds constitutes the subject called Riemannian geometry.

A Riemannian metric (tensor) makes it possible to define various geometric notions on a Riemannian manifold, such as angles, lengths of curves, areas (or volumes), curvature, gradients of functions and divergence of vector fields.
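For concreteness, the standard example of such a notion: the length of a smooth curve \gamma : [a,b] \to M is

\[ L(\gamma) = \int_a^b \sqrt{g_{\gamma(t)}\big(\dot\gamma(t), \dot\gamma(t)\big)}\, dt, \]

i.e. the metric turns tangent vectors into speeds, which integrate to lengths.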
concept  definition  math  differential  geometry  manifolds  inner-product  norms  measure  nibble 
february 2017 by nhaliday
I've heard in the Middle Ages peasants weren't allowed to travel and that it was very difficult to travel in general. But what about pilgrimages then? Who participated in them and how did they overcome the difficulties of travel? : AskHistorians
How far from home did the average medieval person travel in a lifetime?: https://www.reddit.com/r/AskHistorians/comments/1a1egs/how_far_from_home_did_the_average_medieval_person/
What was it like to travel during the middle ages?: https://www.reddit.com/r/AskHistorians/comments/32n9ji/what_was_it_like_to_travel_during_the_middle_ages/
How expensive were medieval era inns relative to the cost of travel?: https://www.reddit.com/r/AskHistorians/comments/2j3a1m/how_expensive_were_medieval_era_inns_relative_to/
Logistics of Travel in Medieval Times: https://www.reddit.com/r/AskHistorians/comments/3fc8li/logistics_of_travel_in_medieval_times/
Were people of antiquity and the Middle Ages able to travel relatively freely?: https://www.reddit.com/r/AskHistorians/comments/wy3ir/were_people_of_antiquity_and_the_middle_ages_able/
How did someone such as Ibn Battuta (practically and logistically) travel, and keep travelling?: https://www.reddit.com/r/AskHistorians/comments/1nw9mg/how_did_someone_such_as_ibn_battuta_practically/
I'm a Norseman around the year 950 C.E. Could I have been born in Iceland, raided the shores of the Caspian Sea, and walked amongst the markets of Baghdad in my lifetime? How common was extreme long distance travel?: https://www.reddit.com/r/AskHistorians/comments/2gh52r/im_a_norseman_around_the_year_950_ce_could_i_have/
Lone (inter-continental) long-distance travelers in the Middle Ages?: https://www.reddit.com/r/AskHistorians/comments/1mrraq/lone_intercontinental_longdistance_travelers_in/
q-n-a  reddit  social  discussion  travel  europe  medieval  lived-experience  multi  money  iron-age  MENA  islam  china  asia  prepping  scale  measure  navigation  history  africa  people  feudal 
february 2017 by nhaliday
Pre-industrial travel would take weeks to get anywhere. What did people do during that time? : AskHistorians
How did travellers travel the world in the 16th century? Was there visas?: https://www.reddit.com/r/AskHistorians/comments/5659ig/how_did_travellers_travel_the_world_in_the_16th/
How far from home would a typical European in the 1600s travel in their life?: https://www.reddit.com/r/AskHistorians/comments/5gsgn7/how_far_from_home_would_a_typical_europeanin_the/
I just read an article about how I can travel across country for $213 on Amtrak. How much would the trip have cost me in, say, the mid-1800s: https://www.reddit.com/r/AskHistorians/comments/3poen3/i_just_read_an_article_about_how_i_can_travel/
Ridiculously subjective but I'm curious anyways: What traveling distance was considered beyond the hopes and even imagination of a common person during your specialty?: https://www.reddit.com/r/AskHistorians/comments/13zlsg/ridiculously_subjective_but_im_curious_anyways/
How fast could you travel across the U.S. in the 1800s?: https://www.mnn.com/green-tech/transportation/stories/how-fast-could-you-travel-across-the-us-in-the-1800s
What would be the earliest known example(s) of travel that could be thought of as "tourism"?: https://www.reddit.com/r/AskHistorians/comments/2uqxk9/what_would_be_the_earliest_known_examples_of/
https://twitter.com/conradhackett/status/944382041566654464
https://archive.is/9GWdK
This map shows travel time from London in 1881
q-n-a  reddit  social  discussion  history  europe  russia  early-modern  travel  lived-experience  multi  money  transportation  prepping  world  antiquity  iron-age  medieval  MENA  islam  comparison  mediterranean  usa  trivia  magnitude  scale  pre-ww2  navigation  measure  data  visualization  maps  feudal  twitter  pic  backup  journos-pundits 
february 2017 by nhaliday
Mixing (mathematics) - Wikipedia
One way to describe this is that strong mixing implies that for any two possible states of the system (realizations of the random variable), given a sufficient amount of time between the two states, their occurrences are independent.

Mixing coefficient is
α(n) = sup{|P(A∩B) - P(A)P(B)| : A in σ(X_0, ..., X_{t-1}), B in σ(X_{t+n}, ...), t >= 0}
for σ(...) the sigma algebra generated by those r.v.s.

So it's a notion of total variation distance between the true joint distribution and the product of the marginals.
concept  math  acm  physics  probability  stochastic-processes  definition  mixing  iidness  wiki  reference  nibble  limits  ergodic  math.DS  measure  dependence-independence 
february 2017 by nhaliday
inequalities - Is the Jaccard distance a distance? - MathOverflow
Steinhaus Transform
the referenced survey: http://kenclarkson.org/nn_survey/p.pdf

It's known that this transformation produces a metric from a metric. Now if you take as the base metric D the size of the symmetric difference between two sets, what you end up with is the Jaccard distance (which is actually known by many other names as well).
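A numerical check of that claim (assuming, as is standard for this derivation, the empty set as the transform's base point):

```python
# Steinhaus transform of the symmetric-difference metric = Jaccard distance.

def sym_diff(a: set, b: set) -> int:
    return len(a ^ b)  # size of the symmetric difference

def steinhaus(d, x, y, base):
    denom = d(x, base) + d(y, base) + d(x, y)
    return 2 * d(x, y) / denom if denom else 0.0

def jaccard_distance(a: set, b: set) -> float:
    return (1 - len(a & b) / len(a | b)) if (a | b) else 0.0

A, B = {1, 2, 3, 4}, {3, 4, 5}
assert abs(steinhaus(sym_diff, A, B, set()) - jaccard_distance(A, B)) < 1e-12  # both 0.6
```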
q-n-a  overflow  nibble  math  acm  sublinear  metrics  metric-space  proofs  math.CO  tcstariat  arrows  reduction  measure  math.MG  similarity  multi  papers  survey  computational-geometry  cs  algorithms  pdf  positivity  msr  tidbits  intersection  curvature  convexity-curvature  intersection-connectedness  signum 
february 2017 by nhaliday
Information Processing: Big, complicated data sets
This Times article profiles Nick Patterson, a mathematician whose career wandered from cryptography, to finance (7 years at Renaissance) and finally to bioinformatics. “I’m a data guy,” Dr. Patterson said. “What I know about is how to analyze big, complicated data sets.”

If you're a smart guy looking for something to do, there are 3 huge computational problems staring you in the face, for which the data is readily accessible.

1) human genome: 3 GB of data in a single genome; most data freely available on the Web (e.g., Hapmap stores patterns of sequence variation). Got a hypothesis about deep human history (evolution)? Test it yourself...

2) market prediction: every market tick available at zero or minimal subscription-service cost. Can you model short term movements? It's never been cheaper to build and test your model!

3) internet search: about 10^3 Terabytes of data (admittedly, a barrier to entry for an individual, but not for a startup). Can you come up with a better way to index or search it? What about peripheral problems like language translation or picture or video search?

The biggest barrier to entry is, of course, brainpower and a few years (a decade?) of concentrated learning. But the necessary books are all in the library :-)

Patterson has worked in 2 of the 3 areas listed above! Substituting crypto for internet search is understandable given his age, our cold war history, etc.
hsu  scitariat  quotes  links  news  org:rec  profile  giants  stories  huge-data-the-biggest  genomics  bioinformatics  finance  crypto  history  britain  interdisciplinary  the-trenches  🔬  questions  genetics  dataset  search  web  internet  scale  commentary  apollonian-dionysian  magnitude  examples  open-problems  big-surf  markets  securities  ORFE  nitty-gritty  quixotic  google  startups  ideas  measure  space-complexity  minimum-viable 
february 2017 by nhaliday
Prékopa–Leindler inequality | Academically Interesting
Consider the following statements:
1. The shape with the largest volume enclosed by a given surface area is the n-dimensional ball.
2. A marginal or sum of log-concave distributions is log-concave.
3. Any Lipschitz function of a standard n-dimensional Gaussian distribution concentrates around its mean.
What do these all have in common? Despite being fairly non-trivial and deep results, they all can be proved in less than half of a page using the Prékopa–Leindler inequality.

ie, Brunn-Minkowski
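For reference, the inequality's standard statement: for \lambda \in (0,1) and measurable f, g, h : \mathbb{R}^n \to [0,\infty) satisfying h(\lambda x + (1-\lambda) y) \ge f(x)^{\lambda} g(y)^{1-\lambda} for all x, y,

\[ \int_{\mathbb{R}^n} h \;\ge\; \left( \int_{\mathbb{R}^n} f \right)^{\lambda} \left( \int_{\mathbb{R}^n} g \right)^{1-\lambda}. \]

Applying it to the indicator functions of A, B, and \lambda A + (1-\lambda) B recovers the multiplicative form of Brunn-Minkowski.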
acmtariat  clever-rats  ratty  math  acm  geometry  measure  math.MG  estimate  distribution  concentration-of-measure  smoothness  regularity  org:bleg  nibble  brunn-minkowski  curvature  convexity-curvature 
february 2017 by nhaliday
MinHash - Wikipedia
- goal: compute Jaccard coefficient J(A, B) = |A∩B| / |A∪B| in sublinear space
- idea: pick random injective hash function h, define h_min(S) = argmin_{x in S} h(x), and note that Pr[h_min(A) = h_min(B)] = J(A, B)
- reduce variance by averaging over k independent hash functions; a Chernoff bound controls the deviation
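A minimal sketch of the scheme (the salted hash family below is an illustrative stand-in for a "random injective hash function"):

```python
import hashlib, os

def hmin(s, salt):
    # element of s minimizing a salted hash: one draw of h_min(S)
    return min(s, key=lambda x: hashlib.blake2b(repr(x).encode(), salt=salt).digest())

def jaccard_estimate(a: set, b: set, k: int = 256) -> float:
    salts = [os.urandom(16) for _ in range(k)]  # k independent hash functions
    return sum(hmin(a, s) == hmin(b, s) for s in salts) / k

A, B = set(range(60)), set(range(30, 90))
print(jaccard_estimate(A, B))  # ≈ 1/3, the true J(A, B) = 30/90
```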
algorithms  data-structures  sublinear  hashing  wiki  reference  random  tcs  nibble  measure  metric-space  metrics  similarity  PAC  intersection  intersection-connectedness 
february 2017 by nhaliday
A VERY BRIEF REVIEW OF MEASURE THEORY
A brief philosophical discussion:
Measure theory, as much as any branch of mathematics, is an area where it is important to be acquainted with the basic notions and statements, but not desperately important to be acquainted with the detailed proofs, which are often rather unilluminating. One should always have in mind a place where one could go and look if one ever did need to understand a proof: for me, that place is Rudin's Real and Complex Analysis (Rudin's "red book").
gowers  pdf  math  math.CA  math.FA  philosophy  measure  exposition  synthesis  big-picture  hi-order-bits  ergodic  ground-up  summary  roadmap  mathtariat  proofs  nibble  unit  integral  zooming  p:whenever 
february 2017 by nhaliday
The Brunn-Minkowski Inequality | The n-Category Café
For instance, this happens in the plane when A is a horizontal line segment and B is a vertical line segment. There’s obviously no hope of getting an equation for Vol(A+B) in terms of Vol(A) and Vol(B). But this example suggests that we might be able to get an inequality, stating that Vol(A+B) is at least as big as some function of Vol(A) and Vol(B).

The Brunn-Minkowski inequality does this, but it’s really about linearized volume, Vol^{1/n}, rather than volume itself. If length is measured in metres then so is Vol^{1/n}.

...

Nice post, Tom. To readers whose background isn’t in certain areas of geometry and analysis, it’s not obvious that the Brunn–Minkowski inequality is more than a curiosity, the proof of the isoperimetric inequality notwithstanding. So let me add that Brunn–Minkowski is an absolutely vital tool in many parts of geometry, analysis, and probability theory, with extremely diverse applications. Gardner’s survey is a great place to start, but by no means exhaustive.

I’ll also add a couple remarks about regularity issues. You point out that Brunn–Minkowski holds “in the vast generality of measurable sets”, but it may not be initially obvious that this needs to be interpreted as “when A, B, and A+B are all Lebesgue measurable”, since A+B need not be measurable when A and B are (although you can modify the definition of A+B to work for arbitrary measurable A and B; this is discussed by Gardner).
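For reference, the inequality under discussion in its usual form: for nonempty compact (or measurable, with A + B measurable) A, B \subseteq \mathbb{R}^n,

\[ \mathrm{Vol}(A + B)^{1/n} \;\ge\; \mathrm{Vol}(A)^{1/n} + \mathrm{Vol}(B)^{1/n}, \]

where A + B = \{ a + b : a \in A,\ b \in B \}. This is exactly a statement about the linearized volume \mathrm{Vol}^{1/n} mentioned in the post.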
mathtariat  math  estimate  exposition  geometry  math.MG  measure  links  regularity  survey  papers  org:bleg  nibble  homogeneity  brunn-minkowski  curvature  convexity-curvature 
february 2017 by nhaliday

