
A cross-language perspective on speech information rate
Figure 2.

English (IREN = 1.08) shows a higher Information Rate than Vietnamese (IRVI = 1), while Japanese exhibits the lowest IRL value of the sample. Moreover, one can observe that several languages may reach very close IRL values with different encoding strategies: Spanish is characterized by a fast rate of low-density syllables, while Mandarin exhibits a 34% slower syllabic rate with syllables ‘denser’ by a factor of 49%; in the end, their Information Rates differ by only 4%.
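The trade-off the excerpt describes (Information Rate ≈ syllabic rate × per-syllable density) can be sketched with round numbers. The figures below are assumptions loosely echoing the quoted percentages, not the paper's actual data:

```python
# Information Rate (IR) decomposes as syllabic rate × per-syllable density.
# Illustrative round numbers only, echoing the excerpt's Spanish/Mandarin
# comparison (assumed values, not the paper's measurements).
spanish_rate, spanish_density = 1.0, 1.0          # fast, low-density baseline
mandarin_rate = spanish_rate * (1 - 0.34)         # 34% slower syllabic rate
mandarin_density = spanish_density * (1 + 0.49)   # syllables 'denser' by 49%

ir_spanish = spanish_rate * spanish_density
ir_mandarin = mandarin_rate * mandarin_density
gap = abs(ir_spanish - ir_mandarin) / ir_spanish
print(ir_mandarin, gap)
```

With these round numbers the gap comes out near 2%, the same order as the paper's quoted 4%.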
pdf  study  language  foreign-lang  linguistics  pro-rata  bits  communication  efficiency  density  anglo  japan  asia  china  mediterranean  data 
february 2019 by nhaliday
Theory of Self-Reproducing Automata - John von Neumann
Fourth Lecture: THE ROLE OF HIGH AND OF EXTREMELY HIGH COMPLICATION

Comparisons between computing machines and the nervous system. Estimates of size for computing machines, present and near future.

Estimates for size for the human central nervous system. Excursus about the “mixed” character of living organisms. Analog and digital elements. Observations about the “mixed” character of all componentry, artificial as well as natural. Interpretation of the position to be taken with respect to these.

Evaluation of the discrepancy in size between artificial and natural automata. Interpretation of this discrepancy in terms of physical factors. Nature of the materials used.

The probability of the presence of other intellectual factors. The role of complication and the theoretical penetration that it requires.

Questions of reliability and errors reconsidered. Probability of individual errors and length of procedure. Typical lengths of procedure for computing machines and for living organisms--that is, for artificial and for natural automata. Upper limits on acceptable probability of error in individual operations. Compensation by checking and self-correcting features.

Differences of principle in the way in which errors are dealt with in artificial and in natural automata. The “single error” principle in artificial automata. Crudeness of our approach in this case, due to the lack of adequate theory. More sophisticated treatment of this problem in natural automata: The role of the autonomy of parts. Connections between this autonomy and evolution.

- 10^10 neurons in brain, 10^4 vacuum tubes in largest computer at time
- machines faster: ~5 ms from neuron potential to neuron potential vs. 10^-3 ms switching time for a vacuum tube
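Von Neumann's two numbers invite a quick Fermi comparison; the sketch below just multiplies out the figures quoted in the notes above:

```python
# Rough Fermi comparison from the lecture-era numbers quoted above.
neurons = 10**10          # elements in the brain
tubes = 10**4             # vacuum tubes in the largest machine of the time
neuron_time_ms = 5        # neuron potential to neuron potential
tube_time_ms = 1e-3       # vacuum tube switching time

size_ratio = neurons / tubes                  # brain: ~10^6 times more elements
speed_ratio = neuron_time_ms / tube_time_ms   # tubes: ~5000x faster per operation
print(size_ratio, speed_ratio)
```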

https://en.wikipedia.org/wiki/John_von_Neumann#Computing
pdf  article  papers  essay  nibble  math  cs  computation  bio  neuro  neuro-nitgrit  scale  magnitude  comparison  acm  von-neumann  giants  thermo  phys-energy  speed  performance  time  density  frequency  hardware  ems  efficiency  dirty-hands  street-fighting  fermi  estimate  retention  physics  interdisciplinary  multi  wiki  links  people  🔬  atoms  automata  duplication  iteration-recursion  turing  complexity  measure  nature  technology  complex-systems  bits  information-theory  circuits  robust  structure  composition-decomposition  evolution  mutation  axioms  analogy  thinking  input-output  hi-order-bits  coding-theory  flexibility  rigidity 
april 2018 by nhaliday
Is the human brain analog or digital? - Quora
The brain is neither analog nor digital, but works using a signal processing paradigm that has some properties in common with both.
 
Unlike a digital computer, the brain does not use binary logic or binary addressable memory, and it does not perform binary arithmetic. Information in the brain is represented in terms of statistical approximations and estimations rather than exact values. The brain is also non-deterministic and cannot replay instruction sequences with error-free precision. So in all these ways, the brain is definitely not "digital".
 
At the same time, the signals sent around the brain are "either-or" states that are similar to binary. A neuron fires or it does not. These all-or-nothing pulses are the basic language of the brain. So in this sense, the brain is computing using something like binary signals. Instead of 1s and 0s, or "on" and "off", the brain uses "spike" or "no spike" (referring to the firing of a neuron).
q-n-a  qra  expert-experience  neuro  neuro-nitgrit  analogy  deep-learning  nature  discrete  smoothness  IEEE  bits  coding-theory  communication  trivia  bio  volo-avolo  causation  random  order-disorder  ems  models  methodology  abstraction  nitty-gritty  computation  physics  electromag  scale  coarse-fine 
april 2018 by nhaliday
Eternity in six hours: intergalactic spreading of intelligent life and sharpening the Fermi paradox
We do this by demonstrating that traveling between galaxies – indeed even launching a colonisation project for the entire reachable universe – is a relatively simple task for a star-spanning civilization, requiring modest amounts of energy and resources. We start by demonstrating that humanity itself could likely accomplish such a colonisation project in the foreseeable future, should we want to, and then demonstrate that there are millions of galaxies that could have reached us by now, using similar methods. This results in a considerable sharpening of the Fermi paradox.
pdf  study  article  essay  anthropic  fermi  space  expansionism  bostrom  ratty  philosophy  xenobio  ideas  threat-modeling  intricacy  time  civilization  🔬  futurism  questions  paradox  risk  physics  engineering  interdisciplinary  frontier  technology  volo-avolo  dirty-hands  ai  automation  robotics  duplication  iteration-recursion  von-neumann  data  scale  magnitude  skunkworks  the-world-is-just-atoms  hard-tech  ems  bio  bits  speedometer  nature  model-organism  mechanics  phys-energy  relativity  electromag  analysis  spock  nitty-gritty  spreading  hanson  street-fighting  speed  gedanken  nibble 
march 2018 by nhaliday
The Coming Technological Singularity
Within thirty years, we will have the technological
means to create superhuman intelligence. Shortly after,
the human era will be ended.

Is such progress avoidable? If not to be avoided, can
events be guided so that we may survive? These questions
are investigated. Some possible answers (and some further
dangers) are presented.

_What is The Singularity?_

The acceleration of technological progress has been the central
feature of this century. I argue in this paper that we are on the edge
of change comparable to the rise of human life on Earth. The precise
cause of this change is the imminent creation by technology of
entities with greater than human intelligence. There are several means
by which science may achieve this breakthrough (and this is another
reason for having confidence that the event will occur):
o The development of computers that are "awake" and
superhumanly intelligent. (To date, most controversy in the
area of AI relates to whether we can create human equivalence
in a machine. But if the answer is "yes, we can", then there
is little doubt that beings more intelligent can be constructed
shortly thereafter.)
o Large computer networks (and their associated users) may "wake
up" as a superhumanly intelligent entity.
o Computer/human interfaces may become so intimate that users
may reasonably be considered superhumanly intelligent.
o Biological science may find ways to improve upon the natural
human intellect.

The first three possibilities depend in large part on
improvements in computer hardware. Progress in computer hardware has
followed an amazingly steady curve in the last few decades [16]. Based
largely on this trend, I believe that the creation of greater than
human intelligence will occur during the next thirty years. (Charles
Platt [19] has pointed out that AI enthusiasts have been making claims
like this for the last thirty years. Just so I'm not guilty of a
relative-time ambiguity, let me be more specific: I'll be surprised if
this event occurs before 2005 or after 2030.)

What are the consequences of this event? When greater-than-human
intelligence drives progress, that progress will be much more rapid.
In fact, there seems no reason why progress itself would not involve
the creation of still more intelligent entities -- on a still-shorter
time scale. The best analogy that I see is with the evolutionary past:
Animals can adapt to problems and make inventions, but often no faster
than natural selection can do its work -- the world acts as its own
simulator in the case of natural selection. We humans have the ability
to internalize the world and conduct "what if's" in our heads; we can
solve many problems thousands of times faster than natural selection.
Now, by creating the means to execute those simulations at much higher
speeds, we are entering a regime as radically different from our human
past as we humans are from the lower animals.
org:junk  humanity  accelerationism  futurism  prediction  classic  technology  frontier  speedometer  ai  risk  internet  time  essay  rhetoric  network-structure  ai-control  morality  ethics  volo-avolo  egalitarianism-hierarchy  intelligence  scale  giants  scifi-fantasy  speculation  quotes  religion  theos  singularity  flux-stasis  phase-transition  cybernetics  coordination  cooperate-defect  moloch  communication  bits  speed  efficiency  eden-heaven  ecology  benevolence  end-times  good-evil  identity  the-self  whole-partial-many  density 
march 2018 by nhaliday
New Theory Cracks Open the Black Box of Deep Learning | Quanta Magazine
A new idea called the “information bottleneck” is helping to explain the puzzling success of today’s artificial-intelligence algorithms — and might also explain how human brains learn.

sounds like he's just talking about autoencoders?
news  org:mag  org:sci  popsci  announcement  research  deep-learning  machine-learning  acm  information-theory  bits  neuro  model-class  big-surf  frontier  nibble  hmm  signal-noise  deepgoog  expert  ideas  wild-ideas  summary  talks  video  israel  roots  physics  interdisciplinary  ai  intelligence  shannon  giants  arrows  preimage  lifts-projections  composition-decomposition  characterization  markov  gradient-descent  papers  liner-notes  experiment  hi-order-bits  generalization  expert-experience  explanans  org:inst  speedometer 
september 2017 by nhaliday
My Old Boss | West Hunter
Back in those days, there was interest in finding better ways to communicate with a submerged submarine.  One method under consideration used an orbiting laser to send pulses of light over the ocean, using a special wavelength, for which there was a very good detector.  Since even the people running the laser might not know the boomer’s exact location, while weather and such might also interfere,  my old boss was trying to figure out methods of reliably transmitting messages when some pulses were randomly lost – which is of course a well-developed subject,  error-correcting codes. But he didn’t know that.  Hadn’t even heard of it.

Around this time, my old boss was flying from LA to Washington, and started talking with his seatmate about this  submarine communication problem.  His seatmate – Irving S. Reed – politely said that he had done a little work on some similar problems.  During this conversation, my informant, a fellow minion sitting behind my old boss, was doggedly choking back hysterical laughter, not wanting to interrupt this very special conversation.
west-hunter  scitariat  stories  reflection  working-stiff  engineering  dirty-hands  electromag  communication  coding-theory  giants  bits  management  signal-noise 
september 2017 by nhaliday
Edge.org: 2017 : WHAT SCIENTIFIC TERM OR CONCEPT OUGHT TO BE MORE WIDELY KNOWN?
highlights:
- the genetic book of the dead [Dawkins]
- complementarity [Frank Wilczek]
- relative information
- effective theory [Lisa Randall]
- affordances [Dennett]
- spontaneous symmetry breaking
- relatedly, equipoise [Nicholas Christakis]
- case-based reasoning
- population reasoning (eg, common law)
- criticality [Cesar Hidalgo]
- Haldane's law of the right size (!SCALE!)
- polygenic scores
- non-ergodic
- ansatz
- state [Aaronson]: http://www.scottaaronson.com/blog/?p=3075
- transfer learning
- effect size
- satisficing
- scaling
- the breeder's equation [Greg Cochran]
- impedance matching

soft:
- reciprocal altruism
- life history [Plomin]
- intellectual honesty [Sam Harris]
- coalitional instinct (interesting claim: building coalitions around "rationality" actually makes it more difficult to update on new evidence as it makes you look like a bad person, eg, the Cathedral)
basically same: https://twitter.com/ortoiseortoise/status/903682354367143936

more: https://www.edge.org/conversation/john_tooby-coalitional-instincts

interesting timing. how woke is this dude?
org:edge  2017  technology  discussion  trends  list  expert  science  top-n  frontier  multi  big-picture  links  the-world-is-just-atoms  metameta  🔬  scitariat  conceptual-vocab  coalitions  q-n-a  psychology  social-psych  anthropology  instinct  coordination  duty  power  status  info-dynamics  cultural-dynamics  being-right  realness  cooperate-defect  westminster  chart  zeitgeist  rot  roots  epistemic  rationality  meta:science  analogy  physics  electromag  geoengineering  environment  atmosphere  climate-change  waves  information-theory  bits  marginal  quantum  metabuch  homo-hetero  thinking  sapiens  genetics  genomics  evolution  bio  GT-101  low-hanging  minimum-viable  dennett  philosophy  cog-psych  neurons  symmetry  humility  life-history  social-structure  GWAS  behavioral-gen  biodet  missing-heritability  ergodic  machine-learning  generalization  west-hunter  population-genetics  methodology  blowhards  spearhead  group-level  scale  magnitude  business  scaling-tech  tech  business-models  optimization  effect-size  aaronson  state  bare-hands  problem-solving  politics 
may 2017 by nhaliday
[1705.03394] That is not dead which can eternal lie: the aestivation hypothesis for resolving Fermi's paradox
If a civilization wants to maximize computation it appears rational to aestivate until the far future in order to exploit the low temperature environment: this can produce a 10^30 multiplier of achievable computation. We hence suggest the "aestivation hypothesis": the reason we are not observing manifestations of alien civilizations is that they are currently (mostly) inactive, patiently waiting for future cosmic eras. This paper analyzes the assumptions going into the hypothesis and how physical law and observational evidence constrain the motivations of aliens compatible with the hypothesis.

http://aleph.se/andart2/space/the-aestivation-hypothesis-popular-outline-and-faq/

simpler explanation (just different math for Drake equation):
Dissolving the Fermi Paradox: http://www.jodrellbank.manchester.ac.uk/media/eps/jodrell-bank-centre-for-astrophysics/news-and-events/2017/uksrn-slides/Anders-Sandberg---Dissolving-Fermi-Paradox-UKSRN.pdf
http://marginalrevolution.com/marginalrevolution/2017/07/fermi-paradox-resolved.html
Overall the argument is that point estimates should not be shoved into a Drake equation and then multiplied together, as that requires excess certainty and masks much of the ambiguity of our knowledge about the distributions. Instead, a Bayesian approach should be used, after which the fate of humanity looks much better.
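The "distributions, not point estimates" move can be illustrated with a toy Monte Carlo over the Drake factors. The log-uniform ranges below are made-up stand-ins for illustration, not the priors Sandberg et al. actually use:

```python
import random, math

# Toy Monte Carlo sketch of the argument above: draw each Drake factor
# from a wide distribution instead of plugging in a point estimate.
# All ranges are illustrative assumptions, not the paper's priors.
random.seed(0)

def log_uniform(lo, hi):
    return math.exp(random.uniform(math.log(lo), math.log(hi)))

def draw_N():
    R_star = log_uniform(1, 100)      # star formation rate
    f_p    = log_uniform(0.1, 1)      # fraction of stars with planets
    n_e    = log_uniform(0.1, 1)      # habitable planets per system
    f_l    = log_uniform(1e-30, 1)    # abiogenesis -- the hugely uncertain term
    f_i    = log_uniform(1e-3, 1)     # intelligence
    f_c    = log_uniform(1e-2, 1)     # detectable communication
    L      = log_uniform(1e2, 1e10)   # civilization lifetime (years)
    return R_star * f_p * n_e * f_l * f_i * f_c * L

draws = [draw_N() for _ in range(100_000)]
p_alone = sum(n < 1 for n in draws) / len(draws)
print(p_alone)
```

Even though each factor's range includes optimistic values, a substantial fraction of draws gives N < 1 — the ambiguity the point-estimate version masks.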

Life Versus Dark Energy: How An Advanced Civilization Could Resist the Accelerating Expansion of the Universe: https://arxiv.org/abs/1806.05203
The presence of dark energy in our universe is causing space to expand at an accelerating rate. As a result, over the next approximately 100 billion years, all stars residing beyond the Local Group will fall beyond the cosmic horizon and become not only unobservable, but entirely inaccessible, thus limiting how much energy could one day be extracted from them. Here, we consider the likely response of a highly advanced civilization to this situation. In particular, we argue that in order to maximize its access to useable energy, a sufficiently advanced civilization would choose to expand rapidly outward, build Dyson Spheres or similar structures around encountered stars, and use the energy that is harnessed to accelerate those stars away from the approaching horizon and toward the center of the civilization. We find that such efforts will be most effective for stars with masses in the range of M∼(0.2−1)M⊙, and could lead to the harvesting of stars within a region extending out to several tens of Mpc in radius, potentially increasing the total amount of energy that is available to a future civilization by a factor of several thousand. We also discuss the observable signatures of a civilization elsewhere in the universe that is currently in this state of stellar harvesting.
preprint  study  essay  article  bostrom  ratty  anthropic  philosophy  space  xenobio  computation  physics  interdisciplinary  ideas  hmm  cocktail  temperature  thermo  information-theory  bits  🔬  threat-modeling  time  scale  insight  multi  commentary  liner-notes  pdf  slides  error  probability  ML-MAP-E  composition-decomposition  econotariat  marginal-rev  fermi  risk  org:mat  questions  paradox  intricacy  multiplicative  calculation  street-fighting  methodology  distribution  expectancy  moments  bayesian  priors-posteriors  nibble  measurement  existence  technology  geoengineering  magnitude  spatial  density  spreading  civilization  energy-resources  phys-energy  measure  direction  speculation  structure 
may 2017 by nhaliday
Octopuses Do Something Really Strange to Their Genes - The Atlantic
But to what end? RNA editing is still mysterious, and its purpose unclear. Technically, an animal could use it to change the nature of its proteins without altering the underlying DNA instructions. But in practice, this kind of recoding is extremely rare. Only about 3 percent of human genes are ever edited in this way, and the changes are usually restricted to the parts of RNA that are cut out and discarded. To the extent that it happens, it doesn’t seem to be adaptive.

In cephalopods, it’s a different story. Back in 2015, Rosenthal and Eisenberg discovered that RNA editing has gone wild in the longfin inshore squid—a foot-long animal that’s commonly used in neuroscience research. While a typical mammal edits its RNA at just a few hundred sites, the squid was making some 57,000 such edits. These changes weren’t happening in discarded sections of RNA, but in the ones that actually go towards building proteins—the so-called coding regions. They were ten times more common in the squid’s neurons than in its other tissues, and they disproportionately affected proteins involved in its nervous system.
news  org:mag  popsci  oceans  nature  genetics  genomics  ed-yong  cocktail  neuro  bits  journos-pundits 
april 2017 by nhaliday
Evolution Runs Faster on Short Timescales | Quanta Magazine
But if more splashes of paint appear on a wall, they will gradually conceal some of the original color beneath new layers. Similarly, evolution and natural selection write over the initial mutations that appear over short timescales. Over millions of years, an A in the DNA may become a T, but in the intervening time it may be a C or a G for a while. Ho believes that this mutational saturation is a major cause of what he calls the time-dependent rate phenomenon.

“Think of it like the stock market,” he said. Look at the hourly or daily fluctuations of Standard & Poor’s 500 index, and it will appear wildly unstable, swinging this way and that. Zoom out, however, and the market appears much more stable as the daily shifts start to average out. In the same way, the forces of natural selection weed out the less advantageous and more deleterious mutations over time.
news  org:mag  org:sci  evolution  bio  nature  mutation  selection  time  methodology  stylized-facts  genetics  population-genetics  genomics  speed  pigeonhole-markov  bits  nibble  org:inst 
march 2017 by nhaliday
6.896: Essential Coding Theory
- probabilistic method and Chernoff bound for Shannon coding
- probabilistic method for asymptotically good Hamming codes (Gilbert coding)
- sparsity used for LDPC codes
mit  course  yoga  tcs  complexity  coding-theory  math.AG  fields  polynomials  pigeonhole-markov  linear-algebra  probabilistic-method  lecture-notes  bits  sparsity  concentration-of-measure  linear-programming  linearity  expanders  hamming  pseudorandomness  crypto  rigorous-crypto  communication-complexity  no-go  madhu-sudan  shannon  unit  p:**  quixotic 
february 2017 by nhaliday
What is the relationship between information theory and Coding theory? - Quora
basically:
- finite vs. asymptotic
- combinatorial vs. probabilistic (lotsa overlap there)
- worst-case (Hamming) vs. distributional (Shannon)

Information and coding theory most often appear together in the subject of error correction over noisy channels. Historically, they were born at almost exactly the same time - both Richard Hamming and Claude Shannon were working at Bell Labs when this happened. Information theory tends to heavily use tools from probability theory (together with an "asymptotic" way of thinking about the world), while traditional "algebraic" coding theory tends to employ mathematics that are much more finite sequence length/combinatorial in nature, including linear algebra over Galois Fields. The emergence in the late 90s and first decade of 2000 of codes over graphs blurred this distinction though, as code classes such as low density parity check codes employ both asymptotic analysis and random code selection techniques which have counterparts in information theory.

They do not subsume each other. Information theory touches on many other aspects that coding theory does not, and vice-versa. Information theory also touches on compression (lossy & lossless), statistics (e.g. large deviations), modeling (e.g. Minimum Description Length). Coding theory pays a lot of attention to sphere packing and coverings for finite length sequences - information theory addresses these problems (channel & lossy source coding) only in an asymptotic/approximate sense.
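The worst-case (Hamming) side of that contrast is easy to make concrete: a Hamming(7,4) code corrects any single flipped bit deterministically, with no probabilistic model of the channel. A minimal sketch:

```python
# Hamming(7,4): the finite/combinatorial side of the story -- any single
# bit flip in a 7-bit codeword is corrected exactly, no probability involved.
# Parity bits sit at positions 1, 2, 4 of the 1-indexed codeword.

def encode(d):  # d = [d1, d2, d3, d4]
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]  # positions 1..7

def decode(c):
    # Each parity check covers the positions whose (1-based) index has that
    # bit set; the syndrome spells out the position of a single error.
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3
    c = c[:]
    if syndrome:
        c[syndrome - 1] ^= 1  # flip the erroneous bit back
    return [c[2], c[4], c[5], c[6]]

word = [1, 0, 1, 1]
code = encode(word)
for i in range(7):            # corrupt each position in turn
    corrupted = code[:]
    corrupted[i] ^= 1
    assert decode(corrupted) == word
print("all single-bit errors corrected")
```

Shannon-style analysis would instead ask what rate is achievable as block length goes to infinity over a random channel — the asymptotic/distributional view described above.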
q-n-a  qra  math  acm  tcs  information-theory  coding-theory  big-picture  comparison  confusion  explanation  linear-algebra  polynomials  limits  finiteness  math.CO  hi-order-bits  synthesis  probability  bits  hamming  shannon  intricacy  nibble  s:null  signal-noise 
february 2017 by nhaliday
Information Geometry (Part 16) | Azimuth
While preparing this talk, I discovered a cool fact. I doubt it’s new, but I haven’t exactly seen it elsewhere. I came up with it while trying to give a precise and general statement of ‘Fisher’s fundamental theorem of natural selection’. I won’t start by explaining that theorem, since my version looks rather different than Fisher’s, and I came up with mine precisely because I had trouble understanding his. I’ll say a bit more about this at the end.

Here’s my version:
The square of the rate at which a population learns information is the variance of its fitness.
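Baez's information-geometric version is subtler, but the classic statement it refines — under replicator dynamics, the rate of change of mean fitness equals the variance in fitness — is easy to check numerically. The fitnesses and frequencies below are arbitrary illustrative values:

```python
# Numerical check of the classic Fisher statement behind Baez's version:
# under replicator dynamics, d(mean fitness)/dt equals the variance of
# fitness. One small Euler step; three types with assumed fixed fitnesses.
f = [1.0, 2.0, 3.5]   # fitness of each type (illustrative)
p = [0.5, 0.3, 0.2]   # current population frequencies
dt = 1e-6

mean = sum(pi * fi for pi, fi in zip(p, f))
var = sum(pi * (fi - mean) ** 2 for pi, fi in zip(p, f))

# One Euler step of the replicator equation dp_i/dt = p_i (f_i - mean)
p_next = [pi + dt * pi * (fi - mean) for pi, fi in zip(p, f)]
mean_next = sum(pi * fi for pi, fi in zip(p_next, f))

rate = (mean_next - mean) / dt
print(rate, var)   # the two numbers agree to within the step size
```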
baez  mathtariat  evolution  bio  genetics  population-genetics  bits  interdisciplinary  models  exposition  math.DS  giants  information-theory  entropy-like  org:bleg  nibble  fisher  EGT  dynamical 
february 2017 by nhaliday
Shtetl-Optimized » Blog Archive » Why I Am Not An Integrated Information Theorist (or, The Unconscious Expander)
In my opinion, how to construct a theory that tells us which physical systems are conscious and which aren’t—giving answers that agree with “common sense” whenever the latter renders a verdict—is one of the deepest, most fascinating problems in all of science. Since I don’t know a standard name for the problem, I hereby call it the Pretty-Hard Problem of Consciousness. Unlike with the Hard Hard Problem, I don’t know of any philosophical reason why the Pretty-Hard Problem should be inherently unsolvable; but on the other hand, humans seem nowhere close to solving it (if we had solved it, then we could reduce the abortion, animal rights, and strong AI debates to “gentlemen, let us calculate!”).

Now, I regard IIT as a serious, honorable attempt to grapple with the Pretty-Hard Problem of Consciousness: something concrete enough to move the discussion forward. But I also regard IIT as a failed attempt on the problem. And I wish people would recognize its failure, learn from it, and move on.

In my view, IIT fails to solve the Pretty-Hard Problem because it unavoidably predicts vast amounts of consciousness in physical systems that no sane person would regard as particularly “conscious” at all: indeed, systems that do nothing but apply a low-density parity-check code, or other simple transformations of their input data. Moreover, IIT predicts not merely that these systems are “slightly” conscious (which would be fine), but that they can be unboundedly more conscious than humans are.

To justify that claim, I first need to define Φ. Strikingly, despite the large literature about Φ, I had a hard time finding a clear mathematical definition of it—one that not only listed formulas but fully defined the structures that the formulas were talking about. Complicating matters further, there are several competing definitions of Φ in the literature, including ΦDM (discrete memoryless), ΦE (empirical), and ΦAR (autoregressive), which apply in different contexts (e.g., some take time evolution into account and others don’t). Nevertheless, I think I can define Φ in a way that will make sense to theoretical computer scientists. And crucially, the broad point I want to make about Φ won’t depend much on the details of its formalization anyway.

We consider a discrete system in a state x=(x1,…,xn)∈Sn, where S is a finite alphabet (the simplest case is S={0,1}). We imagine that the system evolves via an “updating function” f:Sn→Sn. Then the question that interests us is whether the xi‘s can be partitioned into two sets A and B, of roughly comparable size, such that the updates to the variables in A don’t depend very much on the variables in B and vice versa. If such a partition exists, then we say that the computation of f does not involve “global integration of information,” which on Tononi’s theory is a defining aspect of consciousness.
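Aaronson's partition test can be brute-forced for toy systems. The sketch below uses exact independence rather than his "don't depend very much," and checks all bipartitions rather than only roughly-balanced ones — a simplification for illustration, not his definition of Φ:

```python
from itertools import product, combinations

# Brute-force sketch of the partition test described above: for a tiny
# update function f on {0,1}^n, look for a split of the coordinates into
# A and B such that updates inside A never depend on B, and vice versa.
# If such a split exists, the system does no "global integration".

def depends_only_on(f, coord, allowed, n):
    """Does output coordinate `coord` of f depend only on inputs in `allowed`?"""
    for x in product([0, 1], repeat=n):
        for j in range(n):
            if j in allowed:
                continue
            y = list(x)
            y[j] ^= 1   # flip a coordinate outside `allowed`
            if f(tuple(x))[coord] != f(tuple(y))[coord]:
                return False
    return True

def decomposable(f, n):
    coords = set(range(n))
    for k in range(1, n // 2 + 1):
        for A in map(set, combinations(coords, k)):
            B = coords - A
            if all(depends_only_on(f, i, A, n) for i in A) and \
               all(depends_only_on(f, i, B, n) for i in B):
                return True
    return False

# Two independent 2-bit swaps: decomposable into {0,1} and {2,3}.
f_split = lambda x: (x[1], x[0], x[3], x[2])
# A full rotation couples every coordinate across any cut: not decomposable.
f_rotate = lambda x: (x[3], x[0], x[1], x[2])
print(decomposable(f_split, 4), decomposable(f_rotate, 4))
```

Aaronson's point is then that simple expander-like maps — an LDPC encoder, say — fail this test just as badly as a brain does, which is why Φ overcounts.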
aaronson  tcstariat  philosophy  dennett  interdisciplinary  critique  nibble  org:bleg  within-without  the-self  neuro  psychology  cog-psych  metrics  nitty-gritty  composition-decomposition  complex-systems  cybernetics  bits  information-theory  entropy-like  forms-instances  empirical  walls  arrows  math.DS  structure  causation  quantitative-qualitative  number  extrema  optimization  abstraction  explanation  summary  degrees-of-freedom  whole-partial-many  network-structure  systematic-ad-hoc  tcs  complexity  hardness  no-go  computation  measurement  intricacy  examples  counterexample  coding-theory  linear-algebra  fields  graphs  graph-theory  expanders  math  math.CO  properties  local-global  intuition  error  definition  coupling-cohesion 
january 2017 by nhaliday
Are You Living in a Computer Simulation?
Bostrom's anthropic arguments

https://www.jetpress.org/volume7/simulation.htm
In sum, if your descendants might make simulations of lives like yours, then you might be living in a simulation. And while you probably cannot learn much detail about the specific reasons for and nature of the simulation you live in, you can draw general conclusions by making analogies to the types and reasons of simulations today. If you might be living in a simulation then all else equal it seems that you should care less about others, live more for today, make your world look likely to become eventually rich, expect to and try to participate in pivotal events, be entertaining and praiseworthy, and keep the famous people around you happy and interested in you.

Theological Implications of the Simulation Argument: https://www.tandfonline.com/doi/pdf/10.1080/15665399.2010.10820012
Nick Bostrom’s Simulation Argument (SA) has many intriguing theological implications. We work out some of them here. We show how the SA can be used to develop novel versions of the Cosmological and Design Arguments. We then develop some of the affinities between Bostrom’s naturalistic theogony and more traditional theological topics. We look at the resurrection of the body and at theodicy. We conclude with some reflections on the relations between the SA and Neoplatonism (friendly) and between the SA and theism (less friendly).

https://www.gwern.net/Simulation-inferences
lesswrong  philosophy  weird  idk  thinking  insight  links  summary  rationality  ratty  bostrom  sampling-bias  anthropic  theos  simulation  hanson  decision-making  advice  mystic  time-preference  futurism  letters  entertainment  multi  morality  humility  hypocrisy  wealth  malthus  power  drama  gedanken  pdf  article  essay  religion  christianity  the-classics  big-peeps  iteration-recursion  aesthetics  nietzschean  axioms  gwern  analysis  realness  von-neumann  space  expansionism  duplication  spreading  sequential  cs  computation  outcome-risk  measurement  empirical  questions  bits  information-theory  efficiency  algorithms  physics  relativity  ems  neuro  data  scale  magnitude  complexity  risk  existence  threat-modeling  civilization  forms-instances 
september 2016 by nhaliday
Why Information Grows – Paul Romer
thinking like a physicist:

The key element in thinking like a physicist is being willing to push simultaneously to extreme levels of abstraction and specificity. This sounds paradoxical until you see it in action. Then it seems obvious. Abstraction means that you strip away inessential detail. Specificity means that you take very seriously the things that remain.

Abstraction vs. Radical Specificity: https://paulromer.net/abstraction-vs-radical-specificity/
books  summary  review  economics  growth-econ  interdisciplinary  hmm  physics  thinking  feynman  tradeoffs  paul-romer  econotariat  🎩  🎓  scholar  aphorism  lens  signal-noise  cartoons  skeleton  s:**  giants  electromag  mutation  genetics  genomics  bits  nibble  stories  models  metameta  metabuch  problem-solving  composition-decomposition  structure  abstraction  zooming  examples  knowledge  human-capital  behavioral-econ  network-structure  info-econ  communication  learning  information-theory  applications  volo-avolo  map-territory  externalities  duplication  spreading  property-rights  lattice  multi  government  polisci  policy  counterfactual  insight  paradox  parallax  reduction  empirical  detail-architecture  methodology  crux  visual-understanding  theory-practice  matching  analytical-holistic  branches  complement-substitute  local-global  internet  technology  cost-benefit  investing  micro  signaling  limits  public-goodish  interpretation  elegance 
september 2016 by nhaliday
Academics Make Theoretical Breakthrough in Random Number Generation | Hacker News
- interesting to see how programmers process theoretical results
- the result is that one by David Zuckerman that Henry Yuen was talking about
pseudorandomness  tcs  announcement  commentary  hn  random  bits  rand-complexity 
may 2016 by nhaliday
Kullback–Leibler divergence - Wikipedia, the free encyclopedia
see https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence#Motivation especially

https://twitter.com/SimonDeDeo/status/993881889143447552
https://archive.is/hZcVb
Kullback-Leibler divergence has an enormous number of interpretations and uses: psychological, epistemic, thermodynamic, statistical, computational, geometrical... I am pretty sure I could teach an entire graduate seminar on it.
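A minimal numeric illustration of the divergence and its asymmetry (distributions chosen arbitrarily):

```python
import math

# D_KL(P || Q) = sum_i p_i * log2(p_i / q_i), in bits: the expected extra
# bits needed to code samples from P using a code optimized for Q.
def kl(p, q):
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.5, 0.5]   # fair coin
q = [0.9, 0.1]   # heavily biased coin
print(kl(p, q), kl(q, p))   # asymmetric: the two directions differ
```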
information-theory  math  wiki  probability  concept  reference  acm  hmm  atoms  operational  characterization  metrics  bits  entropy-like  nibble  properties  multi  twitter  social  discussion  backup 
may 2016 by nhaliday
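The coding-theoretic motivation linked above can be sketched in a few lines: D(P‖Q), measured in bits, is the expected number of extra bits per sample paid for encoding draws from P with a code optimized for Q. A minimal illustration for discrete distributions (the function name `kl_divergence` is my own, not from the linked article):

```python
import math

def kl_divergence(p, q):
    """D(P || Q) in bits for discrete distributions given as
    probability sequences over the same support; terms with
    p_i = 0 contribute nothing by convention."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Encoding a fair coin (P) with a code tuned for a 0.9/0.1 coin (Q)
# costs about 0.74 extra bits per flip on average:
p = [0.5, 0.5]
q = [0.9, 0.1]
print(kl_divergence(p, q))  # ≈ 0.737
```

Note the asymmetry: D(P‖Q) ≠ D(Q‖P) in general, which is one reason the quantity supports so many distinct interpretations, and D(P‖Q) = 0 exactly when P = Q.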

