nhaliday + hi-order-bits   129

Theory of Self-Reproducing Automata - John von Neumann

Comparisons between computing machines and the nervous systems. Estimates of size for computing machines, present and near future.

Estimates of size for the human central nervous system. Excursus about the “mixed” character of living organisms. Analog and digital elements. Observations about the “mixed” character of all componentry, artificial as well as natural. Interpretation of the position to be taken with respect to these.

Evaluation of the discrepancy in size between artificial and natural automata. Interpretation of this discrepancy in terms of physical factors. Nature of the materials used.

The probability of the presence of other intellectual factors. The role of complication and the theoretical penetration that it requires.

Questions of reliability and errors reconsidered. Probability of individual errors and length of procedure. Typical lengths of procedure for computing machines and for living organisms--that is, for artificial and for natural automata. Upper limits on acceptable probability of error in individual operations. Compensation by checking and self-correcting features.

Differences of principle in the way in which errors are dealt with in artificial and in natural automata. The “single error” principle in artificial automata. Crudeness of our approach in this case, due to the lack of adequate theory. More sophisticated treatment of this problem in natural automata: The role of the autonomy of parts. Connections between this autonomy and evolution.

- 10^10 neurons in brain, 10^4 vacuum tubes in largest computer at time
- machines faster: 5 ms from neuron potential to neuron potential, 10^-3 ms for vacuum tubes
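The two notes above combine into a quick Fermi check (a sketch; the figures are the rough orders of magnitude from the notes, not precise measurements):

```python
# Fermi-style check of the two notes above (orders of magnitude only).
neurons = 1e10            # ~10^10 neurons in the human brain
tubes = 1e4               # ~10^4 vacuum tubes in the largest machine of the era
neuron_cycle_s = 5e-3     # ~5 ms from neuron potential to neuron potential
tube_cycle_s = 1e-6       # ~10^-3 ms switching time for a vacuum tube

size_ratio = neurons / tubes                  # brain has ~10^6x more elements
speed_ratio = neuron_cycle_s / tube_cycle_s   # tubes are ~5000x faster

print(f"size ratio (brain/machine): {size_ratio:.0e}")
print(f"speed ratio (tube vs neuron): {speed_ratio:.0f}x")
```

So the natural automaton wins on component count by ~10^6 while the artificial one wins on switching speed by ~5x10^3, which is the discrepancy von Neumann sets out to interpret.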

pdf  article  papers  essay  nibble  math  cs  computation  bio  neuro  neuro-nitgrit  scale  magnitude  comparison  acm  von-neumann  giants  thermo  phys-energy  speed  performance  time  density  frequency  hardware  ems  efficiency  dirty-hands  street-fighting  fermi  estimate  retention  physics  interdisciplinary  multi  wiki  links  people  🔬  atoms  automata  duplication  iteration-recursion  turing  complexity  measure  nature  technology  complex-systems  bits  information-theory  circuits  robust  structure  composition-decomposition  evolution  mutation  axioms  analogy  thinking  input-output  hi-order-bits  coding-theory  flexibility  rigidity 
april 2018 by nhaliday
National Defense Strategy of the United States of America
National Defense Strategy released with clear priority: Stay ahead of Russia and China: https://www.defensenews.com/breaking-news/2018/01/19/national-defense-strategy-released-with-clear-priority-stay-ahead-of-russia-and-china/

A saner allocation of US 'defense' funds would be something like 10% nuclear trident, 10% border patrol, & spend the rest inoculating against cyber & biological attacks.
and since the latter 2 are hopeless, just refund 80% of the defense budget.
Monopoly on force at sea is arguably worthwhile.
Given the value of the US market to any would-be adversary, I'd be willing to roll the dice & let it ride.
subs are part of the triad, surface ships are sitting ducks this day and age
But nobody does sink them, precisely because of the monopoly on force. It's a path-dependent equilibrium where (for now) no other actor can reap the benefits of destabilizing the monopoly, and we're probably drastically underestimating the ramifications if/when it goes away.
can lethal autonomous weapon systems get some
pdf  white-paper  org:gov  usa  government  trump  policy  nascent-state  foreign-policy  realpolitik  authoritarianism  china  asia  russia  antidemos  military  defense  world  values  enlightenment-renaissance-restoration-reformation  democracy  chart  politics  current-events  sulla  nuclear  arms  deterrence  strategy  technology  sky  oceans  korea  communism  innovation  india  europe  EU  MENA  multi  org:foreign  war  great-powers  thucydides  competition  twitter  social  discussion  backup  gnon  🐸  markets  trade  nationalism-globalism  equilibrium  game-theory  tactics  top-n  hi-order-bits  security  hacker  biotech  terrorism  disease  parasites-microbiome  migration  walls  internet 
january 2018 by nhaliday
What are the Laws of Biology?
The core finding of systems biology is that only a very small subset of possible network motifs is actually used and that these motifs recur in all kinds of different systems, from transcriptional to biochemical to neural networks. This is because only those arrangements of interactions effectively perform some useful operation, which underlies some necessary function at a cellular or organismal level. There are different arrangements for input summation, input comparison, integration over time, high-pass or low-pass filtering, negative auto-regulation, coincidence detection, periodic oscillation, bistability, rapid onset response, rapid offset response, turning a graded signal into a sharp pulse or boundary, and so on, and so on.

These are all familiar concepts and designs in engineering and computing, with well-known properties. In living organisms there is one other general property that the designs must satisfy: robustness. They have to work with noisy components, at a scale that’s highly susceptible to thermal noise and environmental perturbations. Of the subset of designs that perform some operation, only a much smaller subset will do it robustly enough to be useful in a living organism. That is, they can still perform their particular functions in the face of noisy or fluctuating inputs or variation in the number of components constituting the elements of the network itself.
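A minimal sketch of one motif named in the excerpt, negative auto-regulation, and of the robustness claim. All parameter values (alpha, K, the Hill exponent, the betas) are invented purely for illustration:

```python
# Toy model of negative auto-regulation: a gene represses its own production.
# dx/dt = production(x) - alpha * x, integrated by forward Euler.
# All parameter values here are invented purely for illustration.
def steady_state(beta, autoregulated, steps=20000, dt=0.001, alpha=1.0, K=1.0):
    x = 0.0
    for _ in range(steps):
        production = beta / (1 + (x / K) ** 4) if autoregulated else beta
        x += dt * (production - alpha * x)
    return x

# Doubling the production rate beta (e.g. due to noise) doubles the plain
# gene's steady state, but barely moves the auto-regulated one -- one concrete
# sense in which the motif still functions with noisy components.
plain = steady_state(2.0, False) / steady_state(1.0, False)
nar = steady_state(2.0, True) / steady_state(1.0, True)
print(f"plain fold-change: {plain:.2f}, auto-regulated fold-change: {nar:.2f}")
```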
scitariat  reflection  proposal  ideas  thinking  conceptual-vocab  lens  bio  complex-systems  selection  evolution  flux-stasis  network-structure  structure  composition-decomposition  IEEE  robust  signal-noise  perturbation  interdisciplinary  graphs  circuits  🌞  big-picture  hi-order-bits  nibble  synthesis 
november 2017 by nhaliday
What is the connection between special and general relativity? - Physics Stack Exchange
Special relativity is the "special case" of general relativity where spacetime is flat. The speed of light is essential to both.
nibble  q-n-a  overflow  physics  relativity  explanation  synthesis  hi-order-bits  ground-up  gravity  summary  aphorism  differential  geometry 
november 2017 by nhaliday
What is the difference between general and special relativity? - Quora
General Relativity is, quite simply, needed to explain gravity.

Special Relativity is the special case of GR, when the metric is flat — which means no gravity.

You need General Relativity when the metric gets all curvy, and when things start to experience gravitation.
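The "flat metric" statement in both answers can be written out explicitly; a standard-notation sketch:

```latex
% Special relativity: the fixed, flat Minkowski metric
ds^2 = \eta_{\mu\nu}\,dx^\mu dx^\nu = -c^2\,dt^2 + dx^2 + dy^2 + dz^2 .
% General relativity: the metric becomes a dynamical field g_{\mu\nu}(x),
ds^2 = g_{\mu\nu}(x)\,dx^\mu dx^\nu ,
% with curvature sourced by energy-momentum (Einstein field equations):
R_{\mu\nu} - \tfrac{1}{2}R\,g_{\mu\nu} = \frac{8\pi G}{c^4}\,T_{\mu\nu} .
% SR is the special case g_{\mu\nu} = \eta_{\mu\nu} everywhere: zero
% curvature, hence no gravity; the constant c appears in both metrics.
```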
nibble  q-n-a  qra  explanation  physics  relativity  synthesis  hi-order-bits  ground-up  gravity  summary  aphorism  differential  geometry 
november 2017 by nhaliday
If Quantum Computers are not Possible Why are Classical Computers Possible? | Combinatorics and more
As most of my readers know, I regard quantum computing as unrealistic. You can read more about it in my Notices AMS paper and its extended version (see also this post) and in the discussion of Puzzle 4 from my recent puzzles paper (see also this post). The amazing progress and huge investment in quantum computing (that I presented and update routinely in this post) will put my analysis to test in the next few years.
tcstariat  mathtariat  org:bleg  nibble  tcs  cs  computation  quantum  volo-avolo  no-go  contrarianism  frontier  links  quantum-info  analogy  comparison  synthesis  hi-order-bits  speedometer  questions  signal-noise 
november 2017 by nhaliday
The Constitutional Economics of Autocratic Succession on JSTOR
Abstract. The paper extends and empirically tests Gordon Tullock’s public choice theory of the nature of autocracy. A simple model of the relationship between constitutional rules governing succession in autocratic regimes and the occurrence of coups against autocrats is sketched. The model is applied to a case study of coups against monarchs in Denmark in the period ca. 935–1849. A clear connection is found between the specific constitutional rules governing succession and the frequency of coups. Specifically, the introduction of automatic hereditary succession in an autocracy provides stability and limits the number of coups conducted by contenders.

Table 2. General constitutional rules of succession, Denmark ca. 935–1849

To see this the data may be divided into three categories of constitutional rules of succession: One of open succession (for the periods 935–1165 and 1326–40), one of appointed succession combined with election (for the periods 1165–1326 and 1340–1536), and one of more or less formalized hereditary succession (1536–1849). On the basis of this categorization the data have been summarized in Table 3.

validity of empirics is a little sketchy

The graphic novel it is based on is insightful, illustrates Tullock's game-theoretic, asymmetric information views on autocracy.

Conclusions from Gordon Tullock's book Autocracy, pp. 211-215: https://astro.temple.edu/~bstavis/courses/tulluck.htm
study  polisci  political-econ  economics  cracker-econ  big-peeps  GT-101  info-econ  authoritarianism  antidemos  government  micro  leviathan  elite  power  institutions  garett-jones  multi  econotariat  twitter  social  commentary  backup  art  film  comics  fiction  competition  europe  nordic  empirical  evidence-based  incentives  legacy  peace-violence  order-disorder  🎩  organizing  info-dynamics  history  medieval  law  axioms  stylized-facts  early-modern  data  longitudinal  flux-stasis  shift  revolution  correlation  org:junk  org:edu  summary  military  war  top-n  hi-order-bits  feudal  democracy  sulla 
october 2017 by nhaliday
“Editor’s Introduction to The New Economic History and the Industrial Revolution,” J. Mokyr (1998) | A Fine Theorem
I taught a fun three hours on the Industrial Revolution in my innovation PhD course this week. The absolutely incredible change in the condition of mankind that began in a tiny corner of Europe in an otherwise unremarkable 70-or-so years is totally fascinating. Indeed, the Industrial Revolution and its aftermath are so important to human history that I find it strange that we give people PhDs in social science without requiring at least some study of what happened.

My post today draws heavily on Joel Mokyr’s lovely, if lengthy, summary of what we know about the period. You really should read the whole thing, but if you know nothing about the IR, there are really five facts of great importance which you should be aware of.

1) The world was absurdly poor from the dawn of mankind until the late 1800s, everywhere.
2) The average person did not become richer, nor was overall economic growth particularly spectacular, during the Industrial Revolution; indeed, wages may have fallen between 1760 and 1830.
3) Major macro inventions, and growth, of the type seen in England in the late 1700s and early 1800s happened many times in human history.
4) It is hard for us today to understand how revolutionary ideas like “experimentation” or “probability” were.
5) The best explanations for “why England? why in the late 1700s? why did growth continue?” do not involve colonialism, slavery, or famous inventions.
econotariat  broad-econ  economics  growth-econ  cjones-like  summary  divergence  industrial-revolution  list  top-n  mokyr-allen-mccloskey  hi-order-bits  aphorism  wealth  wealth-of-nations  malthus  revolution  innovation  the-trenches  science  europe  the-great-west-whale  britain  conceptual-vocab  history  early-modern  technology  long-short-run  econ-metrics  data  time-series  conquest-empire  india  asia  scale  attaq  enlightenment-renaissance-restoration-reformation  roots  cycles  flux-stasis  whiggish-hegelian 
october 2017 by nhaliday
Benedict Evans on Twitter: ""University can save you from the autodidact tendency to overrate himself. Democracy depends on people who know they don’t know everything.""
“The autodidact’s risk is that they think they know all of medieval history but have never heard of Charlemagne” - Umberto Eco

Facts are the least part of education. The structure and priorities they fit into matters far more, and learning how to learn far more again
techtariat  sv  twitter  social  discussion  rhetoric  info-foraging  learning  education  higher-ed  academia  expert  lens  aphorism  quotes  hi-order-bits  big-picture  synthesis  expert-experience 
october 2017 by nhaliday
New Theory Cracks Open the Black Box of Deep Learning | Quanta Magazine
A new idea called the “information bottleneck” is helping to explain the puzzling success of today’s artificial-intelligence algorithms — and might also explain how human brains learn.

sounds like he's just talking about autoencoders?
news  org:mag  org:sci  popsci  announcement  research  deep-learning  machine-learning  acm  information-theory  bits  neuro  model-class  big-surf  frontier  nibble  hmm  signal-noise  deepgoog  expert  ideas  wild-ideas  summary  talks  video  israel  roots  physics  interdisciplinary  ai  intelligence  shannon  giants  arrows  preimage  lifts-projections  composition-decomposition  characterization  markov  gradient-descent  papers  liner-notes  experiment  hi-order-bits  generalization  expert-experience  explanans  org:inst  speedometer 
september 2017 by nhaliday
All models are wrong - Wikipedia
Box repeated the aphorism in a paper that was published in the proceedings of a 1978 statistics workshop.[2] The paper contains a section entitled "All models are wrong but some are useful". The section is copied below.

Now it would be very remarkable if any system existing in the real world could be exactly represented by any simple model. However, cunningly chosen parsimonious models often do provide remarkably useful approximations. For example, the law PV = RT relating pressure P, volume V and temperature T of an "ideal" gas via a constant R is not exactly true for any real gas, but it frequently provides a useful approximation and furthermore its structure is informative since it springs from a physical view of the behavior of gas molecules.

For such a model there is no need to ask the question "Is the model true?". If "truth" is to be the "whole truth" the answer must be "No". The only question of interest is "Is the model illuminating and useful?".
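Box's PV = RT example is easy to make concrete; a sketch comparing the ideal-gas prediction with the van der Waals correction for one mole of CO2 (the a, b constants are standard tabulated values, quoted approximately):

```python
# Box's example: PV = RT is false for any real gas but often a fine
# approximation. Compare it with the van der Waals correction for 1 mol of
# CO2 (a, b are standard tabulated constants; values here are approximate).
R = 8.314        # gas constant, J/(mol K)
T = 300.0        # temperature, K
V = 1.0e-3       # molar volume, m^3 -- a fairly dense gas, to show a gap
a, b = 0.364, 4.27e-5   # van der Waals constants for CO2, SI units

p_ideal = R * T / V
p_vdw = R * T / (V - b) - a / V**2

print(f"ideal:         {p_ideal:.3e} Pa")
print(f"van der Waals: {p_vdw:.3e} Pa")   # differs by ~10%: wrong, but useful
```

At low densities the gap shrinks toward zero, which is exactly Box's point: the model is never true, yet its structure is informative and its answers are often good enough.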
thinking  metabuch  metameta  map-territory  models  accuracy  wire-guided  truth  philosophy  stats  data-science  methodology  lens  wiki  reference  complex-systems  occam  parsimony  science  nibble  hi-order-bits  info-dynamics  the-trenches  meta:science  physics  fluid  thermo  stat-mech  applicability-prereqs  theory-practice 
august 2017 by nhaliday
Introduction to Scaling Laws

Galileo’s Discovery of Scaling Laws: https://www.mtholyoke.edu/~mpeterso/classes/galileo/scaling8.pdf
Days 1 and 2 of Two New Sciences

An example of such an insight is “the surface of a small solid is comparatively greater than that of a large one” because the surface goes like the square of a linear dimension, but the volume goes like the cube. Thus as one scales down macroscopic objects, forces on their surfaces like viscous drag become relatively more important, and bulk forces like weight become relatively less important. Galileo uses this idea on the First Day in the context of resistance in free fall, as an explanation for why similar objects of different size do not fall exactly together, but the smaller one lags behind.
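The square-cube argument in the excerpt, in a few lines (a cube is chosen just for concreteness):

```python
# Square-cube law: scale the linear size L and surface grows like L^2 while
# volume grows like L^3, so surface/volume blows up as things shrink.
ratios = []
for L in [1.0, 0.1, 0.01]:
    surface = 6 * L**2    # cube surface area
    volume = L**3         # cube volume
    ratios.append(surface / volume)
    print(f"L={L}: surface/volume = {surface / volume:.0f}")
# Drag (a surface force) thus matters ever more, and weight (a bulk force)
# ever less, for smaller objects -- Galileo's lagging small bodies.
```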
nibble  org:junk  exposition  lecture-notes  physics  mechanics  street-fighting  problem-solving  scale  magnitude  estimate  fermi  mental-math  calculation  nitty-gritty  multi  scitariat  org:bleg  lens  tutorial  guide  ground-up  tricki  skeleton  list  cheatsheet  identity  levers  hi-order-bits  yoga  metabuch  pdf  article  essay  history  early-modern  europe  the-great-west-whale  science  the-trenches  discovery  fluid  architecture  oceans  giants  tidbits 
august 2017 by nhaliday
A sense of where you are | West Hunter
Nobody at the Times noticed it at first. I don’t know that they ever did notice it by themselves; likely some reader brought it to their attention. But this happens all the time, because very few people have a picture of the world in their head that includes any numbers. Mostly they don’t even have a rough idea of relative size.

In much the same way, back in the 1980s, lots of writers were talking about 90,000 women a year dying of anorexia nervosa, another impossible number. Then there was the great scare about 1,000,000 kids being kidnapped in the US each year – also impossible and wrong. Practically all the talking classes bought into it.

You might think that the people at the top are different – but with a few exceptions, they’re just as clueless.
west-hunter  scitariat  commentary  discussion  reflection  bounded-cognition  realness  nitty-gritty  calculation  fermi  quantitative-qualitative  stories  street-fighting  mental-math  being-right  info-dynamics  knowledge  hi-order-bits  scale  dysgenics  drugs  death  coming-apart  opioids  elite  ability-competence  rant  decision-making 
may 2017 by nhaliday
- includes physics, cs, etc.
- CS is _a lot_ smaller, or at least has much lower citation counts
- size = number citations, placement = citation network structure
papers  publishing  science  meta:science  data  visualization  network-structure  big-picture  dynamic  exploratory  🎓  physics  cs  math  hi-order-bits  survey  visual-understanding  preprint  aggregator  database  search  maps  zooming  metameta  scholar-pack  🔬  info-dynamics  scale  let-me-see  chart 
february 2017 by nhaliday
A brief philosophical discussion:
Measure theory, as much as any branch of mathematics, is an area where it is important to be acquainted with the basic notions and statements, but not desperately important to be acquainted with the detailed proofs, which are often rather unilluminating. One should always have in mind a place where one could go and look if one ever did need to understand a proof: for me, that place is Rudin’s Real and Complex Analysis (Rudin’s “red book”).
gowers  pdf  math  math.CA  math.FA  philosophy  measure  exposition  synthesis  big-picture  hi-order-bits  ergodic  ground-up  summary  roadmap  mathtariat  proofs  nibble  unit  integral  zooming  p:whenever 
february 2017 by nhaliday
What is the relationship between information theory and Coding theory? - Quora
- finite vs. asymptotic
- combinatorial vs. probabilistic (lotsa overlap there)
- worst-case (Hamming) vs. distributional (Shannon)

Information and coding theory most often appear together in the subject of error correction over noisy channels. Historically, they were born at almost exactly the same time - both Richard Hamming and Claude Shannon were working at Bell Labs when this happened. Information theory tends to heavily use tools from probability theory (together with an "asymptotic" way of thinking about the world), while traditional "algebraic" coding theory tends to employ mathematics that are much more finite sequence length/combinatorial in nature, including linear algebra over Galois Fields. The emergence in the late 90s and first decade of 2000 of codes over graphs blurred this distinction though, as code classes such as low density parity check codes employ both asymptotic analysis and random code selection techniques which have counterparts in information theory.

They do not subsume each other. Information theory touches on many other aspects that coding theory does not, and vice-versa. Information theory also touches on compression (lossy & lossless), statistics (e.g. large deviations), modeling (e.g. Minimum Description Length). Coding theory pays a lot of attention to sphere packing and coverings for finite length sequences - information theory addresses these problems (channel & lossy source coding) only in an asymptotic/approximate sense.
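The "worst-case (Hamming)" side of the contrast can be shown in miniature with the Hamming(7,4) code, which corrects any single bit flip deterministically. A sketch using the standard systematic generator and parity-check matrices:

```python
# Miniature of the "worst-case (Hamming)" worldview: the Hamming(7,4) code
# corrects any single bit flip deterministically, no probability involved.
import numpy as np

# Systematic generator and parity-check matrices: G = [I_4 | A], H = [A^T | I_3].
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
assert not (G @ H.T % 2).any()        # codewords lie in the kernel of H

data = np.array([1, 0, 1, 1])
codeword = data @ G % 2
received = codeword.copy()
received[2] ^= 1                      # corrupt one bit

syndrome = H @ received % 2
# The syndrome equals the column of H at the error position; all columns
# of H are distinct and nonzero, so the position is uniquely identified.
err = next(i for i in range(7) if np.array_equal(H[:, i], syndrome))
received[err] ^= 1
print("corrected error at position", err)
```

Shannon's side of the story would instead pick codes at random and argue about the error probability as the block length goes to infinity, which is exactly the finite-vs-asymptotic split the answer describes.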
q-n-a  qra  math  acm  tcs  information-theory  coding-theory  big-picture  comparison  confusion  explanation  linear-algebra  polynomials  limits  finiteness  math.CO  hi-order-bits  synthesis  probability  bits  hamming  shannon  intricacy  nibble  s:null  signal-noise 
february 2017 by nhaliday
general topology - What should be the intuition when working with compactness? - Mathematics Stack Exchange

The situation with compactness is sort of like the above. It turns out that finiteness, which you think of as one concept (in the same way that you think of "Foo" as one concept above), is really two concepts: discreteness and compactness. You've never seen these concepts separated before, though. When people say that compactness is like finiteness, they mean that compactness captures part of what it means to be finite in the same way that shortness captures part of what it means to be Foo.


As many have said, compactness is sort of a topological generalization of finiteness. And this is true in a deep sense, because topology deals with open sets, and this means that we often "care about how something behaves on an open set", and for compact spaces this means that there are only finitely many possible behaviors.


Compactness does for continuous functions what finiteness does for functions in general.

If a set A is finite then every function f:A→R has a max and a min, and every function f:A→R^n is bounded. If A is compact, then every continuous function from A to R has a max and a min and every continuous function from A to R^n is bounded.

If A is finite then every sequence of members of A has a subsequence that is eventually constant, and "eventually constant" is the only kind of convergence you can talk about without talking about a topology on the set. If A is compact, then every sequence of members of A has a convergent subsequence.
q-n-a  overflow  math  topology  math.GN  concept  finiteness  atoms  intuition  oly  mathtariat  multi  discrete  gowers  motivation  synthesis  hi-order-bits  soft-question  limits  things  nibble  definition  convergence  abstraction 
january 2017 by nhaliday
Shtetl-Optimized » Blog Archive » Logicians on safari
So what are they then? Maybe it’s helpful to think of them as “quantitative epistemology”: discoveries about the capacities of finite beings like ourselves to learn mathematical truths. On this view, the theoretical computer scientist is basically a mathematical logician on a safari to the physical world: someone who tries to understand the universe by asking what sorts of mathematical questions can and can’t be answered within it. Not whether the universe is a computer, but what kind of computer it is! Naturally, this approach to understanding the world tends to appeal most to people for whom math (and especially discrete math) is reasonably clear, whereas physics is extremely mysterious.

the sequel: http://www.scottaaronson.com/blog/?p=153
tcstariat  aaronson  tcs  computation  complexity  aphorism  examples  list  reflection  philosophy  multi  summary  synthesis  hi-order-bits  interdisciplinary  lens  big-picture  survey  nibble  org:bleg  applications  big-surf  s:*  p:whenever  ideas 
january 2017 by nhaliday
ho.history overview - Proofs that require fundamentally new ways of thinking - MathOverflow
my favorite:
Although this has already been said elsewhere on MathOverflow, I think it's worth repeating that Gromov is someone who has arguably introduced more radical thoughts into mathematics than anyone else. Examples involving groups with polynomial growth and holomorphic curves have already been cited in other answers to this question. I have two other obvious ones but there are many more.

I don't remember where I first learned about convergence of Riemannian manifolds, but I had to laugh because there's no way I would have ever conceived of such a notion. To be fair, all of the groundwork for this was laid out in Cheeger's thesis, but it was Gromov who reformulated everything as a convergence theorem and recognized its power.

Another time Gromov made me laugh was when I was reading what little I could understand of his book Partial Differential Relations. This book is probably full of radical ideas that I don't understand. The one I did was his approach to solving the linearized isometric embedding equation. His radical, absurd, but elementary idea was that if the system is sufficiently underdetermined, then the linear partial differential operator could be inverted by another linear partial differential operator. Both the statement and proof are for me the funniest in mathematics. Most of us view solving PDE's as something that requires hard work, involving analysis and estimates, and Gromov manages to do it using only elementary linear algebra. This then allows him to establish the existence of isometric embedding of Riemannian manifolds in a wide variety of settings.
q-n-a  overflow  soft-question  big-list  math  meta:math  history  insight  synthesis  gowers  mathtariat  hi-order-bits  frontier  proofs  magnitude  giants  differential  geometry  limits  flexibility  nibble  degrees-of-freedom  big-picture  novelty  zooming  big-surf  wild-ideas  metameta  courage  convergence  ideas  innovation  the-trenches  discovery  creative 
january 2017 by nhaliday
Thinking Outside One’s Paradigm | Academically Interesting
I think that as a scientist (or really, even as a citizen) it is important to be able to see outside one’s own paradigm. I currently think that I do a good job of this, but it seems to me that there’s a big danger of becoming more entrenched as I get older. Based on the above experiences, I plan to use the following test: When someone asks me a question about my field, how often have I not thought about it before? How tempted am I to say, “That question isn’t interesting”? If these start to become more common, then I’ll know something has gone wrong.
ratty  clever-rats  academia  science  interdisciplinary  lens  frontier  thinking  rationality  meta:science  curiosity  insight  scholar  innovation  reflection  acmtariat  water  biases  heterodox  🤖  🎓  aging  meta:math  low-hanging  big-picture  hi-order-bits  flexibility  org:bleg  nibble  the-trenches  wild-ideas  metameta  courage  s:**  discovery  context  embedded-cognition  endo-exo  near-far  🔬  info-dynamics  allodium  ideas  questions  within-without 
january 2017 by nhaliday
"Surely You're Joking, Mr. Feynman!": Adventures of a Curious Character ... - Richard P. Feynman - Google Books
Actually, there was a certain amount of genuine quality to my guesses. I had a scheme, which I still use today when somebody is explaining something that I’m trying to understand: I keep making up examples. For instance, the mathematicians would come in with a terrific theorem, and they’re all excited. As they’re telling me the conditions of the theorem, I construct something which fits all the conditions. You know, you have a set (one ball)—disjoint (two balls). Then the balls turn colors, grow hairs, or whatever, in my head as they put more conditions on. Finally they state the theorem, which is some dumb thing about the ball which isn’t true for my hairy green ball thing, so I say, “False!"
physics  math  feynman  thinking  empirical  examples  lens  intuition  operational  stories  metabuch  visual-understanding  thurston  hi-order-bits  geometry  topology  cartoons  giants  👳  nibble  the-trenches  metameta  meta:math  s:**  quotes  gbooks 
january 2017 by nhaliday
soft question - Thinking and Explaining - MathOverflow
- good question from Bill Thurston
- great answers by Terry Tao, fedja, Minhyong Kim, gowers, etc.

Terry Tao:
- symmetry as blurring/vibrating/wobbling, scale invariance
- anthropomorphization, adversarial perspective for estimates/inequalities/quantifiers, spending/economy

fedja walks through his thought process from another answer

Minhyong Kim: anthropology of mathematical philosophizing

Per Vognsen: normality as isotropy
comment: conjugate subgroup gHg^-1 ~ "H but somewhere else in G"

gowers: hidden things in basic mathematics/arithmetic
comment by Ryan Budney: x sin(x) via x -> (x, sin(x)), (x, y) -> xy
I kinda get what he's talking about but needed to use Mathematica to get the initial visualization down.
To remind myself later:
- xy can be easily visualized by juxtaposing the two parabolae x^2 and -x^2 diagonally
- x sin(x) can be visualized along that surface by moving your finger along the line (x, 0) but adding some oscillations in y direction according to sin(x)
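Budney's construction in the note is just function composition, which a couple of lines can confirm numerically (a trivial sketch):

```python
# Budney's trick, checked numerically: the graph of x*sin(x) is the curve
# traced on the surface z = x*y by walking along the path y = sin(x).
import math

def surface(x, y):
    return x * y

for x in [0.0, 1.0, 2.5, -3.0]:
    assert math.isclose(surface(x, math.sin(x)), x * math.sin(x))
print("x*sin(x) sits on the surface z = x*y along the path (x, sin(x))")
```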
q-n-a  soft-question  big-list  intuition  communication  teaching  math  thinking  writing  thurston  lens  overflow  synthesis  hi-order-bits  👳  insight  meta:math  clarity  nibble  giants  cartoons  gowers  mathtariat  better-explained  stories  the-trenches  problem-solving  homogeneity  symmetry  fedja  examples  philosophy  big-picture  vague  isotropy  reflection  spatial  ground-up  visual-understanding  polynomials  dimensionality  math.GR  worrydream  scholar  🎓  neurons  metabuch  yoga  retrofit  mental-math  metameta  wisdom  wordlessness  oscillation  operational  adversarial  quantifiers-sums  exposition  explanation  tricki  concrete  s:***  manifolds  invariance  dynamical  info-dynamics  cool  direction 
january 2017 by nhaliday
soft question - Why does Fourier analysis of Boolean functions "work"? - Theoretical Computer Science Stack Exchange
Here is my point of view, which I learned from Guy Kindler, though someone more experienced can probably give a better answer: Consider the linear space of functions f: {0,1}^n -> R and consider a linear operator of the form σ_w (for w in {0,1}^n), that maps a function f(x) as above to the function f(x+w). In many of the questions of TCS, there is an underlying need to analyze the effects that such operators have on certain functions.

Now, the point is that the Fourier basis is the basis that diagonalizes all those operators at the same time, which makes the analysis of those operators much simpler. More generally, the Fourier basis diagonalizes the convolution operator, which also underlies many of those questions. Thus, Fourier analysis is likely to be effective whenever one needs to analyze those operators.
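The diagonalization claim is easy to verify exhaustively for small n; a sketch for n = 3, where chi_S(x) = (-1)^{<S,x>} are the Fourier characters:

```python
# Exhaustive check for n = 3 that every Fourier character chi_S is an
# eigenfunction of every shift operator sigma_w, with eigenvalue chi_S(w).
from itertools import product

n = 3
cube = list(product([0, 1], repeat=n))

def chi(S, x):
    # Fourier character: (-1)^{<S, x>}
    return (-1) ** sum(s * xi for s, xi in zip(S, x))

def shift(f, w):
    # sigma_w: maps f(x) to f(x + w), addition over GF(2), i.e. XOR
    return lambda x: f(tuple(xi ^ wi for xi, wi in zip(x, w)))

ok = all(
    shift(lambda x, S=S: chi(S, x), w)(x) == chi(S, w) * chi(S, x)
    for S in cube for w in cube for x in cube
)
print("all characters diagonalize all shifts:", ok)
```

Since chi_S(x + w) = chi_S(x) chi_S(w) over GF(2), each character is a simultaneous eigenfunction of all 2^n shifts, which is the property the answer is pointing at.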
q-n-a  math  tcs  synthesis  boolean-analysis  fourier  👳  tidbits  motivation  intuition  linear-algebra  overflow  hi-order-bits  insight  curiosity  ground-up  arrows  nibble  s:* 
december 2016 by nhaliday
gt.geometric topology - Intuitive crutches for higher dimensional thinking - MathOverflow
Terry Tao:
I can't help you much with high-dimensional topology - it's not my field, and I've not picked up the various tricks topologists use to get a grip on the subject - but when dealing with the geometry of high-dimensional (or infinite-dimensional) vector spaces such as R^n, there are plenty of ways to conceptualise these spaces that do not require visualising more than three dimensions directly.

For instance, one can view a high-dimensional vector space as a state space for a system with many degrees of freedom. A megapixel image, for instance, is a point in a million-dimensional vector space; by varying the image, one can explore the space, and various subsets of this space correspond to various classes of images.

One can similarly interpret sound waves, a box of gases, an ecosystem, a voting population, a stream of digital data, trials of random variables, the results of a statistical survey, a probabilistic strategy in a two-player game, and many other concrete objects as states in a high-dimensional vector space, and various basic concepts such as convexity, distance, linearity, change of variables, orthogonality, or inner product can have very natural meanings in some of these models (though not in all).

It can take a bit of both theory and practice to merge one's intuition for these things with one's spatial intuition for vectors and vector spaces, but it can be done eventually (much as after one has enough exposure to measure theory, one can start merging one's intuition regarding cardinality, mass, length, volume, probability, cost, charge, and any number of other "real-life" measures).

For instance, the fact that most of the mass of a unit ball in high dimensions lurks near the boundary of the ball can be interpreted as a manifestation of the law of large numbers, using the interpretation of a high-dimensional vector space as the state space for a large number of trials of a random variable.
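A minimal numeric illustration of the shell phenomenon (using only the elementary fact that the fraction of a unit n-ball's volume within radius r is r^n — the probabilistic interpretation Tao describes is not needed for this check):

```python
# Fraction of a unit n-ball's volume lying in the thin shell
# 0.9 < |x| <= 1: it tends to 1 as the dimension n grows.
for n in (2, 10, 100):
    inner = 0.9 ** n      # volume fraction inside radius 0.9
    shell = 1.0 - inner   # fraction in the outer shell
    print(n, round(shell, 4))
```

Already at n = 100 essentially all of the mass sits in the outer 10% shell.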

More generally, many facts about low-dimensional projections or slices of high-dimensional objects can be viewed from a probabilistic, statistical, or signal processing perspective.

Scott Aaronson:
Here are some of the crutches I've relied on. (Admittedly, my crutches are probably much more useful for theoretical computer science, combinatorics, and probability than they are for geometry, topology, or physics. On a related note, I personally have a much easier time thinking about R^n than about, say, R^4 or R^5!)

1. If you're trying to visualize some 4D phenomenon P, first think of a related 3D phenomenon P', and then imagine yourself as a 2D being who's trying to visualize P'. The advantage is that, unlike with the 4D vs. 3D case, you yourself can easily switch between the 3D and 2D perspectives, and can therefore get a sense of exactly what information is being lost when you drop a dimension. (You could call this the "Flatland trick," after the most famous literary work to rely on it.)
2. As someone else mentioned, discretize! Instead of thinking about R^n, think about the Boolean hypercube {0,1}^n, which is finite and usually easier to get intuition about. (When working on problems, I often find myself drawing {0,1}^4 on a sheet of paper by drawing two copies of {0,1}^3 and then connecting the corresponding vertices.)
3. Instead of thinking about a subset S⊆R^n, think about its characteristic function f:R^n→{0,1}. I don't know why that trivial perspective switch makes such a big difference, but it does ... maybe because it shifts your attention to the process of computing f, and makes you forget about the hopeless task of visualizing S!
4. One of the central facts about R^n is that, while it has "room" for only n orthogonal vectors, it has room for exp(n) almost-orthogonal vectors. Internalize that one fact, and so many other properties of R^n (for example, that the n-sphere resembles a "ball with spikes sticking out," as someone mentioned before) will suddenly seem non-mysterious. In turn, one way to internalize the fact that R^n has so many almost-orthogonal vectors is to internalize Shannon's theorem that there exist good error-correcting codes.
5. To get a feel for some high-dimensional object, ask questions about the behavior of a process that takes place on that object. For example: if I drop a ball here, which local minimum will it settle into? How long does this random walk on {0,1}^n take to mix?
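Point 4 above can be checked empirically — a sketch assuming random Gaussian directions as the source of almost-orthogonal vectors (the dimension, count, and seed are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)
n, m = 1000, 200  # ambient dimension, number of random unit vectors

V = rng.standard_normal((m, n))
V /= np.linalg.norm(V, axis=1, keepdims=True)  # normalize each row

# Gram matrix: diagonal entries are 1, off-diagonal entries are the
# pairwise inner products, which concentrate near 0 at scale ~1/sqrt(n).
G = V @ V.T
off_diag = G[~np.eye(m, dtype=bool)]
print(np.abs(off_diag).max())  # typically on the order of 0.1 for n = 1000
```

Here m = 200 vectors in R^1000 are pairwise nearly orthogonal; the same construction works for exponentially many vectors, which is the content of the fact.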

Gil Kalai:
This is a slightly different point, but Vitali Milman, who works in high-dimensional convexity, likes to draw high-dimensional convex bodies in a non-convex way. This is to convey the point that if you take the convex hull of a few points on the unit sphere of R^n, then for large n very little of the measure of the convex body is anywhere near the corners, so in a certain sense the body is a bit like a small sphere with long thin "spikes".
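One concrete stand-in for Milman's picture (my choice of example, not Kalai's): the convex hull of the 2n points ±e_i on the unit sphere of R^n is the cross-polytope, with volume 2^n / n!, while the unit ball has volume π^(n/2) / Γ(n/2 + 1). The hull captures a vanishing fraction of the ball's volume, so "most" of the sphere's bulk is nowhere near the corners:

```python
import math

def cross_polytope_volume(n):
    # Volume of the convex hull of the 2n points ±e_i in R^n.
    return 2.0 ** n / math.factorial(n)

def ball_volume(n):
    # Volume of the unit ball in R^n.
    return math.pi ** (n / 2) / math.gamma(n / 2 + 1)

for n in (2, 5, 20):
    print(n, cross_polytope_volume(n) / ball_volume(n))
```

The ratio is 2/π ≈ 0.64 in the plane but drops super-exponentially with n, matching the "small sphere with long thin spikes" picture.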
q-n-a  intuition  math  visual-understanding  list  discussion  thurston  tidbits  aaronson  tcs  geometry  problem-solving  yoga  👳  big-list  metabuch  tcstariat  gowers  mathtariat  acm  overflow  soft-question  levers  dimensionality  hi-order-bits  insight  synthesis  thinking  models  cartoons  coding-theory  information-theory  probability  concentration-of-measure  magnitude  linear-algebra  boolean-analysis  analogy  arrows  lifts-projections  measure  markov  sampling  shannon  conceptual-vocab  nibble  degrees-of-freedom  worrydream  neurons  retrofit  oscillation  paradox  novelty  tricki  concrete  high-dimension  s:***  manifolds  direction  curvature  convexity-curvature 
december 2016 by nhaliday
Information Processing: Advice to a new graduate student
first 3 points (tough/connected advisor, big picture, benchmarking) are key:

1. There is often a tradeoff between the advisor from whom you will learn the most vs the one who will help your career the most. Letters of recommendation are the most important factor in obtaining a postdoc/faculty job, and some professors are 10x as influential as others. However, the influential prof might be a jerk and not good at training students. The kind mentor with deep knowledge or the approachable junior faculty member might not be a mover and shaker.

2. Most grad students fail to grasp the big picture in their field and get too caught up in their narrowly defined dissertation project.

3. Benchmark yourself against senior scholars at a similar stage in their (earlier) careers. What should you have accomplished / mastered as a grad student or postdoc in order to keep pace with your benchmark?

4. Take the opportunity to interact with visitors and speakers. Don't assume that because you are a student they'll be uninterested in intellectual exchange with you. Even established scholars are pleased to be asked interesting questions by intelligent grad students. If you get to the stage where the local professors think you are really good, i.e., they sort of think of you as a peer intellect or colleague, you might get invited along to dinner with the speaker!

5. Understand the trends and bandwagons in your field. Most people cannot survive on the job market without chasing trends at least a little bit. But always save some brainpower for thinking about the big questions that most interest you.

6. Work your ass off. If you outwork the other guy by 10%, the compound effect over time could accumulate into a qualitative difference in capability or depth of knowledge.

7. Don't be afraid to seek out professors with questions. Occasionally you will get a gem of an explanation. Most things, even the most conceptually challenging, can be explained in a very clear and concise way after enough thought. A real expert in the field will have accumulated many such explanations, which are priceless.
grad-school  phd  advice  career  hi-order-bits  top-n  hsu  🎓  scholar  strategy  tactics  pre-2013  scitariat  long-term  success  tradeoffs  big-picture  scholar-pack  optimate  discipline  🦉  gtd  prioritizing  transitions  s:***  benchmarks  track-record  s-factor  progression 
november 2016 by nhaliday