nhaliday + coding-theory (37 bookmarks)

Theory of Self-Reproducing Automata - John von Neumann
Fourth Lecture: THE ROLE OF HIGH AND OF EXTREMELY HIGH COMPLICATION

Comparisons between computing machines and the nervous systems. Estimates of size for computing machines, present and near future.

Estimates of size for the human central nervous system. Excursus about the “mixed” character of living organisms. Analog and digital elements. Observations about the “mixed” character of all componentry, artificial as well as natural. Interpretation of the position to be taken with respect to these.

Evaluation of the discrepancy in size between artificial and natural automata. Interpretation of this discrepancy in terms of physical factors. Nature of the materials used.

The probability of the presence of other intellectual factors. The role of complication and the theoretical penetration that it requires.

Questions of reliability and errors reconsidered. Probability of individual errors and length of procedure. Typical lengths of procedure for computing machines and for living organisms--that is, for artificial and for natural automata. Upper limits on acceptable probability of error in individual operations. Compensation by checking and self-correcting features.

Differences of principle in the way in which errors are dealt with in artificial and in natural automata. The “single error” principle in artificial automata. Crudeness of our approach in this case, due to the lack of adequate theory. More sophisticated treatment of this problem in natural automata: The role of the autonomy of parts. Connections between this autonomy and evolution.

- ~10^10 neurons in the brain vs. ~10^4 vacuum tubes in the largest computer at the time
- machines are much faster per operation: ~5 ms from neuron potential to neuron potential vs. ~10^-3 ms (1 μs) for a vacuum tube (rough sketch below)
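
Rough Fermi sketch of those two figures side by side (the per-component counts and switching times are the ones in the notes above; the aggregate-throughput comparison is my own illustrative arithmetic):

```python
# Fermi estimate: component count vs. switching speed, using the figures above.
neurons = 1e10           # neurons in the human brain (von Neumann's estimate)
tubes = 1e4              # vacuum tubes in the largest machine of the era

neuron_cycle_s = 5e-3    # ~5 ms from neuron potential to neuron potential
tube_cycle_s = 1e-6      # ~10^-3 ms = 1 microsecond per vacuum-tube switch

print(f"size ratio (brain/machine):  {neurons / tubes:.0e}")               # ~1e6
print(f"speed ratio (machine/brain): {neuron_cycle_s / tube_cycle_s:.0e}")  # ~5e3

# Crude aggregate throughput if every component switched in parallel:
print(f"brain switchings/s:   {neurons / neuron_cycle_s:.1e}")
print(f"machine switchings/s: {tubes / tube_cycle_s:.1e}")
```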

https://en.wikipedia.org/wiki/John_von_Neumann#Computing
pdf  article  papers  essay  nibble  math  cs  computation  bio  neuro  neuro-nitgrit  scale  magnitude  comparison  acm  von-neumann  giants  thermo  phys-energy  speed  performance  time  density  frequency  hardware  ems  efficiency  dirty-hands  street-fighting  fermi  estimate  retention  physics  interdisciplinary  multi  wiki  links  people  🔬  atoms  automata  duplication  iteration-recursion  turing  complexity  measure  nature  technology  complex-systems  bits  information-theory  circuits  robust  structure  composition-decomposition  evolution  mutation  axioms  analogy  thinking  input-output  hi-order-bits  coding-theory  flexibility  rigidity 
april 2018 by nhaliday
Is the human brain analog or digital? - Quora
The brain is neither analog nor digital, but works using a signal processing paradigm that has some properties in common with both.
 
Unlike a digital computer, the brain does not use binary logic or binary addressable memory, and it does not perform binary arithmetic. Information in the brain is represented in terms of statistical approximations and estimations rather than exact values. The brain is also non-deterministic and cannot replay instruction sequences with error-free precision. So in all these ways, the brain is definitely not "digital".
 
At the same time, the signals sent around the brain are "either-or" states that are similar to binary. A neuron fires or it does not. These all-or-nothing pulses are the basic language of the brain. So in this sense, the brain is computing using something like binary signals. Instead of 1s and 0s, or "on" and "off", the brain uses "spike" or "no spike" (referring to the firing of a neuron).
q-n-a  qra  expert-experience  neuro  neuro-nitgrit  analogy  deep-learning  nature  discrete  smoothness  IEEE  bits  coding-theory  communication  trivia  bio  volo-avolo  causation  random  order-disorder  ems  models  methodology  abstraction  nitty-gritty  computation  physics  electromag  scale  coarse-fine 
april 2018 by nhaliday
What Does a “Normal” Human Genome Look Like? | Science
So, what have our first glimpses of variation in the genomes of generally healthy people taught us? First, balancing selection, the evolutionary process that favors genetic diversification rather than the fixation of a single “best” variant, appears to play a minor role outside the immune system. Local adaptation, which accounts for variation in traits such as pigmentation, dietary specialization, and susceptibility to particular pathogens is also a second-tier player. What is on the top tier? Increasingly, the answer appears to be mutations that are “deleterious” by biochemical or standard evolutionary criteria. These mutations, as has long been appreciated, overwhelmingly make up the most abundant form of nonneutral variation in all genomes. A model for human genetic individuality is emerging in which there actually is a “wild-type” human genome—one in which most genes exist in an evolutionarily optimized form. There just are no “wild-type” humans: We each fall short of this Platonic ideal in our own distinctive ways.
article  essay  org:nat  🌞  bio  biodet  genetics  genomics  mutation  genetic-load  QTL  evolution  sapiens  survey  summary  coding-theory  enhancement  signal-noise  egalitarianism-hierarchy  selection  tradeoffs  immune  recent-selection  perturbation  nibble  ideas  forms-instances 
november 2017 by nhaliday
My Old Boss | West Hunter
Back in those days, there was interest in finding better ways to communicate with a submerged submarine.  One method under consideration used an orbiting laser to send pulses of light over the ocean, using a special wavelength, for which there was a very good detector.  Since even the people running the laser might not know the boomer’s exact location, while weather and such might also interfere,  my old boss was trying to figure out methods of reliably transmitting messages when some pulses were randomly lost – which is of course a well-developed subject,  error-correcting codes. But he didn’t know that.  Hadn’t even heard of it.

Around this time, my old boss was flying from LA to Washington, and started talking with his seatmate about this  submarine communication problem.  His seatmate – Irving S. Reed – politely said that he had done a little work on some similar problems.  During this conversation, my informant, a fellow minion sitting behind my old boss, was doggedly choking back hysterical laughter, not wanting to interrupt this very special conversation.
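
The setup in the story (pulses randomly lost, with the receiver knowing which slots arrived empty) is an erasure channel. A minimal sketch of the simplest erasure-correcting trick, a single XOR parity block; this is a toy stand-in for the real machinery, not anything Reed actually proposed:

```python
# Minimal erasure-coding sketch: one XOR parity block lets the receiver
# reconstruct any single lost block (an erasure = a loss at a known position).
from functools import reduce

def encode(blocks):
    """Append one parity block equal to the XOR of all data blocks."""
    parity = reduce(lambda a, b: a ^ b, blocks)
    return blocks + [parity]

def recover(received):
    """received[i] is None if block i was lost; at most one loss is repairable."""
    lost = [i for i, b in enumerate(received) if b is None]
    if not lost:
        return received[:-1]
    i = lost[0]
    received[i] = reduce(lambda a, b: a ^ b, (b for b in received if b is not None))
    return received[:-1]

sent = encode([0b1011, 0b0110, 0b1110])
sent[1] = None                      # one pulse lost in transit
print(recover(sent))                # -> [11, 6, 14], original data restored
```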
west-hunter  scitariat  stories  reflection  working-stiff  engineering  dirty-hands  electromag  communication  coding-theory  giants  bits  management  signal-noise 
september 2017 by nhaliday
10 million DTC dense marker genotypes by end of 2017? – Gene Expression
Ultimately I do wonder if I was a bit too optimistic that 50% of the US population will be sequenced at 30x by 2025. But the dynamic is quite likely to change rapidly because of a technological shift as the sector goes through a productivity uptick. We’re talking about exponential growth, which humans have weak intuition about….
https://gnxp.nofe.me/2017/06/27/genome-sequencing-for-the-people-is-near/
https://gnxp.nofe.me/2017/07/11/23andme-ancestry-only-is-49-99-for-prime-day/
gnxp  scitariat  commentary  biotech  scaling-up  genetics  genomics  scale  bioinformatics  multi  toys  measurement  duplication  signal-noise  coding-theory 
june 2017 by nhaliday
6.896: Essential Coding Theory
- probabilistic method and Chernoff bound for Shannon coding
- probabilistic method for asymptotically good codes in the Hamming metric (Gilbert–Varshamov bound; greedy construction sketched below)
- sparsity used for LDPC codes
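
Minimal sketch of the greedy construction behind the Gilbert–Varshamov bound mentioned above: keep any word that sits at Hamming distance at least d from every codeword chosen so far (exhaustive over {0,1}^n, so only feasible for toy n):

```python
# Greedy Gilbert construction: accept a word iff it is at Hamming distance >= d
# from every codeword chosen so far.  The Gilbert-Varshamov bound says this
# yields at least 2^n / V(n, d-1) codewords, V = volume of a Hamming ball.
from itertools import product
from math import comb

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def greedy_code(n, d):
    code = []
    for w in product((0, 1), repeat=n):
        if all(hamming(w, c) >= d for c in code):
            code.append(w)
    return code

n, d = 7, 3
code = greedy_code(n, d)
gv_lower = 2**n // sum(comb(n, i) for i in range(d))   # 2^n / V(n, d-1)
print(len(code), ">=", gv_lower)                       # 16 >= 4 for n=7, d=3
```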
mit  course  yoga  tcs  complexity  coding-theory  math.AG  fields  polynomials  pigeonhole-markov  linear-algebra  probabilistic-method  lecture-notes  bits  sparsity  concentration-of-measure  linear-programming  linearity  expanders  hamming  pseudorandomness  crypto  rigorous-crypto  communication-complexity  no-go  madhu-sudan  shannon  unit  p:**  quixotic 
february 2017 by nhaliday
What is the relationship between information theory and Coding theory? - Quora
basically:
- finite vs. asymptotic
- combinatorial vs. probabilistic (lots of overlap there)
- worst-case (Hamming) vs. distributional (Shannon)

Information and coding theory most often appear together in the subject of error correction over noisy channels. Historically, they were born at almost exactly the same time - both Richard Hamming and Claude Shannon were working at Bell Labs when this happened. Information theory tends to heavily use tools from probability theory (together with an "asymptotic" way of thinking about the world), while traditional "algebraic" coding theory tends to employ mathematics that are much more finite sequence length/combinatorial in nature, including linear algebra over Galois Fields. The emergence in the late 90s and first decade of 2000 of codes over graphs blurred this distinction though, as code classes such as low density parity check codes employ both asymptotic analysis and random code selection techniques which have counterparts in information theory.

They do not subsume each other. Information theory touches on many other aspects that coding theory does not, and vice-versa. Information theory also touches on compression (lossy & lossless), statistics (e.g. large deviations), modeling (e.g. Minimum Description Length). Coding theory pays a lot of attention to sphere packing and coverings for finite length sequences - information theory addresses these problems (channel & lossy source coding) only in an asymptotic/approximate sense.
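
My aside: one way to make the worst-case vs. distributional split concrete is to compare, at the same error level p, the rate the Gilbert–Varshamov bound guarantees when every pattern of pn errors must be corrected against the Shannon capacity of a BSC(p), where errors are random and only vanishing error probability is required:

```python
# Worst-case (Hamming-style) vs. distributional (Shannon-style), same error level p.
from math import log2

def H(x):                      # binary entropy
    return -x * log2(x) - (1 - x) * log2(1 - x)

p = 0.05                       # fraction of bits in error

# Worst-case: correcting EVERY pattern of p*n errors needs relative distance ~2p;
# the Gilbert-Varshamov bound guarantees codes of rate about 1 - H(2p).
worst_case_rate = 1 - H(2 * p)

# Distributional: for random errors (BSC with flip prob p), Shannon says any rate
# below capacity 1 - H(p) is achievable with vanishing error probability.
shannon_rate = 1 - H(p)

print(f"achievable rate, worst-case errors (GV):   {worst_case_rate:.3f}")   # ~0.531
print(f"achievable rate, random errors (capacity): {shannon_rate:.3f}")      # ~0.714
```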
q-n-a  qra  math  acm  tcs  information-theory  coding-theory  big-picture  comparison  confusion  explanation  linear-algebra  polynomials  limits  finiteness  math.CO  hi-order-bits  synthesis  probability  bits  hamming  shannon  intricacy  nibble  s:null  signal-noise 
february 2017 by nhaliday
Shtetl-Optimized » Blog Archive » Why I Am Not An Integrated Information Theorist (or, The Unconscious Expander)
In my opinion, how to construct a theory that tells us which physical systems are conscious and which aren’t—giving answers that agree with “common sense” whenever the latter renders a verdict—is one of the deepest, most fascinating problems in all of science. Since I don’t know a standard name for the problem, I hereby call it the Pretty-Hard Problem of Consciousness. Unlike with the Hard Hard Problem, I don’t know of any philosophical reason why the Pretty-Hard Problem should be inherently unsolvable; but on the other hand, humans seem nowhere close to solving it (if we had solved it, then we could reduce the abortion, animal rights, and strong AI debates to “gentlemen, let us calculate!”).

Now, I regard IIT as a serious, honorable attempt to grapple with the Pretty-Hard Problem of Consciousness: something concrete enough to move the discussion forward. But I also regard IIT as a failed attempt on the problem. And I wish people would recognize its failure, learn from it, and move on.

In my view, IIT fails to solve the Pretty-Hard Problem because it unavoidably predicts vast amounts of consciousness in physical systems that no sane person would regard as particularly “conscious” at all: indeed, systems that do nothing but apply a low-density parity-check code, or other simple transformations of their input data. Moreover, IIT predicts not merely that these systems are “slightly” conscious (which would be fine), but that they can be unboundedly more conscious than humans are.

To justify that claim, I first need to define Φ. Strikingly, despite the large literature about Φ, I had a hard time finding a clear mathematical definition of it—one that not only listed formulas but fully defined the structures that the formulas were talking about. Complicating matters further, there are several competing definitions of Φ in the literature, including Φ_DM (discrete memoryless), Φ_E (empirical), and Φ_AR (autoregressive), which apply in different contexts (e.g., some take time evolution into account and others don’t). Nevertheless, I think I can define Φ in a way that will make sense to theoretical computer scientists. And crucially, the broad point I want to make about Φ won’t depend much on the details of its formalization anyway.

We consider a discrete system in a state x = (x_1, …, x_n) ∈ S^n, where S is a finite alphabet (the simplest case is S = {0,1}). We imagine that the system evolves via an “updating function” f: S^n → S^n. Then the question that interests us is whether the x_i’s can be partitioned into two sets A and B, of roughly comparable size, such that the updates to the variables in A don’t depend very much on the variables in B and vice versa. If such a partition exists, then we say that the computation of f does not involve “global integration of information,” which on Tononi’s theory is a defining aspect of consciousness.
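
My toy rendering of the partition test in that last paragraph, taking "don't depend very much" to mean "don't depend at all" (the real Φ is quantitative, so this only captures the zero-integration case):

```python
# Toy partition test: does some roughly balanced split (A, B) of the coordinates
# make f "factorize", i.e. the new value of each coordinate in A depends only on
# the old values in A (and likewise for B)?  Real Phi measures *how much*
# cross-dependence there is; here we only test for none at all.
from itertools import combinations, product

def coord_depends_only_on(f, n, i, allowed):
    """True if output coordinate i of f depends only on input coordinates in `allowed`."""
    for x in product((0, 1), repeat=n):
        for j in range(n):
            if j in allowed:
                continue
            y = list(x); y[j] ^= 1              # flip a coordinate outside `allowed`
            if f(x)[i] != f(tuple(y))[i]:
                return False
    return True

def has_balanced_split(f, n):
    for k in range(n // 3, n - n // 3 + 1):     # "roughly comparable size"
        for A in combinations(range(n), k):
            B = set(range(n)) - set(A)
            if all(coord_depends_only_on(f, n, i, set(A)) for i in A) and \
               all(coord_depends_only_on(f, n, i, B) for i in B):
                return True
    return False

# Example: two independent 2-bit counters -> a balanced split exists (no "integration").
def two_counters(x):
    a = (x[0] * 2 + x[1] + 1) % 4
    b = (x[2] * 2 + x[3] + 1) % 4
    return (a // 2, a % 2, b // 2, b % 2)

print(has_balanced_split(two_counters, 4))      # True
```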
aaronson  tcstariat  philosophy  dennett  interdisciplinary  critique  nibble  org:bleg  within-without  the-self  neuro  psychology  cog-psych  metrics  nitty-gritty  composition-decomposition  complex-systems  cybernetics  bits  information-theory  entropy-like  forms-instances  empirical  walls  arrows  math.DS  structure  causation  quantitative-qualitative  number  extrema  optimization  abstraction  explanation  summary  degrees-of-freedom  whole-partial-many  network-structure  systematic-ad-hoc  tcs  complexity  hardness  no-go  computation  measurement  intricacy  examples  counterexample  coding-theory  linear-algebra  fields  graphs  graph-theory  expanders  math  math.CO  properties  local-global  intuition  error  definition  coupling-cohesion 
january 2017 by nhaliday
gt.geometric topology - Intuitive crutches for higher dimensional thinking - MathOverflow
Terry Tao:
I can't help you much with high-dimensional topology - it's not my field, and I've not picked up the various tricks topologists use to get a grip on the subject - but when dealing with the geometry of high-dimensional (or infinite-dimensional) vector spaces such as R^n, there are plenty of ways to conceptualise these spaces that do not require visualising more than three dimensions directly.

For instance, one can view a high-dimensional vector space as a state space for a system with many degrees of freedom. A megapixel image, for instance, is a point in a million-dimensional vector space; by varying the image, one can explore the space, and various subsets of this space correspond to various classes of images.

One can similarly interpret sound waves, a box of gases, an ecosystem, a voting population, a stream of digital data, trials of random variables, the results of a statistical survey, a probabilistic strategy in a two-player game, and many other concrete objects as states in a high-dimensional vector space, and various basic concepts such as convexity, distance, linearity, change of variables, orthogonality, or inner product can have very natural meanings in some of these models (though not in all).

It can take a bit of both theory and practice to merge one's intuition for these things with one's spatial intuition for vectors and vector spaces, but it can be done eventually (much as after one has enough exposure to measure theory, one can start merging one's intuition regarding cardinality, mass, length, volume, probability, cost, charge, and any number of other "real-life" measures).

For instance, the fact that most of the mass of a unit ball in high dimensions lurks near the boundary of the ball can be interpreted as a manifestation of the law of large numbers, using the interpretation of a high-dimensional vector space as the state space for a large number of trials of a random variable.

More generally, many facts about low-dimensional projections or slices of high-dimensional objects can be viewed from a probabilistic, statistical, or signal processing perspective.
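
(My aside: a quick numerical check of the "mass lurks near the boundary" point. The fraction of a unit ball's volume lying within radius 1 - eps is (1 - eps)^n, which collapses as n grows.)

```python
# Fraction of the unit n-ball's volume inside radius 1 - eps is (1 - eps)^n,
# so almost all of the mass sits in a thin shell near the boundary for large n.
eps = 0.01
for n in (3, 100, 1000, 10000):
    inner = (1 - eps) ** n
    print(f"n={n:>6}: within radius {1 - eps}: {inner:.3g}, "
          f"in the outer 1% shell: {1 - inner:.3g}")
```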

Scott Aaronson:
Here are some of the crutches I've relied on. (Admittedly, my crutches are probably much more useful for theoretical computer science, combinatorics, and probability than they are for geometry, topology, or physics. On a related note, I personally have a much easier time thinking about R^n than about, say, R^4 or R^5!)

1. If you're trying to visualize some 4D phenomenon P, first think of a related 3D phenomenon P', and then imagine yourself as a 2D being who's trying to visualize P'. The advantage is that, unlike with the 4D vs. 3D case, you yourself can easily switch between the 3D and 2D perspectives, and can therefore get a sense of exactly what information is being lost when you drop a dimension. (You could call this the "Flatland trick," after the most famous literary work to rely on it.)
2. As someone else mentioned, discretize! Instead of thinking about R^n, think about the Boolean hypercube {0,1}^n, which is finite and usually easier to get intuition about. (When working on problems, I often find myself drawing {0,1}^4 on a sheet of paper by drawing two copies of {0,1}^3 and then connecting the corresponding vertices.)
3. Instead of thinking about a subset S⊆R^n, think about its characteristic function f:R^n→{0,1}. I don't know why that trivial perspective switch makes such a big difference, but it does ... maybe because it shifts your attention to the process of computing f, and makes you forget about the hopeless task of visualizing S!
4. One of the central facts about R^n is that, while it has "room" for only n orthogonal vectors, it has room for exp(n) almost-orthogonal vectors. Internalize that one fact, and so many other properties of R^n (for example, that the n-sphere resembles a "ball with spikes sticking out," as someone mentioned before) will suddenly seem non-mysterious. In turn, one way to internalize the fact that R^n has so many almost-orthogonal vectors is to internalize Shannon's theorem that there exist good error-correcting codes.
5. To get a feel for some high-dimensional object, ask questions about the behavior of a process that takes place on that object. For example: if I drop a ball here, which local minimum will it settle into? How long does this random walk on {0,1}^n take to mix?

Gil Kalai:
This is a slightly different point, but Vitali Milman, who works in high-dimensional convexity, likes to draw high-dimensional convex bodies in a non-convex way. This is to convey the point that if you take the convex hull of a few points on the unit sphere of R^n, then for large n very little of the measure of the convex body is anywhere near the corners, so in a certain sense the body is a bit like a small sphere with long thin "spikes".
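
(Another aside, on Aaronson's point 4: pairwise inner products of random unit vectors in R^n concentrate around 0 with spread roughly 1/sqrt(n), which is why exponentially many nearly-orthogonal directions fit. Quick numerical sketch, assuming numpy is available:)

```python
# Random unit vectors in R^n are nearly orthogonal: their pairwise inner products
# concentrate around 0 with spread ~ 1/sqrt(n).
import numpy as np

rng = np.random.default_rng(0)
for n in (10, 100, 1000, 10000):
    V = rng.standard_normal((200, n))
    V /= np.linalg.norm(V, axis=1, keepdims=True)     # 200 random unit vectors
    G = V @ V.T                                       # all pairwise inner products
    off_diag = G[~np.eye(200, dtype=bool)]
    print(f"n={n:>6}: max |<v_i, v_j>| over 200 vectors = {np.abs(off_diag).max():.3f}")
```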
q-n-a  intuition  math  visual-understanding  list  discussion  thurston  tidbits  aaronson  tcs  geometry  problem-solving  yoga  👳  big-list  metabuch  tcstariat  gowers  mathtariat  acm  overflow  soft-question  levers  dimensionality  hi-order-bits  insight  synthesis  thinking  models  cartoons  coding-theory  information-theory  probability  concentration-of-measure  magnitude  linear-algebra  boolean-analysis  analogy  arrows  lifts-projections  measure  markov  sampling  shannon  conceptual-vocab  nibble  degrees-of-freedom  worrydream  neurons  retrofit  oscillation  paradox  novelty  tricki  concrete  high-dimension  s:***  manifolds  direction  curvature  convexity-curvature  elegance 
december 2016 by nhaliday
What’s the catch? | West Hunter
Neanderthals and the Wrath of Khan

if someone were to try to create a Neanderthal a few years from now, starting with ancient DNA, they’d have to worry a lot about data errors, because such errors would translate into mutations, which might be harmful or even lethal. Assume that we have figured out how to get the gene expression right, have all the proper methylation etc.: we have modern humans as a template and you know there isn’t that much difference.

They might try consensus averaging – take three high-quality Neanderthal genomes and make your synthetic genome by majority rule: we ignore a nucleotide change in one genome if it’s not there in the other two. ‘tell me three times’, a simple form of error-correcting code.

But doing this would cause a problem. Can you see what the problem is?
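
A literal sketch of the "tell me three times" scheme: per-site majority vote over three copies. The toy sequences below are made up; each carries a different single-site error (lowercase):

```python
# 'Tell me three times': per-position majority vote over three noisy copies.
# A single-site error in any one copy is outvoted by the other two; the scheme
# fails only where two or more copies err at the same position.
from collections import Counter

def consensus(genomes):
    return "".join(Counter(column).most_common(1)[0][0] for column in zip(*genomes))

g1 = "ACGTACGTAC"
g2 = "ACGTACtTAC"
g3 = "ACcTACGTAC"
print(consensus([g1, g2, g3]))   # ACGTACGTAC - both errors voted out
```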
west-hunter  sapiens  speculation  enhancement  archaics  discussion  genetics  genetic-load  🌞  gedanken  unintended-consequences  cocktail  error  aDNA  signal-noise  coding-theory  scitariat  wild-ideas  ideas  archaeology  perturbation  iteration-recursion  duplication  forms-instances  traces 
november 2016 by nhaliday
