nhaliday + math.co   93

co.combinatorics - Classification of Platonic solids - MathOverflow
My question is very basic: where can I find a complete (and hopefully self-contained) proof of the classification of Platonic solids? In all the references that I found, they use Euler's formula v−e+f=2 to show that there are exactly five possible triples (v,e,f). But of course this is not a complete proof because it does not rule out the possibility of different configurations or deformations. Has anyone ever written up a complete proof of this statement?!

...

This is a classical question. Here is my reading of it: Why is there a unique polytope with given combinatorics of faces, which are all regular polygons? Of course, for simple polytopes (tetrahedron, cube, dodecahedron) this is clear, but for the octahedron and icosahedron this is less clear.

The answer lies in Cauchy's rigidity theorem. It was Legendre who, while writing his Elements of Geometry and Trigonometry, noticed that Euclid's proof in the Elements is incomplete. Curiously, Euclid finds the radii of both the inscribed and circumscribed spheres (correctly) without ever explaining why they exist. Cauchy worked out a proof while still a student in 1813, more or less specifically for this purpose. That proof also had a technical gap, which was found and patched by Steinitz in the 1920s.

The complete (corrected) proof can be found in the celebrated Proofs from the Book, or in Marcel Berger's Geometry. My book gives a bit more of historical context and some soft arguments (ch. 19). It's worth comparing this proof with (an erroneous) pre-Steinitz exposition, say in Hadamard's Leçons de Géométrie Elémentaire II, or with an early post-Steinitz correct but tedious proof given in (otherwise, excellent) Alexandrov's monograph (see also ch.26 in my book which compares all the approaches).

P.S. Note that Coxeter in Regular Polytopes avoids this issue completely by taking a different (modern) definition of regular polytopes (as those symmetric under group actions). For a modern exposition and the state of the art of this approach, see McMullen and Schulte's Abstract Regular Polytopes.

https://en.wikipedia.org/wiki/Platonic_solid#Classification
https://mathoverflow.net/questions/46502/on-the-number-of-archimedean-solids
q-n-a  overflow  math  topology  geometry  math.CO  history  iron-age  mediterranean  the-classics  multi  curiosity  clarity  proofs  nibble  wiki  reference  characterization  uniqueness  list  ground-up
july 2017 by nhaliday
Interview: Mostly Sealing Wax | West Hunter
https://soundcloud.com/user-519115521/greg-cochran-part-2
https://medium.com/@houstoneuler/annotating-part-2-of-the-greg-cochran-interview-with-james-miller-678ba33f74fc

- conformity and Google, defense and spying (China knows prob almost all our "secrets")
- in the past you could just find new things faster than people could reverse-engineer. part of the problem is that innovation is slowing down today (part of the reason for convergence by China/developing world).
- introgression from archaics of various kinds
- mutational load and IQ, wrath of khan neanderthal
- trade and antiquity (not that useful besides ideas tbh), Roman empire, disease, smallpox
- spices needed to be grown elsewhere, but besides that...
- analogy: caste system in India (why no Brahmin car repairmen?), slavery in Greco-Roman times, more water mills in medieval times (rivers better in north, but still could have done it), new elite not liking getting hands dirty, low status of engineers, rise of finance
- crookery in finance, hedge fund edge might be substantially insider trading
- long-term wisdom of moving all manufacturing to China...?
- economic myopia: British financialization before WW1 vis-a-vis Germany. North vs. South and cotton/industry, camels in Middle East vs. wagons in Europe
- Western medicine easier to convert to science than Eastern, pseudoscience and wrong theories better than bag of recipes
- Greeks definitely knew some things that were lost (eg, line in Pliny makes reference to combinatorics calculation rediscovered by German dude much later. think he's referring to Catalan numbers?), Lucio Russo book
- Indo-Europeans, Western Europe, Amerindians, India, British Isles, gender, disease, and conquest
- no farming (Dark Age), then why were people still farming on Shetland Islands north of Scotland?
- "symbolic" walls, bodies with arrows
- family stuff, children learning, talking dog, memory and aging
- Chinese/Japanese writing difficulty and children learning to read
- Hatfield-McCoy feud: the McCoy family was actually a case study in a neurological journal. they had anger management issues because of cancers of their adrenal gland (!!).

the Chinese know...: https://macropolo.org/casting-off-real-beijings-cryptic-warnings-finance-taking-economy/
Over the last couple of years, a cryptic idiom has crept into the way China’s top leaders talk about risks in the country’s financial system: tuo shi xiang xu (脱实向虚), which loosely translates as “casting off the real for the empty.” Premier Li Keqiang warned against it at his press conference at the end of the 2016 National People’s Congress (NPC). At this year’s NPC, Li inserted this very expression into his annual work report. And in April, while on an inspection tour of Guangxi, President Xi Jinping used the term, saying that China must “unceasingly promote industrial modernization, raise the level of manufacturing, and not allow the real to be cast off for the empty.”

Such an odd turn of phrase is easy to overlook, but it belies concerns about a significant shift in the way that China’s economy works. What Xi and Li were warning against is typically called financialization in developed economies. It’s when “real” companies—industrial firms, manufacturers, utility companies, property developers, and anyone else that produces a tangible product or service—take their money and, rather than put it back into their businesses, invest it in “empty”, or speculative, assets. It occurs when the returns on financial investments outstrip those in the real economy, leading to a disproportionate amount of money being routed into the financial system.
west-hunter  interview  audio  podcast  econotariat  cracker-econ  westminster  culture-war  polarization  tech  sv  google  info-dynamics  business  multi  military  security  scitariat  intel  error  government  defense  critique  rant  race  clown-world  patho-altruism  history  mostly-modern  cold-war  russia  technology  innovation  stagnation  being-right  archaics  gene-flow  sapiens  genetics  the-trenches  thinking  sequential  similarity  genomics  bioinformatics  explanation  europe  asia  china  migration  evolution  recent-selection  immune  atmosphere  latin-america  ideas  sky  developing-world  embodied  africa  MENA  genetic-load  unintended-consequences  iq  enhancement  aDNA  gedanken  mutation  QTL  missing-heritability  tradeoffs  behavioral-gen  biodet  iron-age  mediterranean  the-classics  trade  gibbon  disease  parasites-microbiome  demographics  population  urban  transportation  efficiency  cost-benefit  india  agriculture  impact  status  class  elite  vampire-squid  analogy  finance  higher-ed  trends  rot  zeitgeist  🔬  hsu  stories  aphorism  crooked  realne
may 2017 by nhaliday
inequalities - Is the Jaccard distance a distance? - MathOverflow
Steinhaus Transform
the referenced survey: http://kenclarkson.org/nn_survey/p.pdf

It's known that this transformation produces a metric from a metric. Now if you take as the base metric D the symmetric difference between two sets, what you end up with is the Jaccard distance (which actually is known by many other names as well).
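The construction in the answer is easy to sketch in code (a minimal illustration; the function names `steinhaus` and `symmetric_difference_metric` are mine, and the base point of the transform is taken to be the empty set):

```python
def symmetric_difference_metric(x, y):
    # base metric D: size of the symmetric difference of two sets
    return len(x ^ y)

def steinhaus(d, a):
    # Steinhaus transform of metric d with base point a:
    #   delta(x, y) = 2 d(x,y) / (d(x,a) + d(y,a) + d(x,y))
    # which is again a metric whenever d is
    def delta(x, y):
        denom = d(x, a) + d(y, a) + d(x, y)
        return 0.0 if denom == 0 else 2 * d(x, y) / denom
    return delta

# with D = symmetric difference and a = empty set, this is Jaccard distance:
# delta(X, Y) = 1 - |X ∩ Y| / |X ∪ Y|
jaccard = steinhaus(symmetric_difference_metric, frozenset())

x, y = {1, 2, 3}, {2, 3, 4}
assert jaccard(x, y) == 1 - len(x & y) / len(x | y)
```

The algebra behind the last comment: with d(X,∅) = |X| the denominator becomes |X| + |Y| + |XΔY| = 2|X∪Y|, and the numerator is 2(|X∪Y| − |X∩Y|).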
q-n-a  overflow  nibble  math  acm  sublinear  metrics  metric-space  proofs  math.CO  tcstariat  arrows  reduction  measure  math.MG  similarity  multi  papers  survey  computational-geometry  cs  algorithms  pdf  positivity  msr  tidbits  intersection  curvature  convexity-curvature  intersection-connectedness  signum
february 2017 by nhaliday
st.statistics - Lower bound for sum of binomial coefficients? - MathOverflow
- basically approximate w/ geometric sum (which scales as final term) and you can get it up to O(1) factor
- not good enough for many applications (want 1+o(1) approx.)
- Stirling can also give bound to constant factor precision w/ more calculation I believe
- tighter bound at Section 7.3 here: http://webbuild.knu.ac.kr/~trj/Combin/matousek-vondrak-prob-ln.pdf
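The first bullet's geometric-sum trick can be checked numerically (a sketch for the standard regime k < n/2; the function names are mine):

```python
from math import comb

def binom_sum(n, k):
    # exact partial sum: sum_{i=0}^{k} C(n, i)
    return sum(comb(n, i) for i in range(k + 1))

def geometric_upper_bound(n, k):
    # successive ratios C(n, i-1)/C(n, i) = i/(n-i+1) <= k/(n-k+1) =: r < 1
    # for i <= k < n/2, so the sum is dominated by a geometric series
    # ending at the final term:
    #   sum <= C(n, k) / (1 - r) = C(n, k) * (n-k+1) / (n-2k+1)
    assert 2 * k < n
    return comb(n, k) * (n - k + 1) / (n - 2 * k + 1)

n, k = 100, 30
s = binom_sum(n, k)
# the sum is sandwiched within an O(1) factor of the final term
assert comb(n, k) <= s <= geometric_upper_bound(n, k)
```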
q-n-a  overflow  nibble  math  math.CO  estimate  tidbits  magnitude  concentration-of-measure  stirling  binomial  metabuch  tricki  multi  tightness  pdf  lecture-notes  exposition  probability  probabilistic-method  yoga
february 2017 by nhaliday
What is the relationship between information theory and Coding theory? - Quora
basically:
- finite vs. asymptotic
- combinatorial vs. probabilistic (lotsa overlap there)
- worst-case (Hamming) vs. distributional (Shannon)

Information and coding theory most often appear together in the subject of error correction over noisy channels. Historically, they were born at almost exactly the same time - both Richard Hamming and Claude Shannon were working at Bell Labs when this happened. Information theory tends to heavily use tools from probability theory (together with an "asymptotic" way of thinking about the world), while traditional "algebraic" coding theory tends to employ mathematics that is much more finite and combinatorial in nature (fixed sequence lengths), including linear algebra over Galois fields. The emergence of codes over graphs in the late 1990s and 2000s blurred this distinction, though, as code classes such as low-density parity-check codes employ both asymptotic analysis and random code selection techniques which have counterparts in information theory.

They do not subsume each other. Information theory touches on many other aspects that coding theory does not, and vice-versa. Information theory also touches on compression (lossy & lossless), statistics (e.g. large deviations), modeling (e.g. Minimum Description Length). Coding theory pays a lot of attention to sphere packing and coverings for finite length sequences - information theory addresses these problems (channel & lossy source coding) only in an asymptotic/approximate sense.
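As a tiny illustration of the worst-case (Hamming) side: the (7,4) Hamming code corrects any single adversarial bit flip using nothing but XOR, i.e. linear algebra over GF(2). This is a minimal sketch, not from the answer; the position-XOR formulation below is a standard way to present the code:

```python
def syndrome(word):
    # column j (1-indexed) of the parity-check matrix is the binary
    # representation of j, so the syndrome is just the XOR of the
    # positions of the set bits; it is 0 exactly for valid codewords
    s = 0
    for j, bit in enumerate(word, start=1):
        if bit:
            s ^= j
    return s

def correct(word):
    # a nonzero syndrome directly names the flipped position
    word = list(word)
    s = syndrome(word)
    if s:
        word[s - 1] ^= 1
    return word

codeword = [1, 1, 1, 0, 0, 0, 0]   # valid: 1 ^ 2 ^ 3 == 0
received = codeword[:]
received[4] ^= 1                    # one worst-case bit flip
assert correct(received) == codeword
```

Contrast with the Shannon view, where one asks instead for vanishing error probability over a random channel, in the limit of long blocks.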
q-n-a  qra  math  acm  tcs  information-theory  coding-theory  big-picture  comparison  confusion  explanation  linear-algebra  polynomials  limits  finiteness  math.CO  hi-order-bits  synthesis  probability  bits  hamming  shannon  intricacy  nibble  s:null  signal-noise
february 2017 by nhaliday
reference request - The coupon collector's earworm - MathOverflow
I have a playlist with, say, N pieces of music. While using the shuffle option (a piece is chosen uniformly at random at each step), I noticed that, generally speaking, I have to hear the same pieces quite a lot of times before the last one appears. That makes me think of the following question:

At the moment the last not-yet-heard piece is played, what is the expected maximum number of times any single piece has already been played?

A: e log N + o(log N)
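The answer is easy to sanity-check by simulation (a sketch; n and the trial count are arbitrary choices of mine, and since the result is e log N + o(log N), only rough agreement should be expected at this size):

```python
import random
from math import e, log
from collections import Counter

def max_plays_when_last_appears(n, rng):
    # shuffle-play until every piece has been heard at least once,
    # then report the largest play count of any single piece
    counts = Counter()
    while len(counts) < n:
        counts[rng.randrange(n)] += 1
    return max(counts.values())

rng = random.Random(0)
n, trials = 1000, 200
avg = sum(max_plays_when_last_appears(n, rng) for _ in range(trials)) / trials
print(avg, e * log(n))   # the two should be in the same ballpark
```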
q-n-a  overflow  math  math.CO  tidbits  puzzles  probability  magnitude  oly  nibble  concentration-of-measure  binomial
january 2017 by nhaliday
Soft analysis, hard analysis, and the finite convergence principle | What's new
It is fairly well known that the results obtained by hard and soft analysis respectively can be connected to each other by various “correspondence principles” or “compactness principles”. It is however my belief that the relationship between the two types of analysis is in fact much closer than just this; in many cases, qualitative analysis can be viewed as a convenient abstraction of quantitative analysis, in which the precise dependencies between various finite quantities have been efficiently concealed from view by use of infinitary notation. Conversely, quantitative analysis can often be viewed as a more precise and detailed refinement of qualitative analysis. Furthermore, a method from hard analysis often has some analogue in soft analysis and vice versa, though the language and notation of the analogue may look completely different from that of the original. I therefore feel that it is often profitable for a practitioner of one type of analysis to learn about the other, as they both offer their own strengths, weaknesses, and intuition, and knowledge of one gives more insight into the workings of the other. I wish to illustrate this point here using a simple but not terribly well known result, which I shall call the “finite convergence principle” (thanks to Ben Green for suggesting this name; Jennifer Chayes has also suggested the “metastability principle”). It is the finitary analogue of an utterly trivial infinitary result – namely, that every bounded monotone sequence converges – but sometimes, a careful analysis of a trivial result can be surprisingly revealing, as I hope to demonstrate here.
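For reference, the principle the post builds toward can be stated explicitly (a paraphrase of Tao's formulation, not quoted from the excerpt above):

```latex
% Infinitary form: every bounded monotone sequence converges.
% Finitary ("metastable") form:
\textbf{Finite convergence principle.}
For every $\varepsilon > 0$ and every function $F \colon \mathbb{N} \to \mathbb{N}$
there exists $M = M(\varepsilon, F)$ such that every monotone sequence
$0 \le x_1 \le x_2 \le \dots \le x_M \le 1$ admits an index $n$ with
$F(n) \le M$ and
\[
  |x_i - x_j| \le \varepsilon \quad \text{for all } n \le i, j \le F(n).
\]
```

The point is that the infinitary limit is replaced by an interval of metastability [n, F(n)] whose required length F is chosen by the adversary in advance.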
gowers  mathtariat  math  math.CA  expert  reflection  philosophy  meta:math  logic  math.CO  lens  big-picture  symmetry  limits  finiteness  nibble  org:bleg  coarse-fine  metameta  convergence  expert-experience
january 2017 by nhaliday
(Gil Kalai) The weak epsilon-net problem | What's new
This is a problem in discrete and convex geometry. It seeks to quantify the intuitively obvious fact that large convex bodies are so “fat” that they cannot avoid “detection” by a small number of observation points.
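The excerpt never states the object in question, so for reference (my addition, standard definition from discrete geometry):

```latex
% Weak epsilon-net:
Given a finite point set $X \subset \mathbb{R}^d$ and $\varepsilon > 0$,
a set $N \subset \mathbb{R}^d$ (not necessarily a subset of $X$) is a
\emph{weak $\varepsilon$-net} for $X$ if every convex body $C$ with
$|C \cap X| \ge \varepsilon |X|$ contains a point of $N$.
The weak $\varepsilon$-net problem asks for the smallest $f(\varepsilon, d)$
such that every finite $X \subset \mathbb{R}^d$ admits a weak
$\varepsilon$-net of size at most $f(\varepsilon, d)$.
```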
gowers  mathtariat  tcstariat  tcs  math  concept  rounding  linear-programming  research  open-problems  geometry  math.CO  magnitude  probabilistic-method  math.MG  discrete  nibble  org:bleg  questions  curvature  pigeonhole-markov  convexity-curvature
january 2017 by nhaliday
Shtetl-Optimized » Blog Archive » Why I Am Not An Integrated Information Theorist (or, The Unconscious Expander)
In my opinion, how to construct a theory that tells us which physical systems are conscious and which aren’t—giving answers that agree with “common sense” whenever the latter renders a verdict—is one of the deepest, most fascinating problems in all of science. Since I don’t know a standard name for the problem, I hereby call it the Pretty-Hard Problem of Consciousness. Unlike with the Hard Hard Problem, I don’t know of any philosophical reason why the Pretty-Hard Problem should be inherently unsolvable; but on the other hand, humans seem nowhere close to solving it (if we had solved it, then we could reduce the abortion, animal rights, and strong AI debates to “gentlemen, let us calculate!”).

Now, I regard IIT as a serious, honorable attempt to grapple with the Pretty-Hard Problem of Consciousness: something concrete enough to move the discussion forward. But I also regard IIT as a failed attempt on the problem. And I wish people would recognize its failure, learn from it, and move on.

In my view, IIT fails to solve the Pretty-Hard Problem because it unavoidably predicts vast amounts of consciousness in physical systems that no sane person would regard as particularly “conscious” at all: indeed, systems that do nothing but apply a low-density parity-check code, or other simple transformations of their input data. Moreover, IIT predicts not merely that these systems are “slightly” conscious (which would be fine), but that they can be unboundedly more conscious than humans are.

To justify that claim, I first need to define Φ. Strikingly, despite the large literature about Φ, I had a hard time finding a clear mathematical definition of it—one that not only listed formulas but fully defined the structures that the formulas were talking about. Complicating matters further, there are several competing definitions of Φ in the literature, including ΦDM (discrete memoryless), ΦE (empirical), and ΦAR (autoregressive), which apply in different contexts (e.g., some take time evolution into account and others don’t). Nevertheless, I think I can define Φ in a way that will make sense to theoretical computer scientists. And crucially, the broad point I want to make about Φ won’t depend much on the details of its formalization anyway.

We consider a discrete system in a state x = (x1,…,xn) ∈ S^n, where S is a finite alphabet (the simplest case is S = {0,1}). We imagine that the system evolves via an “updating function” f: S^n → S^n. Then the question that interests us is whether the xi's can be partitioned into two sets A and B, of roughly comparable size, such that the updates to the variables in A don't depend very much on the variables in B and vice versa. If such a partition exists, then we say that the computation of f does not involve “global integration of information,” which on Tononi's theory is a defining aspect of consciousness.
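A brute-force check of this partition condition is small enough to write out. This is a sketch of the exact-decomposability case only (updates in A depend on A alone, i.e. the degenerate Φ = 0 situation); it does not compute Φ itself, and all function names are mine:

```python
from itertools import product, combinations

def depends(f, n, i, j):
    # does output coordinate i of f ever change when input bit j flips?
    for x in product((0, 1), repeat=n):
        y = list(x); y[j] ^= 1
        if f(x)[i] != f(tuple(y))[i]:
            return True
    return False

def splits(f, n):
    # balanced bipartitions (A, B) with no cross-dependence in either
    # direction: the computation of f then involves no "global
    # integration of information" across that cut
    dep = {(i, j) for i in range(n) for j in range(n) if depends(f, n, i, j)}
    good = []
    for A in combinations(range(n), n // 2):
        B = tuple(j for j in range(n) if j not in A)
        if not any((i, j) in dep or (j, i) in dep for i in A for j in B):
            good.append((A, B))
    return good

# two independent 2-bit swaps: f acts separately on {0,1} and {2,3}
f = lambda x: (x[1], x[0], x[3], x[2])
assert ((0, 1), (2, 3)) in splits(f, 4)
# a full 4-bit rotation integrates all variables: no balanced split exists
g = lambda x: (x[3], x[0], x[1], x[2])
assert splits(g, 4) == []
```

Real Φ quantifies *approximate* versions of this condition, which is where the competing definitions (ΦDM, ΦE, ΦAR) diverge.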
aaronson  tcstariat  philosophy  dennett  interdisciplinary  critique  nibble  org:bleg  within-without  the-self  neuro  psychology  cog-psych  metrics  nitty-gritty  composition-decomposition  complex-systems  cybernetics  bits  information-theory  entropy-like  forms-instances  empirical  walls  arrows  math.DS  structure  causation  quantitative-qualitative  number  extrema  optimization  abstraction  explanation  summary  degrees-of-freedom  whole-partial-many  network-structure  systematic-ad-hoc  tcs  complexity  hardness  no-go  computation  measurement  intricacy  examples  counterexample  coding-theory  linear-algebra  fields  graphs  graph-theory  expanders  math  math.CO  properties  local-global  intuition  error  definition
january 2017 by nhaliday
pr.probability - Google question: In a country in which people only want boys - MathOverflow
- limits to 1/2 w/ number of families -> ∞
- proportion of girls in one family is biased estimator of proportion in general population (larger families w/ more girls count more)
- interesting comment on Douglas Zare's answer (whether process has stopped or not)
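Both bullets are easy to verify by simulation (a sketch; each family here has children until its first boy, each child is a girl with probability 1/2 independently, and the sample sizes are arbitrary):

```python
import random

def family(rng):
    # children until the first boy; returns (girls, boys)
    girls = 0
    while rng.random() < 0.5:
        girls += 1
    return girls, 1

rng = random.Random(0)
m = 100_000
g = b = 0
frac_sum = 0.0
for _ in range(m):
    girls, boys = family(rng)
    g += girls; b += boys
    frac_sum += girls / (girls + boys)

print(g / (g + b))   # population proportion of girls: tends to 1/2
print(frac_sum / m)  # mean per-family proportion: tends to 1 - ln 2 ≈ 0.307
```

The gap between the two printed numbers is exactly the bias in the second bullet: averaging per-family proportions over-weights small (boy-heavy) families.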
puzzles  math  google  thinking  probability  q-n-a  gotchas  tidbits  math.CO  overflow  nibble  paradox  gender  bias-variance  stochastic-processes
december 2016 by nhaliday