nhaliday + 👳   110

"Surely You're Joking, Mr. Feynman!": Adventures of a Curious Character ... - Richard P. Feynman - Google Books
Actually, there was a certain amount of genuine quality to my guesses. I had a scheme, which I still use today when somebody is explaining something that I'm trying to understand: I keep making up examples. For instance, the mathematicians would come in with a terrific theorem, and they're all excited. As they're telling me the conditions of the theorem, I construct something which fits all the conditions. You know, you have a set (one ball) -- disjoint (two balls). Then the balls turn colors, grow hairs, or whatever, in my head as they put more conditions on. Finally they state the theorem, which is some dumb thing about the ball which isn't true for my hairy green ball thing, so I say, "False!"
physics  math  feynman  thinking  empirical  examples  lens  intuition  operational  stories  metabuch  visual-understanding  thurston  hi-order-bits  geometry  topology  cartoons  giants  👳  nibble  the-trenches  metameta  meta:math  s:**  quotes  gbooks  elegance
january 2017 by nhaliday
pr.probability - What is convolution intuitively? - MathOverflow
I remember as a graduate student that Ingrid Daubechies frequently referred to convolution by a bump function as "blurring" - its effect on images is similar to what a short-sighted person experiences when taking off his or her glasses (and, indeed, if one works through the geometric optics, convolution is not a bad first approximation for this effect). I found this to be very helpful, not just for understanding convolution per se, but as a lesson that one should try to use physical intuition to model mathematical concepts whenever one can.

More generally, if one thinks of functions as fuzzy versions of points, then convolution is the fuzzy version of addition (or sometimes multiplication, depending on the context). The probabilistic interpretation is one example of this (where the fuzz is a probability distribution), but one can also have signed, complex-valued, or vector-valued fuzz, of course.
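The "fuzzy addition" picture is easy to check numerically (a sketch; the two pmfs below are made up for illustration): the distribution of a sum X + Y of independent discrete random variables is exactly the convolution of their pmfs.

```python
import numpy as np

# Two "fuzzy points": probability mass functions on {0,...,5} and {0,...,4}.
rng = np.random.default_rng(0)
p = np.array([0.1, 0.2, 0.3, 0.2, 0.1, 0.1])  # pmf of X
q = np.array([0.3, 0.3, 0.2, 0.1, 0.1])       # pmf of Y

# "Fuzzy addition": the pmf of X + Y is the convolution p * q.
pmf_sum = np.convolve(p, q)

# Sanity check against a Monte Carlo estimate of the law of X + Y.
x = rng.choice(len(p), size=200_000, p=p)
y = rng.choice(len(q), size=200_000, p=q)
empirical = np.bincount(x + y, minlength=len(pmf_sum)) / 200_000

assert abs(pmf_sum.sum() - 1.0) < 1e-9        # still a pmf
assert np.max(np.abs(pmf_sum - empirical)) < 0.01
```

Here the "fuzz" is a probability distribution; the same `np.convolve` call works unchanged for signed or complex-valued fuzz.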
q-n-a  overflow  math  concept  atoms  intuition  motivation  gowers  visual-understanding  aphorism  soft-question  tidbits  👳  mathtariat  cartoons  ground-up  metabuch  analogy  nibble  yoga  neurons  retrofit  optics  concrete  s:*  multiplicative  fourier
january 2017 by nhaliday
soft question - Thinking and Explaining - MathOverflow
- good question from Bill Thurston
- great answers by Terry Tao, fedja, Minhyong Kim, gowers, etc.

Terry Tao:
- symmetry as blurring/vibrating/wobbling, scale invariance
- anthropomorphization, adversarial perspective for estimates/inequalities/quantifiers, spending/economy

fedja walks through his thought process from another answer

Minhyong Kim: anthropology of mathematical philosophizing

Per Vognsen: normality as isotropy
comment: conjugate subgroup gHg^-1 ~ "H but somewhere else in G"
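That comment's picture can be made concrete in a toy case (an illustrative sketch of my own, not from the thread): in S_3, conjugating the stabilizer of a point by g gives the stabilizer of the point g moves it to, i.e. the same subgroup "somewhere else in G".

```python
from itertools import permutations

# Elements of S_3 as tuples: g maps i -> g[i].
G = list(permutations(range(3)))

def compose(g, h):  # (g ∘ h)(i) = g[h[i]]
    return tuple(g[h[i]] for i in range(3))

def inverse(g):
    inv = [0, 0, 0]
    for i, gi in enumerate(g):
        inv[gi] = i
    return tuple(inv)

def stabilizer(point):  # the subgroup of G fixing a given point
    return {g for g in G if g[point] == point}

H = stabilizer(0)
for g in G:
    conj = {compose(compose(g, h), inverse(g)) for h in H}
    # gHg^-1 is again a stabilizer -- "H, but somewhere else in G":
    assert conj == stabilizer(g[0])
```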

gowers: hidden things in basic mathematics/arithmetic
comment by Ryan Budney: x sin(x) via x -> (x, sin(x)), (x, y) -> xy
I kinda get what he's talking about but needed to use Mathematica to get the initial visualization down.
To remind myself later:
- xy can be easily visualized by juxtaposing the two parabolae x^2 and -x^2 diagonally
- x sin(x) can be visualized along that surface by moving your finger along the line (x, 0) but adding some oscillations in the y direction according to sin(x)
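A sketch of that reminder in code (names are mine): the graph of x·sin(x) is the multiplication surface z = xy traced along the path x -> (x, sin x), and the two diagonal parabolas are the surface's slices along y = ±x.

```python
import math

# The multiplication surface z = x*y.
def surface(x, y):
    return x * y

for i in range(-50, 51):
    t = i / 10.0
    # Height of the surface above the point (t, sin t)
    # equals the value of the curve x*sin(x) at t.
    assert abs(surface(t, math.sin(t)) - t * math.sin(t)) < 1e-12
    # Along the diagonals y = +/-x the surface is the two parabolas +/-x^2:
    assert abs(surface(t, t) - t * t) < 1e-12
    assert abs(surface(t, -t) + t * t) < 1e-12
```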
q-n-a  soft-question  big-list  intuition  communication  teaching  math  thinking  writing  thurston  lens  overflow  synthesis  hi-order-bits  👳  insight  meta:math  clarity  nibble  giants  cartoons  gowers  mathtariat  better-explained  stories  the-trenches  problem-solving  homogeneity  symmetry  fedja  examples  philosophy  big-picture  vague  isotropy  reflection  spatial  ground-up  visual-understanding  polynomials  dimensionality  math.GR  worrydream  scholar  🎓  neurons  metabuch  yoga  retrofit  mental-math  metameta  wisdom  wordlessness  oscillation  operational  adversarial  quantifiers-sums  exposition  explanation  tricki  concrete  s:***  manifolds  invariance  dynamical  info-dynamics  cool  direction  elegance
january 2017 by nhaliday
soft question - Why does Fourier analysis of Boolean functions "work"? - Theoretical Computer Science Stack Exchange
Here is my point of view, which I learned from Guy Kindler, though someone more experienced can probably give a better answer: Consider the linear space of functions f: {0,1}^n -> R and consider a linear operator of the form σ_w (for w in {0,1}^n), that maps a function f(x) as above to the function f(x+w). In many of the questions of TCS, there is an underlying need to analyze the effects that such operators have on certain functions.

Now, the point is that the Fourier basis is the basis that diagonalizes all those operators at the same time, which makes the analysis of those operators much simpler. More generally, the Fourier basis diagonalizes the convolution operator, which also underlies many of those questions. Thus, Fourier analysis is likely to be effective whenever one needs to analyze those operators.
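The diagonalization claim can be verified directly on a small cube (an illustrative sketch, not part of the answer): every Fourier character χ_S(x) = (-1)^⟨S,x⟩ is an eigenfunction of each shift σ_w, with eigenvalue (-1)^⟨S,w⟩.

```python
from itertools import product

n = 3
cube = list(product((0, 1), repeat=n))

def chi(S):
    # Fourier character chi_S(x) = (-1)^<S, x>
    return {x: (-1) ** sum(s * xi for s, xi in zip(S, x)) for x in cube}

def shift(f, w):
    # (sigma_w f)(x) = f(x + w), addition coordinate-wise mod 2
    return {x: f[tuple((xi + wi) % 2 for xi, wi in zip(x, w))] for x in cube}

for S in cube:
    f = chi(S)
    for w in cube:
        eig = (-1) ** sum(s * wi for s, wi in zip(S, w))
        # chi_S is an eigenfunction of every shift sigma_w simultaneously:
        assert shift(f, w) == {x: eig * f[x] for x in cube}
```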
q-n-a  math  tcs  synthesis  boolean-analysis  fourier  👳  tidbits  motivation  intuition  linear-algebra  overflow  hi-order-bits  insight  curiosity  ground-up  arrows  nibble  s:*  elegance
december 2016 by nhaliday
gt.geometric topology - Intuitive crutches for higher dimensional thinking - MathOverflow
Terry Tao:
I can't help you much with high-dimensional topology - it's not my field, and I've not picked up the various tricks topologists use to get a grip on the subject - but when dealing with the geometry of high-dimensional (or infinite-dimensional) vector spaces such as R^n, there are plenty of ways to conceptualise these spaces that do not require visualising more than three dimensions directly.

For instance, one can view a high-dimensional vector space as a state space for a system with many degrees of freedom. A megapixel image, for instance, is a point in a million-dimensional vector space; by varying the image, one can explore the space, and various subsets of this space correspond to various classes of images.

One can similarly interpret sound waves, a box of gases, an ecosystem, a voting population, a stream of digital data, trials of random variables, the results of a statistical survey, a probabilistic strategy in a two-player game, and many other concrete objects as states in a high-dimensional vector space, and various basic concepts such as convexity, distance, linearity, change of variables, orthogonality, or inner product can have very natural meanings in some of these models (though not in all).

It can take a bit of both theory and practice to merge one's intuition for these things with one's spatial intuition for vectors and vector spaces, but it can be done eventually (much as after one has enough exposure to measure theory, one can start merging one's intuition regarding cardinality, mass, length, volume, probability, cost, charge, and any number of other "real-life" measures).

For instance, the fact that most of the mass of a unit ball in high dimensions lurks near the boundary of the ball can be interpreted as a manifestation of the law of large numbers, using the interpretation of a high-dimensional vector space as the state space for a large number of trials of a random variable.

More generally, many facts about low-dimensional projections or slices of high-dimensional objects can be viewed from a probabilistic, statistical, or signal processing perspective.
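The boundary-mass fact also has an elementary volume-scaling version that is easy to check (a sketch; it sidesteps the law-of-large-numbers interpretation): the fraction of a unit ball's volume lying inside radius r is r^n, so even r = 0.99 contains almost nothing of the ball once n is large.

```python
# Fraction of the unit ball's volume within radius r in dimension n:
# volume scales like radius^dimension, so the fraction is r^n.
def inner_fraction(r, n):
    return r ** n

# In 3 dimensions, radius 0.99 still holds ~97% of the mass...
assert abs(inner_fraction(0.99, 3) - 0.970299) < 1e-6
# ...but in 1000 dimensions almost everything is in the thin outer shell:
assert inner_fraction(0.99, 1000) < 5e-5   # 0.99^1000 is about 4.3e-5
```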

Scott Aaronson:
Here are some of the crutches I've relied on. (Admittedly, my crutches are probably much more useful for theoretical computer science, combinatorics, and probability than they are for geometry, topology, or physics. On a related note, I personally have a much easier time thinking about R^n than about, say, R^4 or R^5!)

1. If you're trying to visualize some 4D phenomenon P, first think of a related 3D phenomenon P', and then imagine yourself as a 2D being who's trying to visualize P'. The advantage is that, unlike with the 4D vs. 3D case, you yourself can easily switch between the 3D and 2D perspectives, and can therefore get a sense of exactly what information is being lost when you drop a dimension. (You could call this the "Flatland trick," after the most famous literary work to rely on it.)
2. As someone else mentioned, discretize! Instead of thinking about R^n, think about the Boolean hypercube {0,1}^n, which is finite and usually easier to get intuition about. (When working on problems, I often find myself drawing {0,1}^4 on a sheet of paper by drawing two copies of {0,1}^3 and then connecting the corresponding vertices.)
3. Instead of thinking about a subset S ⊆ R^n, think about its characteristic function f: R^n → {0,1}. I don't know why that trivial perspective switch makes such a big difference, but it does ... maybe because it shifts your attention to the process of computing f, and makes you forget about the hopeless task of visualizing S!
4. One of the central facts about R^n is that, while it has "room" for only n orthogonal vectors, it has room for exp(n) almost-orthogonal vectors. Internalize that one fact, and so many other properties of R^n (for example, that the n-sphere resembles a "ball with spikes sticking out," as someone mentioned before) will suddenly seem non-mysterious. In turn, one way to internalize the fact that R^n has so many almost-orthogonal vectors is to internalize Shannon's theorem that there exist good error-correcting codes.
5. To get a feel for some high-dimensional object, ask questions about the behavior of a process that takes place on that object. For example: if I drop a ball here, which local minimum will it settle into? How long does this random walk on {0,1}^n take to mix?
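Point 4 is easy to check empirically (a sketch with made-up parameters): random unit sign vectors in R^n have pairwise inner products of typical size about 1/√n, so large families of them are almost orthogonal.

```python
import math
import random

random.seed(0)
n = 400   # ambient dimension
m = 60    # number of random vectors (kept small; pure-Python pairwise checks)

# Random sign vectors, normalized to unit length.
vecs = [[random.choice((-1, 1)) / math.sqrt(n) for _ in range(n)]
        for _ in range(m)]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

pairs = [abs(dot(vecs[i], vecs[j]))
         for i in range(m) for j in range(i + 1, m)]

assert all(abs(dot(v, v) - 1.0) < 1e-9 for v in vecs)  # unit vectors
# Pairwise inner products concentrate near 0 at scale ~ 1/sqrt(n):
assert max(pairs) < 6 / math.sqrt(n)
```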

Gil Kalai:
This is a slightly different point, but Vitali Milman, who works in high-dimensional convexity, likes to draw high-dimensional convex bodies in a non-convex way. This is to convey the point that if you take the convex hull of a few points on the unit sphere of R^n, then for large n very little of the measure of the convex body is anywhere near the corners, so in a certain sense the body is a bit like a small sphere with long thin "spikes".
q-n-a  intuition  math  visual-understanding  list  discussion  thurston  tidbits  aaronson  tcs  geometry  problem-solving  yoga  👳  big-list  metabuch  tcstariat  gowers  mathtariat  acm  overflow  soft-question  levers  dimensionality  hi-order-bits  insight  synthesis  thinking  models  cartoons  coding-theory  information-theory  probability  concentration-of-measure  magnitude  linear-algebra  boolean-analysis  analogy  arrows  lifts-projections  measure  markov  sampling  shannon  conceptual-vocab  nibble  degrees-of-freedom  worrydream  neurons  retrofit  oscillation  paradox  novelty  tricki  concrete  high-dimension  s:***  manifolds  direction  curvature  convexity-curvature  elegance
december 2016 by nhaliday
COS597C: How to solve it
- Familiarity with tools. You have to know the basic mathematical and conceptual tools, and over the semester we will encounter quite a few of them.
- Background reading on your topic. What is already known and how was it proven? Research involves figuring out how to stand on the shoulders of others (could be giants, midgets, or normal-sized people).
- Ability to generate new ideas and spot the ones that don't work. I cannot stress the second part enough. The only way you generate new ideas is by shooting down the ones you already have.
- Flashes of genius. Somewhat overrated; the other three points are more important. Insights come to the well-prepared.
course  tcs  princeton  yoga  👳  unit  toolkit  metabuch  problem-solving  sanjeev-arora  wire-guided  s:*  p:**
october 2016 by nhaliday
