nhaliday + soft-question   94

soft question - What are good non-English languages for mathematicians to know? - MathOverflow
I'm with Deane here: I think learning foreign languages is not a very mathematically productive thing to do; of course, there are lots of good reasons to learn foreign languages, but doing mathematics is not one of them. Not only are there few modern mathematics papers written in languages other than English, but the primary other language in which they are written (French) is pretty easy to read without actually knowing it.

Even though I've been to France several times, my spoken French mostly consists of "merci," "s'il vous plaît," "d'accord" and some food words; I've still skimmed 100-page papers in French without a lot of trouble.

If nothing else, think of reading a paper in French as a good opportunity to teach Google Translate some mathematical French.
q-n-a  overflow  math  academia  learning  foreign-lang  publishing  science  french  soft-question  math.AG  nibble  quixotic 
february 2019 by nhaliday
general topology - What should be the intuition when working with compactness? - Mathematics Stack Exchange

The situation with compactness is sort of like the above. It turns out that finiteness, which you think of as one concept (in the same way that you think of "Foo" as one concept above), is really two concepts: discreteness and compactness. You've never seen these concepts separated before, though. When people say that compactness is like finiteness, they mean that compactness captures part of what it means to be finite in the same way that shortness captures part of what it means to be Foo.


As many have said, compactness is sort of a topological generalization of finiteness. And this is true in a deep sense, because topology deals with open sets, and this means that we often "care about how something behaves on an open set", and for compact spaces this means that there are only finitely many possible behaviors.


Compactness does for continuous functions what finiteness does for functions in general.

If a set A is finite, then every function f:A→R has a max and a min, and every function f:A→R^n is bounded. If A is compact, then every continuous function from A to R has a max and a min, and every continuous function from A to R^n is bounded.

If A is finite then every sequence of members of A has a subsequence that is eventually constant, and "eventually constant" is the only kind of convergence you can talk about without talking about a topology on the set. If A is compact, then every sequence of members of A has a convergent subsequence.
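A compact restatement of that parallel, as a worked summary in LaTeX (my own paraphrase, not from the thread; the compact-space halves are the extreme value theorem and, in the metric-space setting, sequential compactness):

  % Finiteness vs. compactness, stated in parallel (amsmath)
  \begin{align*}
  A \text{ finite}
    &\implies \text{every } f\colon A \to \mathbb{R} \text{ attains a max and a min,}\\
    &\phantom{\implies}\ \text{and every sequence in } A \text{ has an eventually constant subsequence;}\\
  A \text{ compact}
    &\implies \text{every continuous } f\colon A \to \mathbb{R} \text{ attains a max and a min,}\\
    &\phantom{\implies}\ \text{and every sequence in } A \text{ has a convergent subsequence (metric case).}
  \end{align*}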
q-n-a  overflow  math  topology  math.GN  concept  finiteness  atoms  intuition  oly  mathtariat  multi  discrete  gowers  motivation  synthesis  hi-order-bits  soft-question  limits  things  nibble  definition  convergence  abstraction  span-cover 
january 2017 by nhaliday
ho.history overview - Proofs that require fundamentally new ways of thinking - MathOverflow
my favorite:
Although this has already been said elsewhere on MathOverflow, I think it's worth repeating that Gromov is someone who has arguably introduced more radical thoughts into mathematics than anyone else. Examples involving groups with polynomial growth and holomorphic curves have already been cited in other answers to this question. I have two other obvious ones but there are many more.

I don't remember where I first learned about convergence of Riemannian manifolds, but I had to laugh because there's no way I would have ever conceived of such a notion. To be fair, all of the groundwork for this was laid out in Cheeger's thesis, but it was Gromov who reformulated everything as a convergence theorem and recognized its power.

Another time Gromov made me laugh was when I was reading what little I could understand of his book Partial Differential Relations. This book is probably full of radical ideas that I don't understand. The one I did understand was his approach to solving the linearized isometric embedding equation. His radical, absurd, but elementary idea was that if the system is sufficiently underdetermined, then the linear partial differential operator could be inverted by another linear partial differential operator. Both the statement and proof are for me the funniest in mathematics. Most of us view solving PDEs as something that requires hard work, involving analysis and estimates, and Gromov manages to do it using only elementary linear algebra. This then allows him to establish the existence of isometric embeddings of Riemannian manifolds in a wide variety of settings.
q-n-a  overflow  soft-question  big-list  math  meta:math  history  insight  synthesis  gowers  mathtariat  hi-order-bits  frontier  proofs  magnitude  giants  differential  geometry  limits  flexibility  nibble  degrees-of-freedom  big-picture  novelty  zooming  big-surf  wild-ideas  metameta  courage  convergence  ideas  innovation  the-trenches  discovery  creative  elegance 
january 2017 by nhaliday
pr.probability - What is convolution intuitively? - MathOverflow
I remember as a graduate student that Ingrid Daubechies frequently referred to convolution by a bump function as "blurring" - its effect on images is similar to what a short-sighted person experiences when taking off his or her glasses (and, indeed, if one works through the geometric optics, convolution is not a bad first approximation for this effect). I found this to be very helpful, not just for understanding convolution per se, but as a lesson that one should try to use physical intuition to model mathematical concepts whenever one can.

More generally, if one thinks of functions as fuzzy versions of points, then convolution is the fuzzy version of addition (or sometimes multiplication, depending on the context). The probabilistic interpretation is one example of this (where the fuzz is a probability distribution), but one can also have signed, complex-valued, or vector-valued fuzz, of course.
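A minimal numerical sketch of the "blurring" picture, assuming numpy (the box signal and bump width are arbitrary choices, not from the answer):

  import numpy as np

  x = np.linspace(-5, 5, 1001)
  dx = x[1] - x[0]

  signal = (np.abs(x) < 1).astype(float)       # a sharp-edged box
  bump = np.exp(-x**2 / (2 * 0.3**2))          # a Gaussian bump
  bump /= bump.sum() * dx                      # normalize to unit mass

  # Discretized convolution: the box gets smeared out, like a defocused image.
  blurred = np.convolve(signal, bump, mode="same") * dx

  print(signal.sum() * dx, blurred.sum() * dx)  # total mass is (approximately) preserved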
q-n-a  overflow  math  concept  atoms  intuition  motivation  gowers  visual-understanding  aphorism  soft-question  tidbits  👳  mathtariat  cartoons  ground-up  metabuch  analogy  nibble  yoga  neurons  retrofit  optics  concrete  s:*  multiplicative  fourier 
january 2017 by nhaliday
soft question - Thinking and Explaining - MathOverflow
- good question from Bill Thurston
- great answers by Terry Tao, fedja, Minhyong Kim, gowers, etc.

Terry Tao:
- symmetry as blurring/vibrating/wobbling, scale invariance
- anthropomorphization, adversarial perspective for estimates/inequalities/quantifiers, spending/economy

fedja walks through his thought process from another answer

Minhyong Kim: anthropology of mathematical philosophizing

Per Vognsen: normality as isotropy
comment: conjugate subgroup gHg^-1 ~ "H but somewhere else in G"

gowers: hidden things in basic mathematics/arithmetic
comment by Ryan Budney: x sin(x) via x -> (x, sin(x)), (x, y) -> xy
I kinda get what he's talking about but needed to use Mathematica to get the initial visualization down.
To remind myself later:
- xy can be easily visualized by juxtaposing the two parabolae x^2 and -x^2 diagonally
- x sin(x) can be visualized along that surface by moving your finger along the line (x, 0) while adding oscillations in the y direction according to sin(x)
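A quick way to reproduce that picture, assuming matplotlib (the axis ranges are arbitrary):

  import numpy as np
  import matplotlib.pyplot as plt
  from mpl_toolkits.mplot3d import Axes3D  # noqa: registers the 3d projection on older matplotlib

  fig = plt.figure()
  ax = fig.add_subplot(projection="3d")

  # The multiplication surface z = x*y ...
  u, v = np.meshgrid(np.linspace(-6, 6, 60), np.linspace(-1.5, 1.5, 60))
  ax.plot_surface(u, v, u * v, alpha=0.3)

  # ... and the curve x -> (x, sin x) lifted onto it, whose height is x*sin(x).
  t = np.linspace(-6, 6, 400)
  ax.plot(t, np.sin(t), t * np.sin(t))

  plt.show()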
q-n-a  soft-question  big-list  intuition  communication  teaching  math  thinking  writing  thurston  lens  overflow  synthesis  hi-order-bits  👳  insight  meta:math  clarity  nibble  giants  cartoons  gowers  mathtariat  better-explained  stories  the-trenches  problem-solving  homogeneity  symmetry  fedja  examples  philosophy  big-picture  vague  isotropy  reflection  spatial  ground-up  visual-understanding  polynomials  dimensionality  math.GR  worrydream  scholar  🎓  neurons  metabuch  yoga  retrofit  mental-math  metameta  wisdom  wordlessness  oscillation  operational  adversarial  quantifiers-sums  exposition  explanation  tricki  concrete  s:***  manifolds  invariance  dynamical  info-dynamics  cool  direction  elegance 
january 2017 by nhaliday
gt.geometric topology - Intuitive crutches for higher dimensional thinking - MathOverflow
Terry Tao:
I can't help you much with high-dimensional topology - it's not my field, and I've not picked up the various tricks topologists use to get a grip on the subject - but when dealing with the geometry of high-dimensional (or infinite-dimensional) vector spaces such as R^n, there are plenty of ways to conceptualise these spaces that do not require visualising more than three dimensions directly.

For instance, one can view a high-dimensional vector space as a state space for a system with many degrees of freedom. A megapixel image, for instance, is a point in a million-dimensional vector space; by varying the image, one can explore the space, and various subsets of this space correspond to various classes of images.

One can similarly interpret sound waves, a box of gases, an ecosystem, a voting population, a stream of digital data, trials of random variables, the results of a statistical survey, a probabilistic strategy in a two-player game, and many other concrete objects as states in a high-dimensional vector space, and various basic concepts such as convexity, distance, linearity, change of variables, orthogonality, or inner product can have very natural meanings in some of these models (though not in all).

It can take a bit of both theory and practice to merge one's intuition for these things with one's spatial intuition for vectors and vector spaces, but it can be done eventually (much as after one has enough exposure to measure theory, one can start merging one's intuition regarding cardinality, mass, length, volume, probability, cost, charge, and any number of other "real-life" measures).

For instance, the fact that most of the mass of a unit ball in high dimensions lurks near the boundary of the ball can be interpreted as a manifestation of the law of large numbers, using the interpretation of a high-dimensional vector space as the state space for a large number of trials of a random variable.

More generally, many facts about low-dimensional projections or slices of high-dimensional objects can be viewed from a probabilistic, statistical, or signal processing perspective.
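A tiny numerical check of the "mass near the boundary" fact (my own illustration, not part of the answer): the fraction of a unit ball's volume within radius r is exactly r^n, and uniform samples from the ball have norms concentrating near 1 as n grows.

  import numpy as np

  rng = np.random.default_rng(0)
  for n in (2, 10, 100, 1000):
      inner = 0.9 ** n  # fraction of the ball's volume within radius 0.9
      # Uniform sample from the unit ball: uniform direction, radius U**(1/n).
      g = rng.standard_normal((2000, n))
      pts = g / np.linalg.norm(g, axis=1, keepdims=True)
      pts *= rng.uniform(size=(2000, 1)) ** (1.0 / n)
      print(n, inner, np.median(np.linalg.norm(pts, axis=1)))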

Scott Aaronson:
Here are some of the crutches I've relied on. (Admittedly, my crutches are probably much more useful for theoretical computer science, combinatorics, and probability than they are for geometry, topology, or physics. On a related note, I personally have a much easier time thinking about R^n than about, say, R^4 or R^5!)

1. If you're trying to visualize some 4D phenomenon P, first think of a related 3D phenomenon P', and then imagine yourself as a 2D being who's trying to visualize P'. The advantage is that, unlike with the 4D vs. 3D case, you yourself can easily switch between the 3D and 2D perspectives, and can therefore get a sense of exactly what information is being lost when you drop a dimension. (You could call this the "Flatland trick," after the most famous literary work to rely on it.)
2. As someone else mentioned, discretize! Instead of thinking about R^n, think about the Boolean hypercube {0,1}^n, which is finite and usually easier to get intuition about. (When working on problems, I often find myself drawing {0,1}^4 on a sheet of paper by drawing two copies of {0,1}^3 and then connecting the corresponding vertices.)
3. Instead of thinking about a subset S⊆R^n, think about its characteristic function f:R^n→{0,1}. I don't know why that trivial perspective switch makes such a big difference, but it does ... maybe because it shifts your attention to the process of computing f, and makes you forget about the hopeless task of visualizing S!
4. One of the central facts about R^n is that, while it has "room" for only n orthogonal vectors, it has room for exp(n) almost-orthogonal vectors. Internalize that one fact, and so many other properties of R^n (for example, that the n-sphere resembles a "ball with spikes sticking out," as someone mentioned before) will suddenly seem non-mysterious. In turn, one way to internalize the fact that R^n has so many almost-orthogonal vectors is to internalize Shannon's theorem that there exist good error-correcting codes. [a small numerical sketch of this near-orthogonality follows the list]
5. To get a feel for some high-dimensional object, ask questions about the behavior of a process that takes place on that object. For example: if I drop a ball here, which local minimum will it settle into? How long does this random walk on {0,1}^n take to mix?
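As a rough numerical companion to point 4 (my own sketch, not from the answer): inner products of independent random unit vectors in R^n concentrate around 0 at scale roughly 1/sqrt(n), which is what leaves room for exponentially many pairwise almost-orthogonal vectors.

  import numpy as np

  rng = np.random.default_rng(1)
  for n in (10, 100, 1000, 10000):
      v = rng.standard_normal((200, n))
      v /= np.linalg.norm(v, axis=1, keepdims=True)   # 200 random unit vectors
      dots = np.abs(v @ v.T - np.eye(200))            # |<v_i, v_j>| for i != j
      print(n, dots.max(), 1 / np.sqrt(n))            # max overlap vs the 1/sqrt(n) scale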

Gil Kalai:
This is a slightly different point, but Vitali Milman, who works in high-dimensional convexity, likes to draw high-dimensional convex bodies in a non-convex way. This is to convey the point that if you take the convex hull of a few points on the unit sphere of R^n, then for large n very little of the measure of the convex body is anywhere near the corners, so in a certain sense the body is a bit like a small sphere with long thin "spikes".
q-n-a  intuition  math  visual-understanding  list  discussion  thurston  tidbits  aaronson  tcs  geometry  problem-solving  yoga  👳  big-list  metabuch  tcstariat  gowers  mathtariat  acm  overflow  soft-question  levers  dimensionality  hi-order-bits  insight  synthesis  thinking  models  cartoons  coding-theory  information-theory  probability  concentration-of-measure  magnitude  linear-algebra  boolean-analysis  analogy  arrows  lifts-projections  measure  markov  sampling  shannon  conceptual-vocab  nibble  degrees-of-freedom  worrydream  neurons  retrofit  oscillation  paradox  novelty  tricki  concrete  high-dimension  s:***  manifolds  direction  curvature  convexity-curvature  elegance 
december 2016 by nhaliday
predictive models - Is this the state of art regression methodology? - Cross Validated
I've been following Kaggle competitions for a long time and I have come to realize that many winning strategies involve using at least one of the "big three": bagging, boosting, and stacking.

For regression, rather than focusing on building one best possible regression model, building multiple regression models such as (generalized) linear regression, random forest, KNN, NN, and SVM regression models and blending the results into one in a reasonable way seems to outperform each individual method much of the time.
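A minimal sketch of that blending idea, assuming scikit-learn (the synthetic dataset and base models are placeholders, not the poster's setup):

  from sklearn.datasets import make_regression
  from sklearn.ensemble import RandomForestRegressor, StackingRegressor
  from sklearn.linear_model import LinearRegression, RidgeCV
  from sklearn.model_selection import cross_val_score
  from sklearn.neighbors import KNeighborsRegressor
  from sklearn.svm import SVR

  X, y = make_regression(n_samples=500, n_features=20, noise=10.0, random_state=0)

  base_models = [
      ("linear", LinearRegression()),
      ("rf", RandomForestRegressor(n_estimators=200, random_state=0)),
      ("knn", KNeighborsRegressor(n_neighbors=10)),
      ("svr", SVR(C=1.0)),
  ]
  # Stacking: a meta-learner (here ridge regression) blends the base predictions.
  stack = StackingRegressor(estimators=base_models, final_estimator=RidgeCV())

  for name, model in base_models + [("stack", stack)]:
      score = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
      print(name, round(score, 3))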
q-n-a  state-of-art  machine-learning  acm  data-science  atoms  overflow  soft-question  regression  ensembles  nibble  oly 
november 2016 by nhaliday
soft question - A Book You Would Like to Write - MathOverflow
- The Differential Topology of Loop Spaces
- Knot Theory: Kawaii examples for topological machines
- An Introduction to Forcing (for people who don't care about foundations.)
writing  math  q-n-a  discussion  books  list  synthesis  big-list  overflow  soft-question  techtariat  mathtariat  exposition  topology  open-problems  logic  nibble  fedja  questions 
october 2016 by nhaliday
soft question - How do you not forget old math? - MathOverflow
Terry Tao:
I find that blogging about material that I would otherwise forget eventually is extremely valuable in this regard. (I end up consulting my own blog posts on a regular basis.) EDIT: and now I remember I already wrote on this topic: terrytao.wordpress.com/career-advice/write-down-what-youve-done

The only way to cope with this loss of memory I know is to do some reading on a systematic basis. Of course, if you read one paper in algebraic geometry (or whatever else) a month (or even two months), you may not remember the exact content of all of them by the end of the year but, since all mathematicians in one field use pretty much the same tricks and draw from pretty much the same general knowledge, you'll keep the core things in your memory no matter what you read (provided it is not patented junk, of course) and this is about as much as you can hope for.

Relating abstract things to "real life stuff" (and vice versa) is automatic when you work as a mathematician. For me, the proof of the Chacon-Ornstein ergodic theorem is just a sandpile moving over a pit with the sand falling down after every shift. I often tell my students that no individual term in the sequence matters at all for the limit but somehow together they determine it, like no individual human is of any real importance while together they keep this civilization running, etc. No special effort is needed here and, moreover, if the analogy is not natural but contrived, it'll not be helpful or memorable. The standard mnemonic techniques are pretty useless in math, IMHO (the famous "FOIL" rule for the multiplication of sums of two terms is inferior to the natural "pair each term in the first sum with each term in the second sum" and to the picture of a rectangle tiled with smaller rectangles, though, of course, the FOIL rule sounds way more sexy).

One thing that I don't think the other respondents have emphasized enough is that you should work on prioritizing what you choose to study and remember.

Timothy Chow:
As others have said, forgetting lots of stuff is inevitable. But there are ways you can mitigate the damage of this information loss. I find that a useful technique is to try to organize your knowledge hierarchically. Start by coming up with a big picture, and make sure you understand and remember that picture thoroughly. Then drill down to the next level of detail, and work on remembering that. For example, if I were trying to remember everything in a particular book, I might start by memorizing the table of contents, and then I'd work on remembering the theorem statements, and then finally the proofs. (Don't take this illustration too literally; it's better to come up with your own conceptual hierarchy than to slavishly follow the formal hierarchy of a published text. But I do think that a hierarchical approach is valuable.)

Organizing your knowledge like this helps you prioritize. You can then consciously decide that certain large swaths of knowledge are not worth your time at the moment, and just keep a "stub" in memory to remind you that that body of knowledge exists, should you ever need to dive into it. In areas of higher priority, you can plunge more deeply. By making sure you thoroughly internalize the top levels of the hierarchy, you reduce the risk of losing sight of entire areas of important knowledge. Generally it's less catastrophic to forget the details than to forget about a whole region of the big picture, because you can often revisit the details as long as you know what details you need to dig up. (This is fortunate since the details are the most memory-intensive.)

Having a hierarchy also helps you accrue new knowledge. Often when you encounter something new, you can relate it to something you already know, and file it in the same branch of your mental tree.
thinking  math  growth  advice  expert  q-n-a  🎓  long-term  tradeoffs  scholar  overflow  soft-question  gowers  mathtariat  ground-up  hi-order-bits  intuition  synthesis  visual-understanding  decision-making  scholar-pack  cartoons  lens  big-picture  ergodic  nibble  zooming  trees  fedja  reflection  retention  meta:research  wisdom  skeleton  practice  prioritizing  concrete  s:***  info-dynamics  knowledge  studying  the-trenches  chart  expert-experience  quixotic  elegance 
june 2016 by nhaliday