nhaliday + motivation   70

What's the expected level of paper for top conferences in Computer Science - Academia Stack Exchange
Top. The top level.

My experience on program committees for STOC, FOCS, ITCS, SODA, SOCG, etc., is that there are FAR more submissions of publishable quality than can be accepted into the conference. By "publishable quality" I mean a well-written presentation of a novel, interesting, and non-trivial result within the scope of the conference.


There are several questions that come up over and over in the FOCS/STOC review cycle:

- How surprising / novel / elegant / interesting is the result?
- How surprising / novel / elegant / interesting / general are the techniques?
- How technically difficult is the result? Ironically, FOCS and STOC committees have a reputation for ignoring the distinction between trivial (easy to derive from scratch) and nondeterministically trivial (easy to understand after the fact).
- What is the expected impact of this result? Is this paper going to change the way people do theoretical computer science over the next five years?
- Is the result of general interest to the theoretical computer science community? Or is it only of interest to a narrow subcommunity? In particular, if the topic is outside the STOC/FOCS mainstream—say, for example, computational topology—does the paper do a good job of explaining and motivating the results to a typical STOC/FOCS audience?
nibble  q-n-a  overflow  academia  tcs  cs  meta:research  publishing  scholar  lens  properties  cost-benefit  analysis  impetus  increase-decrease  soft-question  motivation  proofs  search  complexity  analogy  problem-solving  elegance  synthesis  hi-order-bits  novelty  discovery 
june 2019 by nhaliday
probability - Why does a 95% Confidence Interval (CI) not imply a 95% chance of containing the mean? - Cross Validated
The confidence interval is the answer to the request: "Give me an interval that will bracket the true value of the parameter in 100p% of the instances of an experiment that is repeated a large number of times." The credible interval is an answer to the request: "Give me an interval that brackets the true value with probability p given the particular sample I've actually observed." To be able to answer the latter request, we must first adopt either (a) a new concept of the data generating process or (b) a different concept of the definition of probability itself.
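A quick Monte Carlo sketch (my illustration, not from the answer) of the frequentist reading: the 95% refers to the long-run fraction of intervals that bracket the true mean across repeated experiments, not to any one computed interval.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n, trials = 10.0, 2.0, 50, 10_000
z = 1.96  # two-sided 95% normal quantile (sigma treated as known, for simplicity)

covered = 0
for _ in range(trials):
    sample = rng.normal(mu, sigma, n)
    half_width = z * sigma / np.sqrt(n)
    m = sample.mean()
    covered += (m - half_width) <= mu <= (m + half_width)

coverage = covered / trials  # long-run coverage, close to 0.95
```

Each individual interval either contains mu or it does not; the 95% lives in the repetition, which is exactly the distinction the quote draws.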


PS. Note that my question is not about the ban itself; it is about the suggested approach. I am not asking about frequentist vs. Bayesian inference either. The Editorial is pretty negative about Bayesian methods too; so it is essentially about using statistics vs. not using statistics at all.


q-n-a  overflow  nibble  stats  data-science  science  methodology  concept  confidence  conceptual-vocab  confusion  explanation  thinking  hypothesis-testing  jargon  multi  meta:science  best-practices  error  discussion  bayesian  frequentist  hmm  publishing  intricacy  wut  comparison  motivation  clarity  examples  robust  metabuch  🔬  info-dynamics  reference  grokkability-clarity 
february 2017 by nhaliday
general topology - What should be the intuition when working with compactness? - Mathematics Stack Exchange

The situation with compactness is sort of like the above. It turns out that finiteness, which you think of as one concept (in the same way that you think of "Foo" as one concept above), is really two concepts: discreteness and compactness. You've never seen these concepts separated before, though. When people say that compactness is like finiteness, they mean that compactness captures part of what it means to be finite in the same way that shortness captures part of what it means to be Foo.


As many have said, compactness is sort of a topological generalization of finiteness. And this is true in a deep sense, because topology deals with open sets, and this means that we often "care about how something behaves on an open set", and for compact spaces this means that there are only finitely many possible behaviors.


Compactness does for continuous functions what finiteness does for functions in general.

If a set A is finite then every function f:A→R has a max and a min, and every function f:A→R^n is bounded. If A is compact, then every continuous function from A to R has a max and a min, and every continuous function from A to R^n is bounded.

If A is finite then every sequence of members of A has a subsequence that is eventually constant, and "eventually constant" is the only kind of convergence you can talk about without talking about a topology on the set. If A is compact, then every sequence of members of A has a convergent subsequence.
q-n-a  overflow  math  topology  math.GN  concept  finiteness  atoms  intuition  oly  mathtariat  multi  discrete  gowers  motivation  synthesis  hi-order-bits  soft-question  limits  things  nibble  definition  convergence  abstraction  span-cover 
january 2017 by nhaliday
pr.probability - What is convolution intuitively? - MathOverflow
I remember as a graduate student that Ingrid Daubechies frequently referred to convolution by a bump function as "blurring" - its effect on images is similar to what a short-sighted person experiences when taking off his or her glasses (and, indeed, if one works through the geometric optics, convolution is not a bad first approximation for this effect). I found this to be very helpful, not just for understanding convolution per se, but as a lesson that one should try to use physical intuition to model mathematical concepts whenever one can.

More generally, if one thinks of functions as fuzzy versions of points, then convolution is the fuzzy version of addition (or sometimes multiplication, depending on the context). The probabilistic interpretation is one example of this (where the fuzz is a probability distribution), but one can also have signed, complex-valued, or vector-valued fuzz, of course.
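The probabilistic interpretation is easy to see numerically (a small sketch of my own, not from the answer): the pmf of a sum of independent variables is the convolution of their pmfs, i.e. "fuzzy addition" of two fuzzy points.

```python
import numpy as np

# pmf of one fair die, supported on faces 1..6
die = np.full(6, 1 / 6)

# pmf of the sum of two independent dice = convolution of the pmfs,
# supported on sums 2..12 (11 outcomes)
two_dice = np.convolve(die, die)

p_seven = two_dice[7 - 2]  # index k holds P(sum = k + 2); P(sum = 7) = 6/36
```

Replacing the dice with narrow bump functions recovers the "blurring" picture: convolving with a bump smears each point of a function over a small neighborhood.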
q-n-a  overflow  math  concept  atoms  intuition  motivation  gowers  visual-understanding  aphorism  soft-question  tidbits  👳  mathtariat  cartoons  ground-up  metabuch  analogy  nibble  yoga  neurons  retrofit  optics  concrete  s:*  multiplicative  fourier 
january 2017 by nhaliday
soft question - Why does Fourier analysis of Boolean functions "work"? - Theoretical Computer Science Stack Exchange
Here is my point of view, which I learned from Guy Kindler, though someone more experienced can probably give a better answer: Consider the linear space of functions f: {0,1}^n -> R and consider a linear operator of the form σ_w (for w in {0,1}^n), that maps a function f(x) as above to the function f(x+w). In many of the questions of TCS, there is an underlying need to analyze the effects that such operators have on certain functions.

Now, the point is that the Fourier basis is the basis that diagonalizes all those operators at the same time, which makes the analysis of those operators much simpler. More generally, the Fourier basis diagonalizes the convolution operator, which also underlies many of those questions. Thus, Fourier analysis is likely to be effective whenever one needs to analyze those operators.
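The diagonalization claim can be checked directly on a small cube (my numerical sketch, with the standard characters chi_S(x) = (-1)^{<S,x>}, which are not spelled out in the excerpt): every character is an eigenvector of every shift operator sigma_w, with eigenvalue chi_S(w).

```python
import itertools
import numpy as np

n = 3
cube = list(itertools.product([0, 1], repeat=n))
index = {x: i for i, x in enumerate(cube)}

def chi(S):
    # Fourier character chi_S(x) = (-1)^{<S, x>}
    return np.array([(-1) ** sum(s * xi for s, xi in zip(S, x)) for x in cube], dtype=float)

def shift(f, w):
    # (sigma_w f)(x) = f(x + w), addition coordinatewise mod 2
    return np.array([f[index[tuple((xi + wi) % 2 for xi, wi in zip(x, w))]] for x in cube])

w = (1, 0, 1)
# sigma_w chi_S = chi_S(w) * chi_S for every S, so the Fourier basis
# diagonalizes sigma_w (the same basis works for every w simultaneously)
max_err = max(
    np.abs(shift(chi(S), w) - (-1) ** sum(s * wi for s, wi in zip(S, w)) * chi(S)).max()
    for S in cube
)
```

Since the characters do not depend on w, one basis diagonalizes the whole family of shifts at once, which is the point the answer makes.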
q-n-a  math  tcs  synthesis  boolean-analysis  fourier  👳  tidbits  motivation  intuition  linear-algebra  overflow  hi-order-bits  insight  curiosity  ground-up  arrows  nibble  s:*  elegance  guessing 
december 2016 by nhaliday
Reflections on the recent solution of the cap-set problem I | Gowers's Weblog
As regular readers of this blog will know, I have a strong interest in the question of where mathematical ideas come from, and a strong conviction that they always result from a fairly systematic process — and that the opposite impression, that some ideas are incredible bolts from the blue that require “genius” or “sudden inspiration” to find, is an illusion that results from the way mathematicians present their proofs after they have discovered them.
math  research  academia  gowers  hmm  mathtariat  org:bleg  nibble  big-surf  algebraic-complexity  math.CO  questions  heavyweights  exposition  technical-writing  roots  problem-solving  polynomials  linear-algebra  motivation  guessing 
may 2016 by nhaliday
