nhaliday + oly   214

The Future of Mathematics? [video] | Hacker News
Kevin Buzzard (the Lean guy)

- general reflection on proof assistants/theorem provers
- Thomas Hales's Formal Abstracts project, etc
- thinks that, of the available theorem provers, Lean is "[the only one currently available that may be capable of formalizing all of mathematics eventually]" (goes into more detail right at the end, eg, quotient types)
hn  commentary  discussion  video  talks  presentation  math  formal-methods  expert-experience  msr  frontier  state-of-art  proofs  rigor  education  higher-ed  optimism  prediction  lens  search  meta:research  speculation  exocortex  skunkworks  automation  research  math.NT  big-surf  software  parsimony  cost-benefit  intricacy  correctness  programming  pls  python  functional  haskell  heavyweights  research-program  review  reflection  multi  pdf  slides  oly  experiment  span-cover  git  vcs  teaching  impetus  academia  composition-decomposition  coupling-cohesion  database  trust  types  plt  lifts-projections  induction  critique  beauty  truth  elegance  aesthetics 
5 weeks ago by nhaliday
Unhappy Go Lucky!
- regularly publishes unofficial editorials for AtCoder
- also seems like an otaku >_>
oly  oly-programming  blog  stream  algorithms  problem-solving  accretion  puzzles  techtariat  japan  asia  quixotic  yoga 
8 weeks ago by nhaliday
[Tutorial] A way to Practice Competitive Programming : From Rating 1000 to 2400+ - Codeforces
this guy really didn't take that long to reach red..., as of today he's done 20 contests in 2y to my 44 contests in 7y (w/ a long break)...>_>

tho he has 3 times as many submissions as me. maybe he does a lot of virtual rounds?

some snippets from the PDF guide linked:
To be rating 1900, the following skills are needed:
- You know and can use major algorithms like these: brute force, DP, DFS, BFS, Dijkstra, Binary Indexed Tree, nCr/nPr, mod inverse, bitmasks, binary search
- You can code fast (for example, 5 minutes for R1100 problems, 10 minutes for R1400 problems)
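As one concrete instance of the algorithms listed above, here is a minimal sketch of nCr modulo a prime, using Fermat's little theorem for the modular inverse (the modulus 10^9 + 7 is an assumption; contests vary, and the helper names are illustrative):

```python
MOD = 10**9 + 7  # a common contest prime (assumption: the problem uses this modulus)

def mod_inverse(a: int, p: int = MOD) -> int:
    # Fermat's little theorem: a^(p-2) = a^(-1) (mod p) for prime p, p not dividing a
    return pow(a, p - 2, p)

def ncr(n: int, r: int, p: int = MOD) -> int:
    # compute C(n, r) mod p without precomputed factorial tables
    if r < 0 or r > n:
        return 0
    num = den = 1
    for i in range(r):
        num = num * (n - i) % p
        den = den * (i + 1) % p
    return num * mod_inverse(den, p) % p
```

For repeated queries, one would typically precompute factorials and inverse factorials instead, but the one-off version above is enough to show the idea.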

If you are not good at fast-coding and fast-debugging, you should solve AtCoder problems. Actually, and statistically, many Japanese competitors are relatively good at fast-coding while not so good at solving difficult problems. I think that's because of AtCoder.

I recommend solving problems C and D in AtCoder Beginner Contest. On average, if you can solve problem C of AtCoder Beginner Contest within 10 minutes and problem D within 20 minutes, you are Div1 in FastCodingForces :)


Interestingly, typical problems are concentrated in Div2-only rounds. If you are not good at Div2-only rounds, it is likely that you are not good at using typical algorithms, especially the 10 algorithms listed above.

If you can use typical algorithms but are not good at solving problems above R1500 in Codeforces, you should begin TopCoder. This type of practice is effective for people who are good at Div.2-only rounds but not good at Div.1+Div.2 combined or separated rounds.

Sometimes, especially in Div1+Div2 rounds, some problems need mathematical concepts or thinking. Since there are a lot of problems which use them (and are also light on implementation!) in TopCoder, you should solve TopCoder problems.

I recommend solving the Div1Easy of the most recent 100 SRMs. But some problems are really difficult (e.g. even red-ranked coders could not solve them), so before you attempt one, you should check what percentage of people solved it. You can use https://competitiveprogramming.info/ to find this information.

To be rating 2200, the following skills are needed:
- You know and can use the 10 algorithms listed above, plus segment trees (including lazy propagation)
- You can solve problems very fast: for example, 5 mins for R1100, 10 mins for R1500, 15 mins for R1800, 40 mins for R2000
- You have decent skills for mathematical thinking and for considering problems
- You have a strong mentality: you can keep thinking about a solution for more than an hour, and you don't give up even if you are below average in Div1 in the middle of a contest

This is only my way to practice, but I did many virtual contests when my rating was 2000. Here, a virtual contest does not mean "Virtual Participation" on Codeforces. It means choosing 4 or 5 problems whose difficulty is near your rating (for example, if your rating is 2000, choose R2000 problems on Codeforces) and solving them within 2 hours. You can use https://vjudge.net/. On this website, you can make virtual contests from problems on many online judges. (e.g. AtCoder, Codeforces, Hackerrank, Codechef, POJ, ...)

If you cannot solve a problem within the virtual contest and cannot find the solution afterwards, you should read the editorial. Google it. (e.g. if you want the editorial of Codeforces Round #556 (Div. 1), search "Codeforces Round #556 editorial" on Google.) There is one more important thing for gaining rating in Codeforces: to solve problems fast, you should equip yourself with a coding library (or template code). For example, I think that equipping segment tree libraries, lazy segment tree libraries, a modint library, an FFT library, a geometry library, etc. is very effective.
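As an illustration of what such template code might look like (a hypothetical minimal sketch, not the guide author's actual library), here is a point-update/range-sum segment tree in its iterative form:

```python
class SegmentTree:
    """Point update / range-sum query in O(log n); a library-template sketch."""

    def __init__(self, data):
        self.n = len(data)
        self.tree = [0] * (2 * self.n)
        # leaves live at indices n..2n-1; internal nodes are built bottom-up
        for i, v in enumerate(data):
            self.tree[self.n + i] = v
        for i in range(self.n - 1, 0, -1):
            self.tree[i] = self.tree[2 * i] + self.tree[2 * i + 1]

    def update(self, i, value):
        # set data[i] = value, then fix ancestors on the path to the root
        i += self.n
        self.tree[i] = value
        while i > 1:
            i //= 2
            self.tree[i] = self.tree[2 * i] + self.tree[2 * i + 1]

    def query(self, lo, hi):
        # sum over the half-open interval [lo, hi)
        res = 0
        lo += self.n
        hi += self.n
        while lo < hi:
            if lo & 1:
                res += self.tree[lo]
                lo += 1
            if hi & 1:
                hi -= 1
                res += self.tree[hi]
            lo //= 2
            hi //= 2
        return res
```

A lazy-propagation version for range updates is the natural next step, but it roughly doubles the code.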

2200 to 2400:
Ratings 2200 and 2400 are actually very different ...

To be rating 2400, the following skills are needed:
- You should have the skills stated in the previous section (rating 2200)
- You should be able to solve difficult problems that are solved by fewer than 100 people in Div1 contests


At first, there are a lot of educational problems in AtCoder. I recommend solving problems E and F (especially the 700-900 point problems) of AtCoder Regular Contest, especially ARC058-ARC090. Old AtCoder Regular Contests are balanced between "considering" and "typical" problems, but sadly, AtCoder Grand Contest and recent AtCoder Regular Contest problems are too biased toward "considering", I think, so I don't recommend them if your goal is to gain rating in Codeforces. (Though if you want to gain a rating of more than 2600, you should solve problems from AtCoder Grand Contest.)

For me, actually, after solving AtCoder Regular Contests, my average performance in CF virtual contests increased from 2100 to 2300 (I could not reach 2400 because start was early)

If you cannot solve a problem, I recommend giving up and reading the editorial on the following schedule:

Point value:        600     700     800     900     1000+
CF rating:          R2000   R2200   R2400   R2600   R2800
Time to editorial:  40 min  50 min  60 min  70 min  80 min

If you solve AtCoder educational problems, your competitive programming skills will increase. But there is one more problem: without practical skills, your rating won't increase. So, you should do 50+ virtual participations (especially Div.1) in Codeforces. In virtual participation, you can learn how to compete as a purple/orange-ranked coder (e.g. strategy) and how to use the skills you learned in AtCoder in Codeforces contests. I strongly recommend reading the editorials of all problems except ones that are too difficult (e.g. fewer than 30 people solved it during the contest) after the virtual contest. I also recommend writing reflections about strategy, lessons, and improvements in a notebook after reading the editorials after each contest/virtual.

In addition, about once a week, I recommend making time to think about a much more difficult problem (e.g. R2800 in Codeforces) for a couple of hours. If you cannot reach the solution after thinking for a couple of hours, I recommend reading the editorial, because you can learn a lot. Solving high-level problems gives you a chance to gain over 100 rating in a single contest, but it can also help you solve easier problems faster.
oly  oly-programming  problem-solving  learning  practice  accretion  strategy  hmm  pdf  guide  reflection  advice  wire-guided  marginal  stylized-facts  speed  time  cost-benefit  tools  multi  sleuthin  review  comparison  puzzles  contest  aggregator  recommendations  objektbuch  time-use  growth  studying  🖥  👳  yoga 
august 2019 by nhaliday
Anti-hash test. - Codeforces
- Thue-Morse sequence
- nice paper: http://www.mii.lt/olympiads_in_informatics/pdf/INFOL119.pdf
In general, polynomial string hashing is a useful technique in construction of efficient string algorithms. One simply needs to remember to carefully select the modulus M and the variable of the polynomial p depending on the application. A good rule of thumb is to pick both values as prime numbers with M as large as possible so that no integer overflow occurs and p being at least the size of the alphabet.
2.2. Upper Bound on M
[stuff about 32- and 64-bit integers]
2.3. Lower Bound on M
On the other side M is bounded due to the well-known birthday paradox: if we consider a collection of m keys with m ≥ 1.2√M then the chance of a collision occurring within this collection is at least 50% (assuming that the distribution of fingerprints is close to uniform on the set of all strings). Thus if the birthday paradox applies one needs to choose M = ω(m^2) to have a fair chance of avoiding a collision. However, one should note that the birthday paradox does not always apply. As a benchmark, consider the following two problems.

I generally prefer to use Schwartz-Zippel to reason about collision probabilities w/ this kind of thing, eg, https://people.eecs.berkeley.edu/~sinclair/cs271/n3.pdf.

A good way to get more accurate results: just use multiple primes and the Chinese remainder theorem to get as large an M as you need w/o going beyond 64-bit arithmetic.
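A hedged sketch of that multiple-primes idea: hash with two independent (base, modulus) pairs, so a collision must occur modulo both primes simultaneously, which by CRT behaves like a single modulus of roughly 10^18 without leaving 64-bit-range arithmetic. The specific primes and bases below are illustrative choices, not prescriptions:

```python
# assumption: any two large primes work; bases should exceed the alphabet size
M1, M2 = 10**9 + 7, 10**9 + 9
B1, B2 = 131, 137

def poly_hash(s: str) -> tuple:
    # Horner-style evaluation of the polynomial hash under both moduli
    h1 = h2 = 0
    for ch in s:
        c = ord(ch)
        h1 = (h1 * B1 + c) % M1
        h2 = (h2 * B2 + c) % M2
    return (h1, h2)
```

Against adversarial inputs (like the Thue-Morse anti-hash tests above), randomizing the bases per run is the usual extra precaution.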

more on this: https://codeforces.com/blog/entry/60442
oly  oly-programming  gotchas  howto  hashing  algorithms  strings  random  best-practices  counterexample  multi  pdf  papers  nibble  examples  fields  polynomials  lecture-notes  yoga  probability  estimate  magnitude  hacker  adversarial  CAS  lattice  discrete 
august 2019 by nhaliday
The 'science' of training in competitive programming - Codeforces
"Hard problems" is subjective. A good rule of thumb for learning problem solving (at least according to me) is that your problem selection is good if you fail to solve roughly 50% of problems you attempt. Anything in [20%,80%] should still be fine, although many people have problems staying motivated if they fail too often. Read solutions for problems you fail to solve.

(There is some actual math behind this. Hopefully one day I'll have the time to write it down.)
- misof in a comment
I don't believe in things like "either you solve it in 30 mins to a few hours, or you never solve it at all". There are some magic-at-first-glance algorithms like polynomial hashing, interval trees, or FFT (which is magic even at tenth glance :P), but there are not many of them, and the vast majority of algorithms can be invented on one's own (DP, for example). In high school I used to solve many problems from IMO and PMO, and when I didn't solve a problem I tried it again later; I have solved some problems on the third or so attempt. Though, if we are restricting ourselves to beginners, I think it still holds true, but it would be better to read solutions after some time, because there are so many other things we can learn; better not to get stuck on one particular problem when there are hundreds of other important concepts to be learnt.
oly  oly-programming  problem-solving  learning  practice  accretion  strategy  marginal  wire-guided  stylized-facts  hmm  advice  tactics  time  time-use  cost-benefit  growth  studying  🖥  👳 
august 2019 by nhaliday
How to come up with the solutions: techniques - Codeforces
Technique 1: "Total Recall"
Technique 2: "From Specific to General"
Let's say that you've found the solution to a problem (hurray!). Consider some particular case of the problem; of course, your algorithm/solution applies to it. That is, solving the general problem also solves all of its specific cases. So, going the other way, try solving one (or several) specific cases first and then try to generalize them to a solution of the main problem.
Technique 3: "Bold Hypothesis"
Technique 4: "To solve a problem, you should think like a problem"
Technique 5: "Think together"
Technique 6: "Pick a Method"
Technique 7: "Print Out and Look"
Technique 8: "Google"
oly  oly-programming  problem-solving  thinking  expert-experience  retention  metabuch  visual-understanding  zooming  local-global  collaboration  tactics  debugging  bare-hands  let-me-see  advice 
august 2019 by nhaliday
mypy - Optional Static Typing for Python
developed by Dropbox in Python; past contributors include Greg Price and Reid Barton

developed by Facebook in OCaml, seems less complete atm
python  types  programming  pls  devtools  compilers  formal-methods  dropbox  multi  facebook  ocaml-sml  libraries  the-prices  oly  static-dynamic 
july 2019 by nhaliday
What kind of problems give an MLE (memory limit exceeded)? - Quora
To be fair, memory limit exceeded is a pretty rare occurrence. Problems aren't usually set such that you have to optimize for memory.
q-n-a  qra  programming  oly  oly-programming  space-complexity  stylized-facts  computer-memory  prioritizing 
july 2019 by nhaliday
About - Project Euler
I've written my program but should it take days to get to the answer?
Absolutely not! Each problem has been designed according to a "one-minute rule", which means that although it may take several hours to design a successful algorithm with more difficult problems, an efficient implementation will allow a solution to be obtained on a modestly powered computer in less than one minute.
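To illustrate the one-minute rule on the site's well-known first problem (sum of the multiples of 3 or 5 below N), here is a closed-form O(1) solution, verified against the O(N) loop; the function names are my own:

```python
def sum_multiples_below(k: int, n: int) -> int:
    # k + 2k + ... + mk where m = number of multiples of k below n
    m = (n - 1) // k
    return k * m * (m + 1) // 2

def euler1(n: int) -> int:
    # inclusion-exclusion: multiples of 3, plus of 5, minus of 15 (counted twice)
    return (sum_multiples_below(3, n) + sum_multiples_below(5, n)
            - sum_multiples_below(15, n))
```

The loop version already satisfies the one-minute rule for N = 1000, but the closed form still answers instantly for N around 10^18, which is the spirit of the rule.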
math  rec-math  math.NT  math.CO  programming  oly  database  community  forum  stream  problem-solving  accretion  puzzles  contest  🖥  👳 
june 2019 by nhaliday
haskell - Using -with-rtsopts ghc option as a pragma - Stack Overflow
When you specify that pragma at the top of the file, this is instead what happens (with ghc --make algo.hs):

ghc -c algo.hs -rtsopts -with-rtsopts=-K32M
ghc -o algo -package somepackage algo.o
The OPTIONS_GHC pragma tells the compiler about options to add when compiling that specific module into an object file. Because -rtsopts is a linker option (it tells GHC to link in a different set of command-line handling stuff), you can't specify it when compiling an object file. You must specify it when linking, and such options cannot be specified in a module header.
q-n-a  stackex  programming  haskell  functional  gotchas  hmm  oly  space-complexity  build-packaging 
may 2019 by nhaliday
Burrito: Rethinking the Electronic Lab Notebook
Seems very well-suited for ML experiments (if you can get it to work); also the nilfs aspect is cool and basically implements exactly one of my project ideas (a mini-VCS for competitive programming). Unfortunately, the gnarly installation instructions specify running it on a Linux VM: https://github.com/pgbovine/burrito/blob/master/INSTALL. Linux is a hard requirement due to nilfs.
techtariat  project  tools  devtools  linux  programming  yak-shaving  integration-extension  nitty-gritty  workflow  exocortex  scholar  software  python  app  desktop  notetaking  state  machine-learning  data-science  nibble  sci-comp  oly  vcs  multi  repo  paste  homepage  research 
may 2019 by nhaliday
Teach debugging
A friend of mine and I couldn't understand why some people were having so much trouble; the material seemed like common sense. The Feynman Method was the only tool we needed.

1. Write down the problem
2. Think real hard
3. Write down the solution

The Feynman Method failed us on the last project: the design of a divider, a real-world-scale project an order of magnitude more complex than anything we'd been asked to tackle before. On the day he assigned the project, the professor exhorted us to begin early. Over the next few weeks, we heard rumors that some of our classmates worked day and night without making progress.


And then, just after midnight, a number of our newfound buddies from dinner reported successes. Half of those who started from scratch had working designs. Others were despondent, because their design was still broken in some subtle, non-obvious way. As I talked with one of those students, I began poring over his design. And after a few minutes, I realized that the Feynman method wasn't the only way forward: it should be possible to systematically apply a mechanical technique repeatedly to find the source of our problems. Beneath all the abstractions, our projects consisted purely of NAND gates (woe to those who dug around our toolbox enough to uncover dynamic logic), which outputs a 0 only when both inputs are 1. If the correct output is 0, both inputs should be 1. The input that isn't is in error, an error that is, itself, the output of a NAND gate where at least one input is 0 when it should be 1. We applied this method recursively, finding the source of all the problems in both our designs in under half an hour.

How To Debug Any Program: https://www.blinddata.com/blog/how-to-debug-any-program-9
May 8th 2019 by Saketh Are

Start by Questioning Everything


When a program is behaving unexpectedly, our attention tends to be drawn first to the most complex portions of the code. However, mistakes can come in all forms. I've personally been guilty of rushing to debug sophisticated portions of my code when the real bug was that I forgot to read in the input file. In the following section, we'll discuss how to reliably focus our attention on the portions of the program that need correction.

Then Question as Little as Possible

Suppose that we have a program and some input on which its behavior doesn’t match our expectations. The goal of debugging is to narrow our focus to as small a section of the program as possible. Once our area of interest is small enough, the value of the incorrect output that is being produced will typically tell us exactly what the bug is.

In order to catch the point at which our program diverges from expected behavior, we must inspect the intermediate state of the program. Suppose that we select some point during execution of the program and print out all values in memory. We can inspect the results manually and decide whether they match our expectations. If they don't, we know for a fact that we can focus on the first half of the program. It either contains a bug, or our expectations of what it should produce were misguided. If the intermediate state does match our expectations, we can focus on the second half of the program. It either contains a bug, or our understanding of what input it expects was incorrect.

Question Things Efficiently

For practical purposes, inspecting intermediate state usually doesn't involve a complete memory dump. We'll typically print a small number of variables and check whether they have the properties we expect of them. Verifying the behavior of a section of code involves:

1. Before it runs, inspecting all values in memory that may influence its behavior.
2. Reasoning about the expected behavior of the code.
3. After it runs, inspecting all values in memory that may be modified by the code.

Reasoning about expected behavior is typically the easiest step to perform even in the case of highly complex programs. Practically speaking, it's time-consuming and mentally strenuous to write debug output into your program and to read and decipher the resulting values. It is therefore advantageous to structure your code into functions and sections that pass a relatively small amount of information between themselves, minimizing the number of values you need to inspect.


Finding the Right Question to Ask

We’ve assumed so far that we have available a test case on which our program behaves unexpectedly. Sometimes, getting to that point can be half the battle. There are a few different approaches to finding a test case on which our program fails. It is reasonable to attempt them in the following order:

1. Verify correctness on the sample inputs.
2. Test additional small cases generated by hand.
3. Adversarially construct corner cases by hand.
4. Re-read the problem to verify understanding of input constraints.
5. Design large cases by hand and write a program to construct them.
6. Write a generator to construct large random cases and a brute force oracle to verify outputs.
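Step 6 above might be sketched like this: a hypothetical harness where Kadane's max-subarray algorithm stands in for "your program" and an O(n^2) scan is the brute-force oracle. The generator keeps cases small so a failing case, printed by the assertion, is easy to debug by hand:

```python
import random

def fast_max_subarray(a):
    # Kadane's algorithm, O(n): the "fast solution" under test
    best = cur = a[0]
    for x in a[1:]:
        cur = max(x, cur + x)
        best = max(best, cur)
    return best

def brute_max_subarray(a):
    # O(n^2) oracle: try every nonempty subarray
    return max(sum(a[i:j]) for i in range(len(a))
               for j in range(i + 1, len(a) + 1))

def stress(trials=200, max_len=8, lo=-10, hi=10, seed=0):
    rng = random.Random(seed)
    for _ in range(trials):
        a = [rng.randint(lo, hi) for _ in range(rng.randint(1, max_len))]
        # on mismatch, the failing input is shown in the assertion message
        assert fast_max_subarray(a) == brute_max_subarray(a), a
```

Small random cases plus an oracle tend to surface logic bugs quickly; large generated cases are then mainly for catching performance and overflow issues.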
techtariat  dan-luu  engineering  programming  debugging  IEEE  reflection  stories  education  higher-ed  checklists  iteration-recursion  divide-and-conquer  thinking  ground-up  nitty-gritty  giants  feynman  error  input-output  structure  composition-decomposition  abstraction  systematic-ad-hoc  reduction  teaching  state  correctness  multi  oly  oly-programming  metabuch  neurons  problem-solving  wire-guided  marginal  strategy  tactics  methodology  simplification-normalization 
may 2019 by nhaliday
Vladimir Novakovski's answer to What financial advice would you give to a 21-year-old? - Quora
Learn economics and see that investment and consumption levels (as percentages) depend only marginally on age and existing net worth and mostly on your risk preferences and utility function.
qra  q-n-a  oly  advice  reflection  personal-finance  ORFE  outcome-risk  investing  time-preference  age-generation  dependence-independence  economics 
february 2019 by nhaliday
gn.general topology - Pair of curves joining opposite corners of a square must intersect---proof? - MathOverflow
In his 'Ordinary Differential Equations' (sec. 1.2) V.I. Arnold says "... every pair of curves in the square joining different pairs of opposite corners must intersect".

This is obvious geometrically but I was wondering how one could go about proving this rigorously. I have thought of a proof using Brouwer's Fixed Point Theorem which I describe below. I would greatly appreciate the group's comments on whether this proof is right and if a simpler proof is possible.


Since the full Jordan curve theorem is quite subtle, it might be worth pointing out that theorem in question reduces to the Jordan curve theorem for polygons, which is easier.

Suppose on the contrary that the curves A, B joining opposite corners do not meet. Since A, B are closed sets, their minimum distance apart is some ε > 0. By compactness, each of A, B can be partitioned into finitely many arcs, each of which lies in a disk of diameter < ε/3. Then, by a homotopy inside each disk we can replace A, B by polygonal paths A′, B′ that join the opposite corners of the square and are still disjoint.

Also, we can replace A′, B′ by simple polygonal paths A″, B″ by omitting loops. Now we can close A″ to a polygon, and B″ goes from its "inside" to "outside" without meeting it, contrary to the Jordan curve theorem for polygons.

- John Stillwell
nibble  q-n-a  overflow  math  geometry  topology  tidbits  intricacy  intersection  proofs  gotchas  oly  mathtariat  fixed-point  math.AT  manifolds  intersection-connectedness 
october 2017 by nhaliday
Best Topology Olympiad ***EVER*** - Affine Mess - Quora
Most people take courses in topology, algebraic topology, knot theory, differential topology and what have you without once doing anything with a finite topological space. There may have been some quirky questions about such spaces early on in a point-set topology course, but most of us come out of these courses thinking that finite topological spaces are either discrete or only useful as an exotic counterexample to some standard separation property. The mere idea of calculating the fundamental group for a 4-point space seems ludicrous.

Only it’s not. This is a genuine question, not a joke, and I find it both hilarious and super educational. DO IT!!
nibble  qra  announcement  math  geometry  topology  puzzles  rec-math  oly  links  math.AT  ground-up  finiteness  math.GN 
october 2017 by nhaliday
What is the best way to parse command-line arguments with Python? - Quora
- Anders Kaseorg

Use the standard optparse library.

It’s important to uphold your users’ expectation that your utility will parse arguments in the same way as every other UNIX utility. If you roll your own parsing code, you’ll almost certainly break that expectation in obvious or subtle ways.

Although the documentation claims that optparse has been deprecated in favor of argparse, which supports more features like optional option arguments and configurable prefix characters, I can’t recommend argparse until it’s been fixed to parse required option arguments in the standard UNIX way. Currently, argparse uses an unexpected heuristic which may lead to subtle bugs in other scripts that call your program.

consider also click (which uses the optparse behavior)
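A minimal optparse usage sketch along the lines Kaseorg recommends (the option names here are illustrative; the argument list passed to parse_args stands in for sys.argv[1:]):

```python
from optparse import OptionParser

parser = OptionParser(usage="usage: %prog [options] FILE")
parser.add_option("-n", "--count", type="int", default=1,
                  help="number of iterations (default: %default)")
parser.add_option("-v", "--verbose", action="store_true", default=False,
                  help="print extra output")

# parse an explicit argv-style list; omit the argument to use sys.argv[1:]
(options, args) = parser.parse_args(["-n", "3", "input.txt"])
```

Note that optparse correctly handles standard UNIX conventions like `--count=3`, bundled short options, and `--` to end option parsing, which is the point of not rolling your own.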
q-n-a  qra  oly  best-practices  programming  terminal  unix  python  libraries  gotchas  howto  pls  yak-shaving  integration-extension  protocol-metadata 
august 2017 by nhaliday
Links 6/15: URLing Toward Freedom | Slate Star Codex
Why do some schools produce a disproportionate share of math competition winners? May not just be student characteristics.

My post The Control Group Is Out Of Control, as well as some of the Less Wrong posts that inspired it, has gotten cited in a recent preprint article, A Skeptical Eye On Psi, on what psi can teach us about the replication crisis. One of the authors is someone I previously yelled at, so I like to think all of that yelling is having a positive effect.

A study from Sweden (it's always Sweden) does really good work examining the effect of education on IQ. It takes an increase in mandatory Swedish schooling length which was rolled out randomly at different times in different districts, and finds that the districts where people got more schooling have higher IQ; in particular, an extra year of education increases permanent IQ by 0.75 points. I was previously ambivalent about this, but this is a really strong study and I guess I have to endorse it now (though it's hard to say how g-loaded it is or how linear it is). Also of note: the extra schooling permanently harmed emotional control ability by 0.5 points on a scale identical to IQ (mean 100, SD 15). This is of course the opposite of past studies, which suggest that education does not improve IQ but does help non-cognitive factors. But this study was an extra year tacked on to the end of education, whereas earlier ones have been measuring extra education tacked on to the beginning, or just making the whole educational process more efficient. Still weird, but again, this is a good experiment (EDIT: This might not be on g)
ratty  yvain  ssc  links  commentary  study  summary  economics  education  oly  math  success  tails  endo-exo  roots  causation  regularizer  environmental-effects  psychology  social-psych  replication  social-science  europe  nordic  iq  cog-psych  intervention  effect-size  marginal  tradeoffs  cost-benefit  large-factor  multi  personality  serene  growth  stress  psych-architecture  emotion  endogenous-exogenous 
march 2017 by nhaliday
ExtraTricky - A Rant About AlphaGo Discussions
The most important idea to be able to analyze endgames is the idea of adding two games together. The sum of two games is another game where on your turn you pick one of the two games to play in. So you could imagine a game of "chess plus checkers" where each turn is either a turn on the chess board or a turn on the checkers board. Say your opponent makes a move on the chess board. Now you have a choice: do you want to respond to that move also on the chess board, or is it better to take a turn on the checkers board and accept the potential loss of allowing two consecutive chess moves?

If you were to actually add a game of chess and a game of checkers, you'd have to also determine a way to say who wins. I'm going to conveniently avoid talking about that for general games, because for Go positions the answer is simple: add up the points from each game. So you could imagine a game of "Go plus Go" where you're playing simultaneously on two boards, and on your turn you pick one of the boards to play on. At the end of the game, instead of counting territory from just one board, you count it from both.

As it turns out, when a Go game reaches the final stages, the board is typically partitioned into small areas that don't interact with each other. In these cases, even though these sections exist on the same board, you can think of them being entirely separate games being added together. Once we have that, there's still the question: how do you determine which section to play in?
extratricky  oly  games  deepgoog  thinking  things  analysis  nibble  org:bleg 
february 2017 by nhaliday
Main Page - Competitive Programming Algorithms: E-Maxx Algorithms in English
original russian version: http://e-maxx.ru/algo/

some notable stuff:
- O(N) factorization sieve
- discrete logarithm
- factorial N! (mod P) in O(P log N)
- flow algorithms
- enumerating submasks
- bridges, articulation points
- Ukkonen algorithm
- sqrt(N) trick, eg, for range mode query
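A sketch of the O(N) factorization sieve idea from the list above: the standard linear sieve, which records each number's smallest prime factor (SPF) so that any n ≤ N can then be factorized in O(log n) divisions. This is a common formulation and may differ in detail from e-maxx's code:

```python
def linear_sieve(limit):
    # spf[n] = smallest prime factor of n; each composite is marked exactly once
    spf = [0] * (limit + 1)
    primes = []
    for i in range(2, limit + 1):
        if spf[i] == 0:
            spf[i] = i
            primes.append(i)
        for p in primes:
            # only mark i*p while p <= spf[i]; this keeps the sieve linear
            if p > spf[i] or i * p > limit:
                break
            spf[i * p] = p
    return spf, primes

def factorize(n, spf):
    # peel off smallest prime factors: O(log n) per query after the sieve
    out = []
    while n > 1:
        out.append(spf[n])
        n //= spf[n]
    return out
```

The key invariant is that every composite i*p is marked via its smallest prime factor p, hence each cell is written once and the total work is O(N).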
explanation  programming  algorithms  russia  foreign-lang  oly  oly-programming  problem-solving  accretion  math.NT  graphs  graph-theory  optimization  data-structures  yoga  tidbits  multi  anglo  language  arrows  strings 
february 2017 by nhaliday
Could I be using proof by contradiction too much? - Mathematics Stack Exchange
One general reason to avoid proof by contradiction is the following. When you prove something by contradiction, all you learn is that the statement you wanted to prove is true. When you prove something directly, you learn every intermediate implication you had to prove along the way.
q-n-a  overflow  math  proofs  contradiction  oly  mathtariat  nibble 
february 2017 by nhaliday
general topology - What should be the intuition when working with compactness? - Mathematics Stack Exchange

The situation with compactness is sort of like the above. It turns out that finiteness, which you think of as one concept (in the same way that you think of "Foo" as one concept above), is really two concepts: discreteness and compactness. You've never seen these concepts separated before, though. When people say that compactness is like finiteness, they mean that compactness captures part of what it means to be finite in the same way that shortness captures part of what it means to be Foo.


As many have said, compactness is sort of a topological generalization of finiteness. And this is true in a deep sense, because topology deals with open sets, and this means that we often "care about how something behaves on an open set", and for compact spaces this means that there are only finitely many possible behaviors.


Compactness does for continuous functions what finiteness does for functions in general.

If a set A is finite then every function f:A→R has a max and a min, and every function f:A→R^n is bounded. If A is compact, then every continuous function from A to R has a max and a min and every continuous function from A to R^n is bounded.

If A is finite then every sequence of members of A has a subsequence that is eventually constant, and "eventually constant" is the only kind of convergence you can talk about without talking about a topology on the set. If A is compact, then every sequence of members of A has a convergent subsequence.
q-n-a  overflow  math  topology  math.GN  concept  finiteness  atoms  intuition  oly  mathtariat  multi  discrete  gowers  motivation  synthesis  hi-order-bits  soft-question  limits  things  nibble  definition  convergence  abstraction  span-cover 
january 2017 by nhaliday
5/8 bound in group theory - MathOverflow
very elegant proof (remember sum d_i^2 = |G| and # irreducible rep.s = # conjugacy classes)
q-n-a  overflow  math  tidbits  proofs  math.RT  math.GR  oly  commutativity  pigeonhole-markov  nibble  shift 
january 2017 by nhaliday
stats  status  stirling  stock-flow  stories  strategy  stream  street-fighting  stress  strings  structure  students  study  studying  stylized-facts  subculture  sublinear  success  summary  sv  symmetry  syntax  synthesis  systematic-ad-hoc  systems  tactics  tails  talks  tcs  tcstariat  teaching  tech  tech-infrastructure  technical-writing  technology  techtariat  tensors  terminal  tetlock  texas  the-prices  the-trenches  the-world-is-just-atoms  thermo  things  thinking  thurston  tidbits  time  time-preference  time-series  time-use  todo  toolkit  tools  top-n  topology  toxoplasmosis  track-record  tracker  tradeoffs  transportation  trees  trends  tricki  tricks  trivia  trump  trust  truth  turing  tutorial  types  ui  uniqueness  unit  unix  unsupervised  usaco-ioi  vague  vcs  video  visual-understanding  visualization  west-hunter  wiki  wire-guided  wisdom  workflow  working-stiff  world  wormholes  worrydream  writing  yak-shaving  yoga  yvain  zooming  👳  🔬  🖥  🤖 
