nhaliday + algorithms   162

data structures - Why are Red-Black trees so popular? - Computer Science Stack Exchange
- AVL trees have smaller average depth than red-black trees, and thus searching for a value in an AVL tree is consistently faster.
- Red-black trees make fewer structural changes to balance themselves than AVL trees, which could make them potentially faster for insert/delete. I'm saying potentially, because this would depend on the cost of the structural change to the tree, as this will depend a lot on the runtime and implementation (might also be completely different in a functional language when the tree is immutable?)

There are many benchmarks online that compare AVL and Red-black trees, but what struck me is that my professor basically said that usually you'd do one of two things:
- Either you don't really care that much about performance, in which case the 10-20% difference between AVL and Red-black in most cases won't matter at all.
- Or you really care about performance, in which case you'd ditch both AVL and Red-black trees and go with B-trees, which can be tweaked to work much better (or (a,b)-trees; I'm gonna put all of those in one basket.)

--

> For some kinds of binary search trees, including red-black trees but not AVL trees, the "fixes" to the tree can fairly easily be predicted on the way down and performed during a single top-down pass, making the second pass unnecessary. Such insertion algorithms are typically implemented with a loop rather than recursion, and often run slightly faster in practice than their two-pass counterparts.

So a red-black tree insert can be implemented without recursion; on some CPUs recursion is very expensive if you overrun the function call cache (e.g. SPARC, due to its use of register windows).

--

There are some cases where you can't use B-trees at all.

One prominent case is std::map from the C++ STL. The standard requires that insert does not invalidate existing iterators.

...

I also believe that a "single pass tail recursive" implementation is not the reason for red-black tree popularity as a mutable data structure.

First of all, stack depth is irrelevant here, because (given log n height) you would run out of main memory before you run out of stack space. jemalloc is happy with preallocating worst-case depth on the stack.
nibble  q-n-a  overflow  cs  algorithms  tcs  data-structures  functional  orders  trees  cost-benefit  tradeoffs  roots  explanans  impetus  performance  applicability-prereqs  programming  pls  c(pp) 
yesterday by nhaliday
LeetCode - The World's Leading Online Programming Learning Platform
very much targeted toward interview prep
https://www.quora.com/Is-LeetCode-Online-Judges-premium-membership-really-worth-it
This data is especially valuable because you get to know a company's interview style beforehand. For example, most questions that appeared in Facebook interviews have short solutions, typically not more than 30 lines of code. Their interview process focuses on your ability to write clean, concise code. On the other hand, Google-style interviews lean more on the analytical side and are algorithm-heavy, typically with multiple solutions to a question - each with a different runtime complexity.
programming  tech  career  working-stiff  recruiting  interview-prep  algorithms  problem-solving  oly-programming  multi  q-n-a  qra  comparison  stylized-facts  facebook  google  cost-benefit  homo-hetero 
2 days ago by nhaliday
Interview with Donald Knuth | Interview with Donald Knuth | InformIT
Andrew Binstock and Donald Knuth converse on the success of open source, the problem with multicore architecture, the disappointing lack of interest in literate programming, the menace of reusable code, and that urban legend about winning a programming contest with a single compilation.
nibble  interview  giants  expert-experience  programming  cs  software  contrarianism  carmack  oss  prediction  trends  linux  concurrency  desktop  comparison  checking  debugging  stories  engineering  hmm  idk  algorithms  books  debate  flux-stasis  duplication  parsimony  best-practices  writing  documentation  latex  intricacy  structure  hardware  caching  workflow  editors  composition-decomposition  coupling-cohesion  exposition  technical-writing  thinking 
12 days ago by nhaliday
Bareiss algorithm - Wikipedia
During the execution of the Bareiss algorithm, every integer that is computed is the determinant of a submatrix of the input matrix. This allows the size of these integers to be bounded using the Hadamard inequality. Otherwise, the Bareiss algorithm may be viewed as a variant of Gaussian elimination and needs roughly the same number of arithmetic operations.
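
a minimal sketch of the fraction-free elimination described above (my own illustration, not from the article); the division by the previous pivot is exact precisely because each intermediate entry is the determinant of a submatrix:

def bareiss_det(M):
    # Bareiss fraction-free Gaussian elimination: exact integer determinant
    A = [row[:] for row in M]
    n = len(A)
    sign, prev_pivot = 1, 1
    for k in range(n - 1):
        if A[k][k] == 0:
            # pivot: swap in a lower row with a nonzero entry; flips the sign
            for i in range(k + 1, n):
                if A[i][k] != 0:
                    A[k], A[i] = A[i], A[k]
                    sign = -sign
                    break
            else:
                return 0
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                # exact integer division: the quotient is a (k+2)x(k+2) minor of M
                A[i][j] = (A[i][j] * A[k][k] - A[i][k] * A[k][j]) // prev_pivot
            A[i][k] = 0
        prev_pivot = A[k][k]
    return sign * A[-1][-1]

print(bareiss_det([[2, 3, 1], [4, 1, -3], [-1, 2, 2]]))  # 10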
nibble  ground-up  cs  tcs  algorithms  complexity  linear-algebra  numerics  sci-comp  fields 
13 days ago by nhaliday
algorithm, algorithmic, algorithmicx, algorithm2e, algpseudocode = confused - TeX - LaTeX Stack Exchange
algorithm2e is the only one currently maintained, but the answerer prefers the style of algorithmicx, and after perusing the docs, so do I
q-n-a  stackex  libraries  list  recommendations  comparison  publishing  cs  programming  algorithms  tools 
22 days ago by nhaliday
The Hanson-Yudkowsky AI-Foom Debate - Machine Intelligence Research Institute
How Deviant Recent AI Progress Lumpiness?: http://www.overcomingbias.com/2018/03/how-deviant-recent-ai-progress-lumpiness.html
I seem to disagree with most people working on artificial intelligence (AI) risk. While with them I expect rapid change once AI is powerful enough to replace most all human workers, I expect this change to be spread across the world, not concentrated in one main localized AI system. The efforts of AI risk folks to design AI systems whose values won’t drift might stop global AI value drift if there is just one main AI system. But doing so in a world of many AI systems at similar abilities levels requires strong global governance of AI systems, which is a tall order anytime soon. Their continued focus on preventing single system drift suggests that they expect a single main AI system.

The main reason that I understand to expect relatively local AI progress is if AI progress is unusually lumpy, i.e., arriving in unusually fewer larger packages rather than in the usual many smaller packages. If one AI team finds a big lump, it might jump way ahead of the other teams.

However, we have a vast literature on the lumpiness of research and innovation more generally, which clearly says that usually most of the value in innovation is found in many small innovations. We have also so far seen this in computer science (CS) and AI, even if there have been historical examples where much value was found in particular big innovations, such as nuclear weapons or the origin of humans.

Apparently many people associated with AI risk, including the star machine learning (ML) researchers that they often idolize, find it intuitively plausible that AI and ML progress is exceptionally lumpy. Such researchers often say, “My project is ‘huge’, and will soon do it all!” A decade ago my ex-co-blogger Eliezer Yudkowsky and I argued here on this blog about our differing estimates of AI progress lumpiness. He recently offered AlphaGo Zero as evidence of AI lumpiness:

...

In this post, let me give another example (beyond two big lumps in a row) of what could change my mind. I offer a clear observable indicator, for which data should be available now: deviant citation lumpiness in recent ML research. One standard measure of research impact is citations; bigger, lumpier developments gain more citations than smaller ones. And it turns out that the lumpiness of citations is remarkably constant across research fields! See this March 3 paper in Science:

I Still Don’t Get Foom: http://www.overcomingbias.com/2014/07/30855.html
All of which makes it look like I’m the one with the problem; everyone else gets it. Even so, I’m gonna try to explain my problem again, in the hope that someone can explain where I’m going wrong. Here goes.

“Intelligence” just means an ability to do mental/calculation tasks, averaged over many tasks. I’ve always found it plausible that machines will continue to do more kinds of mental tasks better, and eventually be better at pretty much all of them. But what I’ve found it hard to accept is a “local explosion.” This is where a single machine, built by a single project using only a tiny fraction of world resources, goes in a short time (e.g., weeks) from being so weak that it is usually beaten by a single human with the usual tools, to so powerful that it easily takes over the entire world. Yes, smarter machines may greatly increase overall economic growth rates, and yes such growth may be uneven. But this degree of unevenness seems implausibly extreme. Let me explain.

If we count by economic value, humans now do most of the mental tasks worth doing. Evolution has given us a brain chock-full of useful well-honed modules. And the fact that most mental tasks require the use of many modules is enough to explain why some of us are smarter than others. (There’d be a common “g” factor in task performance even with independent module variation.) Our modules aren’t that different from those of other primates, but because ours are different enough to allow lots of cultural transmission of innovation, we’ve out-competed other primates handily.

We’ve had computers for over seventy years, and have slowly built up libraries of software modules for them. Like brains, computers do mental tasks by combining modules. An important mental task is software innovation: improving these modules, adding new ones, and finding new ways to combine them. Ideas for new modules are sometimes inspired by the modules we see in our brains. When an innovation team finds an improvement, they usually sell access to it, which gives them resources for new projects, and lets others take advantage of their innovation.

...

In Bostrom’s graph above the line for an initially small project and system has a much higher slope, which means that it becomes in a short time vastly better at software innovation. Better than the entire rest of the world put together. And my key question is: how could it plausibly do that? Since the rest of the world is already trying the best it can to usefully innovate, and to abstract to promote such innovation, what exactly gives one small project such a huge advantage to let it innovate so much faster?

...

In fact, most software innovation seems to be driven by hardware advances, instead of innovator creativity. Apparently, good ideas are available but must usually wait until hardware is cheap enough to support them.

Yes, sometimes architectural choices have wider impacts. But I was an artificial intelligence researcher for nine years, ending twenty years ago, and I never saw an architecture choice make a huge difference, relative to other reasonable architecture choices. For most big systems, overall architecture matters a lot less than getting lots of detail right. Researchers have long wandered the space of architectures, mostly rediscovering variations on what others found before.

Some hope that a small project could be much better at innovation because it specializes in that topic, and much better understands new theoretical insights into the basic nature of innovation or intelligence. But I don’t think those are actually topics where one can usefully specialize much, or where we’ll find much useful new theory. To be much better at learning, the project would instead have to be much better at hundreds of specific kinds of learning. Which is very hard to do in a small project.

What does Bostrom say? Alas, not much. He distinguishes several advantages of digital over human minds, but all software shares those advantages. Bostrom also distinguishes five paths: better software, brain emulation (i.e., ems), biological enhancement of humans, brain-computer interfaces, and better human organizations. He doesn’t think interfaces would work, and sees organizations and better biology as only playing supporting roles.

...

Similarly, while you might imagine someday standing in awe in front of a super intelligence that embodies all the power of a new age, superintelligence just isn’t the sort of thing that one project could invent. As “intelligence” is just the name we give to being better at many mental tasks by using many good mental modules, there’s no one place to improve it. So I can’t see a plausible way one project could increase its intelligence vastly faster than could the rest of the world.

Takeoff speeds: https://sideways-view.com/2018/02/24/takeoff-speeds/
Futurists have argued for years about whether the development of AGI will look more like a breakthrough within a small group (“fast takeoff”), or a continuous acceleration distributed across the broader economy or a large firm (“slow takeoff”).

I currently think a slow takeoff is significantly more likely. This post explains some of my reasoning and why I think it matters. Mostly the post lists arguments I often hear for a fast takeoff and explains why I don’t find them compelling.

(Note: this is not a post about whether an intelligence explosion will occur. That seems very likely to me. Quantitatively I expect it to go along these lines. So e.g. while I disagree with many of the claims and assumptions in Intelligence Explosion Microeconomics, I don’t disagree with the central thesis or with most of the arguments.)
ratty  lesswrong  subculture  miri-cfar  ai  risk  ai-control  futurism  books  debate  hanson  big-yud  prediction  contrarianism  singularity  local-global  speed  speedometer  time  frontier  distribution  smoothness  shift  pdf  economics  track-record  abstraction  analogy  links  wiki  list  evolution  mutation  selection  optimization  search  iteration-recursion  intelligence  metameta  chart  analysis  number  ems  coordination  cooperate-defect  death  values  formal-values  flux-stasis  philosophy  farmers-and-foragers  malthus  scale  studying  innovation  insight  conceptual-vocab  growth-econ  egalitarianism-hierarchy  inequality  authoritarianism  wealth  near-far  rationality  epistemic  biases  cycles  competition  arms  zero-positive-sum  deterrence  war  peace-violence  winner-take-all  technology  moloch  multi  plots  research  science  publishing  humanity  labor  marginal  urban-rural  structure  composition-decomposition  complex-systems  gregory-clark  decentralized  heavy-industry  magnitude  multiplicative  endogenous-exogenous  models  uncertainty  decision-theory  time-prefer 
april 2018 by nhaliday
Information Processing: US Needs a National AI Strategy: A Sputnik Moment?
FT podcasts on US-China competition and AI: http://infoproc.blogspot.com/2018/05/ft-podcasts-on-us-china-competition-and.html

A new recommended career path for effective altruists: China specialist: https://80000hours.org/articles/china-careers/
Our rough guess is that it would be useful for there to be at least ten people in the community with good knowledge in this area within the next few years.

By “good knowledge” we mean they’ve spent at least 3 years studying these topics and/or living in China.

We chose ten because that would be enough for several people to cover each of the major areas listed (e.g. 4 within AI, 2 within biorisk, 2 within foreign relations, 1 in another area).

AI Policy and Governance Internship: https://www.fhi.ox.ac.uk/ai-policy-governance-internship/

https://www.fhi.ox.ac.uk/deciphering-chinas-ai-dream/
https://www.fhi.ox.ac.uk/wp-content/uploads/Deciphering_Chinas_AI-Dream.pdf
Deciphering China’s AI Dream
The context, components, capabilities, and consequences of
China’s strategy to lead the world in AI

Europe’s AI delusion: https://www.politico.eu/article/opinion-europes-ai-delusion/
Brussels is failing to grasp threats and opportunities of artificial intelligence.
By BRUNO MAÇÃES

When the computer program AlphaGo beat the Chinese professional Go player Ke Jie in a three-part match, it didn’t take long for Beijing to realize the implications.

If algorithms can already surpass the abilities of a master Go player, it can’t be long before they will be similarly supreme in the activity to which the classic board game has always been compared: war.

As I’ve written before, the great conflict of our time is about who can control the next wave of technological development: the widespread application of artificial intelligence in the economic and military spheres.

...

If China’s ambitions sound plausible, that’s because the country’s achievements in deep learning are so impressive already. After Microsoft announced that its speech recognition software surpassed human-level language recognition in October 2016, Andrew Ng, then head of research at Baidu, tweeted: “We had surpassed human-level Chinese recognition in 2015; happy to see Microsoft also get there for English less than a year later.”

...

One obvious advantage China enjoys is access to almost unlimited pools of data. The machine-learning technologies boosting the current wave of AI expansion are only as good as the amount of data they can use. That could be the number of people driving cars, photos labeled on the internet or voice samples for translation apps. With 700 or 800 million Chinese internet users and fewer data protection rules, China is as rich in data as the Gulf States are in oil.

How can Europe and the United States compete? They will have to be commensurately better in developing algorithms and computer power. Sadly, Europe is falling behind in these areas as well.

...

Chinese commentators have embraced the idea of a coming singularity: the moment when AI surpasses human ability. At that point a number of interesting things happen. First, future AI development will be conducted by AI itself, creating exponential feedback loops. Second, humans will become useless for waging war. At that point, the human mind will be unable to keep pace with robotized warfare. With advanced image recognition, data analytics, prediction systems, military brain science and unmanned systems, devastating wars might be waged and won in a matter of minutes.

...

The argument in the new strategy is fully defensive. It first considers how AI raises new threats and then goes on to discuss the opportunities. The EU and Chinese strategies follow opposite logics. Already on its second page, the text frets about the legal and ethical problems raised by AI and discusses the “legitimate concerns” the technology generates.

The EU’s strategy is organized around three concerns: the need to boost Europe’s AI capacity, ethical issues and social challenges. Unfortunately, even the first dimension quickly turns out to be about “European values” and the need to place “the human” at the center of AI — forgetting that the first word in AI is not “human” but “artificial.”

https://twitter.com/mr_scientism/status/983057591298351104
https://archive.is/m3Njh
US military: "LOL, China thinks it's going to be a major player in AI, but we've got all the top AI researchers. You guys will help us develop weapons, right?"

US AI researchers: "No."

US military: "But... maybe just a computer vision app."

US AI researchers: "NO."

https://www.theverge.com/2018/4/4/17196818/ai-boycot-killer-robots-kaist-university-hanwha
https://www.nytimes.com/2018/04/04/technology/google-letter-ceo-pentagon-project.html
https://twitter.com/mr_scientism/status/981685030417326080
https://archive.is/3wbHm
AI-risk was a mistake.
hsu  scitariat  commentary  video  presentation  comparison  usa  china  asia  sinosphere  frontier  technology  science  ai  speedometer  innovation  google  barons  deepgoog  stories  white-paper  strategy  migration  iran  human-capital  corporation  creative  alien-character  military  human-ml  nationalism-globalism  security  investing  government  games  deterrence  defense  nuclear  arms  competition  risk  ai-control  musk  optimism  multi  news  org:mag  europe  EU  80000-hours  effective-altruism  proposal  article  realness  offense-defense  war  biotech  altruism  language  foreign-lang  philosophy  the-great-west-whale  enhancement  foreign-policy  geopolitics  anglo  jobs  career  planning  hmm  travel  charity  tech  intel  media  teaching  tutoring  russia  india  miri-cfar  pdf  automation  class  labor  polisci  society  trust  n-factor  corruption  leviathan  ethics  authoritarianism  individualism-collectivism  revolution  economics  inequality  civic  law  regulation  data  scale  pro-rata  capital  zero-positive-sum  cooperate-defect  distribution  time-series  tre 
february 2018 by nhaliday
Rank aggregation basics: Local Kemeny optimisation | David R. MacIver
This turns our problem from a global search to a local one: Basically we can start from any point in the search space and search locally by swapping adjacent pairs until we hit a minimum. This turns out to be quite easy to do. _We basically run insertion sort_: At step n we have the first n items in a locally Kemeny optimal order. Swap the n+1th item backwards until the majority think its predecessor is < it. This ensures all adjacent pairs are in the majority order, so swapping them would result in a greater than or equal K. This is of course an O(n^2) algorithm. In fact, the problem of merely finding a locally Kemeny optimal solution can be done in O(n log(n)) (for much the same reason as you can sort better than insertion sort). You just take the directed graph of majority votes and find a Hamiltonian Path. The nice thing about the above version of the algorithm is that it gives you a lot of control over where you start your search.
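
a minimal sketch of the insertion-sort procedure described above (my own illustration; the pairwise-majority counts are an assumed input format):

def locally_kemeny_optimal(items, prefers):
    # prefers(a, b) -> number of voters who rank a above b (assumed input)
    # Insertion sort with the majority relation as comparator: after each step the
    # prefix is locally Kemeny optimal; the new item is swapped backwards until a
    # majority prefers its predecessor, so all adjacent pairs end up in majority order.
    order = []
    for item in items:
        order.append(item)
        i = len(order) - 1
        while i > 0 and prefers(order[i], order[i - 1]) > prefers(order[i - 1], order[i]):
            order[i - 1], order[i] = order[i], order[i - 1]
            i -= 1
    return order

# toy example: three voters ranking candidates a, b, c
rankings = [["a", "b", "c"], ["b", "a", "c"], ["a", "c", "b"]]
def prefers(x, y):
    return sum(r.index(x) < r.index(y) for r in rankings)

print(locally_kemeny_optimal(["c", "b", "a"], prefers))  # ['a', 'b', 'c']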
techtariat  liner-notes  papers  tcs  algorithms  machine-learning  acm  optimization  approximation  local-global  orders  graphs  graph-theory  explanation  iteration-recursion  time-complexity  nibble 
september 2017 by nhaliday
Kelly criterion - Wikipedia
In probability theory and intertemporal portfolio choice, the Kelly criterion, Kelly strategy, Kelly formula, or Kelly bet, is a formula used to determine the optimal size of a series of bets. In most gambling scenarios, and some investing scenarios under some simplifying assumptions, the Kelly strategy will do better than any essentially different strategy in the long run (that is, over a span of time in which the observed fraction of bets that are successful equals the probability that any given bet will be successful). It was described by J. L. Kelly, Jr, a researcher at Bell Labs, in 1956.[1] The practical use of the formula has been demonstrated.[2][3][4]

The Kelly Criterion is to bet a predetermined fraction of assets and can be counterintuitive. In one study,[5][6] each participant was given $25 and asked to bet on a coin that would land heads 60% of the time. Participants had 30 minutes to play, so could place about 300 bets, and the prizes were capped at $250. Behavior was far from optimal. "Remarkably, 28% of the participants went bust, and the average payout was just $91. Only 21% of the participants reached the maximum. 18 of the 61 participants bet everything on one toss, while two-thirds gambled on tails at some stage in the experiment." Using the Kelly criterion and based on the odds in the experiment, the right approach would be to bet 20% of the pot on each throw (see first example in Statement below). If losing, the size of the bet gets cut; if winning, the stake increases.
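
a minimal sketch of the arithmetic behind that 20% figure, assuming the standard formula f* = p - q/b for a bet paying b-to-1 net odds and won with probability p (here p = 0.6, b = 1):

import math

def kelly_fraction(p, b):
    # f* = p - q/b: fraction of bankroll to stake when a unit bet returns b on a win
    # (probability p) and loses the stake on a loss (probability q = 1 - p)
    q = 1.0 - p
    return p - q / b

f = kelly_fraction(0.6, 1.0)
print(f)  # 0.2 -> bet 20% of the current bankroll on each toss

# expected log-growth per toss at the Kelly fraction (the quantity Kelly betting maximizes)
print(0.6 * math.log(1 + f) + 0.4 * math.log(1 - f))  # ~0.0201, i.e. about 2% per toss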
nibble  betting  investing  ORFE  acm  checklists  levers  probability  algorithms  wiki  reference  atoms  extrema  parsimony  tidbits  decision-theory  decision-making  street-fighting  mental-math  calculation 
august 2017 by nhaliday
Talks
Quantum Supremacy: Office of Science and Technology Policy QIS Forum, Eisenhower Executive Office Building, White House Complex, Washington DC, October 18, 2016. Another version at UTCS Faculty Lunch, October 26, 2016. Another version at UT Austin Physics Colloquium, Austin, TX, November 9, 2016.

Complexity-Theoretic Foundations of Quantum Supremacy Experiments: Quantum Algorithms Workshop, Aspen Center for Physics, Aspen, CO, March 25, 2016

When Exactly Do Quantum Computers Provide A Speedup?: Yale Quantum Institute Seminar, Yale University, New Haven, CT, October 10, 2014. Another version at UT Austin Physics Colloquium, Austin, TX, November 19, 2014; Applied and Interdisciplinary Mathematics Seminar, Northeastern University, Boston, MA, November 25, 2014; Hebrew University Physics Colloquium, Jerusalem, Israel, January 5, 2015; Computer Science Colloquium, Technion, Haifa, Israel, January 8, 2015; Stanford University Physics Colloquium, January 27, 2015
tcstariat  aaronson  tcs  complexity  quantum  quantum-info  talks  list  slides  accretion  algorithms  applications  physics  nibble  frontier  computation  volo-avolo  speedometer  questions 
may 2017 by nhaliday
Strings, periods, and borders
A border of x is any proper prefix of x that equals a suffix of x.

...overlapping borders of a string imply that the string is periodic...

In the border array β[1..n] of x, entry β[i] is the length of the longest border of x[1..i].
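
a minimal sketch computing the border array (the same recurrence as the KMP failure function), translated from the slides' 1-indexing to 0-indexing:

def border_array(x):
    # beta[i] = length of the longest border (proper prefix that is also a suffix)
    # of the prefix x[:i+1]
    n = len(x)
    beta = [0] * n
    for i in range(1, n):
        b = beta[i - 1]
        while b > 0 and x[i] != x[b]:
            b = beta[b - 1]
        beta[i] = b + 1 if x[i] == x[b] else 0
    return beta

s = "abaababaab"
print(border_array(s))               # [0, 0, 1, 1, 2, 3, 2, 3, 4, 5]
# overlapping borders <-> periodicity: the shortest period of x is n - beta[n-1]
print(len(s) - border_array(s)[-1])  # 5, and indeed "abaab" * 2 == s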
pdf  nibble  slides  lectures  algorithms  strings  exposition  yoga  atoms  levers  tidbits  sequential 
may 2017 by nhaliday
inequalities - Is the Jaccard distance a distance? - MathOverflow
Steinhaus Transform
the referenced survey: http://kenclarkson.org/nn_survey/p.pdf

It's known that this transformation produces a metric from a metric. Now if you take as the base metric D the symmetric difference between two sets, what you end up with is the Jaccard distance (which actually is known by many other names as well).
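
a quick numerical check of that claim (my own sketch), assuming the standard form of the Steinhaus transform D'(x, y) = 2 D(x, y) / (D(x, a) + D(y, a) + D(x, y)) with base metric D = size of the symmetric difference and base point a = the empty set:

import random

def d_symdiff(a, b):
    # base metric D: size of the symmetric difference
    return len(a ^ b)

def steinhaus(a, b, base=frozenset()):
    # Steinhaus transform of D, based at the empty set
    d = d_symdiff(a, b)
    if d == 0:
        return 0.0
    return 2 * d / (d_symdiff(a, base) + d_symdiff(b, base) + d)

def jaccard_distance(a, b):
    union = a | b
    return len(a ^ b) / len(union) if union else 0.0

random.seed(0)
for _ in range(1000):
    a = frozenset(random.sample(range(20), random.randint(0, 10)))
    b = frozenset(random.sample(range(20), random.randint(0, 10)))
    assert abs(steinhaus(a, b) - jaccard_distance(a, b)) < 1e-12
print("Steinhaus(symmetric difference, base = empty set) == Jaccard distance")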
q-n-a  overflow  nibble  math  acm  sublinear  metrics  metric-space  proofs  math.CO  tcstariat  arrows  reduction  measure  math.MG  similarity  multi  papers  survey  computational-geometry  cs  algorithms  pdf  positivity  msr  tidbits  intersection  curvature  convexity-curvature  intersection-connectedness  signum 
february 2017 by nhaliday
MinHash - Wikipedia
- goal: compute Jaccard coefficient J(A, B) = |A∩B| / |A∪B| in sublinear space
- idea: pick random injective hash function h, define h_min(S) = argmin_{x in S} h(x), and note that Pr[h_min(A) = h_min(B)] = J(A, B)
- reduce variance w/ Chernoff bound
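
a minimal sketch of the estimator (my own illustration; Python's hash of a (seed, x) pair stands in for a random hash function, which a real implementation would replace with proper universal hashing):

import random

def minhash_signature(s, seeds, m=2**61 - 1):
    # h_min(S) under the j-th hash: store min over x in S of h_j(x)
    return [min(hash((seed, x)) % m for x in s) for seed in seeds]

def estimate_jaccard(sig_a, sig_b):
    # Pr[h_min(A) = h_min(B)] = J(A, B), so average the collision indicator over hashes
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

def jaccard(a, b):
    return len(a & b) / len(a | b)

random.seed(0)
seeds = [random.getrandbits(64) for _ in range(512)]
A, B = set(range(0, 60)), set(range(30, 90))
sig_A, sig_B = minhash_signature(A, seeds), minhash_signature(B, seeds)
print(jaccard(A, B), estimate_jaccard(sig_A, sig_B))  # 1/3 vs. an estimate close to it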
algorithms  data-structures  sublinear  hashing  wiki  reference  random  tcs  nibble  measure  metric-space  metrics  similarity  PAC  intersection  intersection-connectedness 
february 2017 by nhaliday
Count–min sketch - Wikipedia
- estimates frequency vector (f_i)
- idea:
d = O(log 1/δ) hash functions h_j: [n] -> [w] (w = O(1/ε))
d*w counters a[r, c]
for each event i, increment counters a[1, h_1(i)], a[2, h_2(i)], ..., a[d, h_d(i)]
estimate for f_i is min_j a[j, h_j(i)]
- never underestimates but upward-biased
- pf: Markov to get constant probability of success, then exponential decrease with repetition
lecture notes: http://theory.stanford.edu/~tim/s15/l/l2.pdf
- note this can work w/ negative updates. just use median instead of min. pf still uses markov on the absolute value of error.
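
a minimal sketch of the counter layout and min-query described above (my own illustration; Python's hash of a (seed, item) pair stands in for the h_j):

import random

class CountMinSketch:
    def __init__(self, width, depth, seed=0):
        # depth = d hash functions into [width], d*width counters a[r, c]
        rng = random.Random(seed)
        self.width = width
        self.seeds = [rng.getrandbits(64) for _ in range(depth)]
        self.table = [[0] * width for _ in range(depth)]

    def _cols(self, item):
        return [hash((s, item)) % self.width for s in self.seeds]

    def update(self, item, count=1):
        # for each event, increment a[j, h_j(item)] in every row j
        for j, c in enumerate(self._cols(item)):
            self.table[j][c] += count

    def query(self, item):
        # estimate of f_item: min_j a[j, h_j(item)] -- never underestimates
        # (for nonnegative updates), but collisions bias it upward
        return min(self.table[j][c] for j, c in enumerate(self._cols(item)))

cms = CountMinSketch(width=200, depth=5)
for x in [1] * 1000 + [2] * 100 + list(range(3, 500)):
    cms.update(x)
print(cms.query(1), cms.query(2), cms.query(42))  # about 1000, 100, 1 (possibly overestimated)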
algorithms  data-structures  sublinear  hashing  wiki  reference  bias-variance  approximation  random  tcs  multi  stanford  lecture-notes  pdf  tim-roughgarden  nibble  pigeonhole-markov  PAC 
february 2017 by nhaliday
Lecture 11
In which we prove that the Edmonds-Karp algorithm for maximum flow is a strongly polynomial time algorithm, and we begin to talk about the push-relabel approach.
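
not from the notes themselves, but a minimal sketch of Edmonds-Karp (Ford-Fulkerson with BFS-shortest augmenting paths), assuming capacities are given as a dict of dicts:

from collections import deque

def edmonds_karp(capacity, source, sink):
    # residual graph: forward capacities plus zero-capacity reverse edges
    residual = {u: dict(nbrs) for u, nbrs in capacity.items()}
    for u in capacity:
        for v in capacity[u]:
            residual.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        # BFS finds a shortest augmenting path, which is what makes the number of
        # augmentations O(V * E) and the algorithm strongly polynomial
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, cap in residual[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return flow
        # bottleneck capacity along the path, then push that much flow
        bottleneck, v = float("inf"), sink
        while parent[v] is not None:
            bottleneck = min(bottleneck, residual[parent[v]][v])
            v = parent[v]
        v = sink
        while parent[v] is not None:
            u = parent[v]
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
            v = u
        flow += bottleneck

caps = {"s": {"a": 10, "b": 5}, "a": {"b": 15, "t": 10}, "b": {"t": 10}}
print(edmonds_karp(caps, "s", "t"))  # 15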
pdf  lecture-notes  exposition  optimization  algorithms  linear-programming  graphs  stanford  luca-trevisan  nibble  direction  stock-flow  tcs  constraint-satisfaction  tcstariat 
january 2017 by nhaliday
Lecture 16
In which we define a multi-commodity flow problem, and we see that its dual is the relaxation of a useful graph partitioning problem. The relaxation can be rounded to yield an approximate graph partitioning algorithm.
pdf  lecture-notes  exposition  optimization  linear-programming  graphs  graph-theory  algorithms  duality  rounding  stanford  approximation  rand-approx  luca-trevisan  relaxation  nibble  stock-flow  constraint-satisfaction  tcs  tcstariat 
january 2017 by nhaliday
Computational Complexity: Favorite Theorems: The Yao Principle
The Yao Principle applies when we don't consider the algorithmic complexity of the players. For example, in communication complexity we have two players who each have a separate half of an input string and they want to compute some function of the input with the minimum amount of communication between them. The Yao principle states that the best probabilistic strategies for the players will achieve exactly the same communication bounds as the best deterministic strategy over a worst-case distribution of inputs.

The Yao Principle plays a smaller role where we measure the running time of an algorithm since applying the Principle would require solving an extremely large linear program. But since so many of our bounds are in information-based models like communication and decision-tree complexity, the Yao Principle, though not particularly complicated, plays an important role in lower bounds in a large number of results in our field.
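
in symbols (a standard formulation, stated here for concreteness rather than quoted from the post), with c(A, x) the cost of algorithm A on input x:

\min_{R\,\text{randomized}} \; \max_{x} \; \mathbb{E}[c(R, x)]
  \;=\;
\max_{\mu} \; \min_{A\,\text{deterministic}} \; \mathbb{E}_{x \sim \mu}[c(A, x)]

The useful direction for lower bounds is the inequality from left to right: exhibit a hard input distribution μ, lower-bound the expected cost of every deterministic algorithm on μ, and that bound transfers to the worst-case expected cost of any randomized algorithm.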
tcstariat  tcs  complexity  adversarial  rand-approx  algorithms  game-theory  yoga  levers  communication-complexity  random  lower-bounds  average-case  nibble  org:bleg 
january 2017 by nhaliday