nhaliday + cs   136

Ask HN: What's a promising area to work on? | Hacker News
hn  discussion  q-n-a  ideas  impact  trends  the-bones  speedometer  technology  applications  tech  cs  programming  list  top-n  recommendations  lens  machine-learning  deep-learning  security  privacy  crypto  software  hardware  cloud  biotech  CRISPR  bioinformatics  biohacking  blockchain  cryptocurrency  crypto-anarchy  healthcare  graphics  SIGGRAPH  vr  automation  universalism-particularism  expert-experience  reddit  social  arbitrage  supply-demand  ubiquity  cost-benefit  compensation  chart  career  planning  strategy  long-term  advice  sub-super  commentary  rhetoric  org:com  techtariat  human-capital  prioritizing  tech-infrastructure  working-stiff  data-science 
3 days ago by nhaliday
Parallel Computing: Theory and Practice
by Umut Acar who also co-authored a different book on parallel algorithms w/ Guy Blelloch from a more high-level and functional perspective
unit  books  cmu  cs  programming  tcs  algorithms  concurrency  c(pp)  divide-and-conquer  libraries  complexity  time-complexity  data-structures  orders  graphs  graph-theory  trees  models  functional  metal-to-virtual  systems 
8 weeks ago by nhaliday
Philip Guo - A Five-Minute Guide to Ph.D. Program Applications
If you spend five minutes reading this article, you'll learn how to make your Ph.D. program application the strongest possible. Why five minutes? Because it's probably the longest that anyone will spend reading your application.
techtariat  grad-school  phd  advice  transitions  career  progression  hi-order-bits  cs  init 
9 weeks ago by nhaliday
The Existential Risk of Math Errors - Gwern.net
How big is this upper bound? Mathematicians have often made errors in proofs. But it’s rarer for ideas to be accepted for a long time and then rejected. We can divide errors into 2 basic cases corresponding to type I and type II errors:

1. Mistakes where the theorem is still true, but the proof was incorrect (type I)
2. Mistakes where the theorem was false, and the proof was also necessarily incorrect (type II)

Before someone comes up with a final answer, a mathematician may have many levels of intuition in formulating & working on the problem, but we’ll consider the final end-product where the mathematician feels satisfied that he has solved it. Case 1 is perhaps the most common case, with innumerable examples; this is sometimes due to mistakes in the proof that anyone would accept as a mistake, but many of these cases are due to changing standards of proof. For example, when David Hilbert discovered errors in Euclid’s proofs which no one noticed before, the theorems were still true, and the gaps were more due to Hilbert being a modern mathematician thinking in terms of formal systems (which of course Euclid did not think in). (David Hilbert himself turns out to be a useful example of the other kind of error: his famous list of 23 problems was accompanied by definite opinions on the outcome of each problem and sometimes timings, several of which were wrong or questionable.) Similarly, early calculus used ‘infinitesimals’ which were sometimes treated as being 0 and sometimes treated as an indefinitely small non-zero number; this was incoherent and, strictly speaking, practically all of the calculus results were wrong because they relied on an incoherent concept - but of course the results were some of the greatest mathematical work ever conducted, and when later mathematicians put calculus on a more rigorous footing, they immediately re-derived those results (sometimes with important qualifications), and doubtless as modern math evolves other fields have sometimes needed to go back and clean up the foundations and will in the future.

...

Isaac Newton, incidentally, gave two proofs of the same solution to a problem in probability, one via enumeration and the other more abstract; the enumeration was correct, but the other proof was totally wrong, and this was not noticed for a long time, leading Stigler to remark:

...

TYPE I > TYPE II?
“Lefschetz was a purely intuitive mathematician. It was said of him that he had never given a completely correct proof, but had never made a wrong guess either.”
- Gian-Carlo Rota

Case 2 is disturbing, since it is a case in which we wind up with false beliefs and also false beliefs about our beliefs (we no longer know that we don’t know). Case 2 could lead to extinction.

...

Except, errors do not seem to be evenly & randomly distributed between case 1 and case 2. There seem to be far more case 1s than case 2s, as already mentioned in the early calculus example: far more than 50% of the early calculus results were correct when checked more rigorously. Richard Hamming attributes to Ralph Boas a comment that, while editing Mathematical Reviews, “of the new results in the papers reviewed most are true but the corresponding proofs are perhaps half the time plain wrong”.

...

Gian-Carlo Rota gives us an example with Hilbert:

...

Olga labored for three years; it turned out that all mistakes could be corrected without any major changes in the statement of the theorems. There was one exception, a paper Hilbert wrote in his old age, which could not be fixed; it was a purported proof of the continuum hypothesis; you will find it in a volume of the Mathematische Annalen of the early thirties.

...

Leslie Lamport advocates for machine-checked proofs and a more rigorous style of proofs similar to natural deduction, noting a mathematician acquaintance guesses at a broad error rate of 1/3 and that he routinely found mistakes in his own proofs and, worse, believed false conjectures.

[more on these "structured proofs":
https://academia.stackexchange.com/questions/52435/does-anyone-actually-publish-structured-proofs
https://mathoverflow.net/questions/35727/community-experiences-writing-lamports-structured-proofs
]

We can probably add software to that list: early software engineering work found that, dismayingly, bug rates seem to be simply a function of lines of code, and one would expect diseconomies of scale. So one would expect that in going from the ~4,000 lines of code of the Microsoft DOS operating system kernel to the ~50,000,000 lines of code in Windows Server 2003 (with full systems of applications and libraries being even larger: the comprehensive Debian repository in 2007 contained ~323,551,126 lines of code) that the number of active bugs at any time would be… fairly large. Mathematical software is hopefully better, but practitioners still run into issues (eg Durán et al 2014, Fonseca et al 2017) and I don’t know of any research pinning down how buggy key mathematical systems like Mathematica are or how much published mathematics may be erroneous due to bugs. This general problem led to predictions of doom and spurred much research into automated proof-checking, static analysis, and functional languages.

[related:
https://mathoverflow.net/questions/11517/computer-algebra-errors
I don't know any interesting bugs in symbolic algebra packages but I know a true, enlightening and entertaining story about something that looked like a bug but wasn't.

Define sinc(x) = (sin x)/x.

Someone found the following result in an algebra package: ∫_0^∞ dx sinc(x) = π/2
They then found the following results:

...

So of course when they got:

∫_0^∞ dx sinc(x) sinc(x/3) sinc(x/5) ⋯ sinc(x/15) = (467807924713440738696537864469/935615849440640907310521750000) π

hmm:
Which means that nobody knows Fourier analysis nowadays. Very sad and discouraging story... – fedja Jan 29 '10 at 18:47
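
A quick numeric sketch of the phenomenon (these are the Borwein integrals) in Python with mpmath; the quadosc call, period choice, and precision here are assumptions on my part, so treat it as illustrative rather than a verified computation:

```python
# Rough check of the sinc-product integrals above; assumes mpmath's quadosc
# converges acceptably for these oscillatory integrands (not guaranteed).
from functools import reduce
from operator import mul
from mpmath import mp, mpf, pi, sin, inf, quadosc, nstr

mp.dps = 30  # working precision in decimal digits

def sinc(x):
    return sin(x) / x if x != 0 else mpf(1)

def borwein(denominators):
    f = lambda x: reduce(mul, (sinc(x / d) for d in denominators))
    return quadosc(f, [0, inf], period=2 * pi)

print(nstr(borwein([1]), 15), "vs", nstr(pi / 2, 15))    # agrees with pi/2
print(nstr(borwein([1, 3, 5, 7, 9, 11, 13]), 15))        # still pi/2
print(nstr(borwein([1, 3, 5, 7, 9, 11, 13, 15]), 15))    # now slightly below pi/2
```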

--

Because the most popular systems are all commercial, they tend to guard their bug database rather closely -- making them public would seriously cut their sales. For example, for the open source project Sage (which is quite young), you can get a list of all the known bugs from this page. 1582 known issues on Feb.16th 2010 (which includes feature requests, problems with documentation, etc).

That is an order of magnitude less than the commercial systems. And it's not because it is better, it is because it is younger and smaller. It might be better, but until SAGE does a lot of analysis (about 40% of CAS bugs are there) and a fancy user interface (another 40%), it is too hard to compare.

I once ran a graduate course whose core topic was studying the fundamental disconnect between the algebraic nature of CAS and the analytic nature of what it is mostly used for. There are issues of logic -- CASes work more or less in an intensional logic, while most of analysis is stated in a purely extensional fashion. There is no well-defined 'denotational semantics' for expressions-as-functions, which strongly contributes to the deeper bugs in CASes.]

...

Should such widely-believed conjectures as P≠NP or the Riemann hypothesis turn out to be false, then because they are assumed by so many existing proofs, a far larger math holocaust would ensue - and our previous estimates of error rates will turn out to have been substantial underestimates. But it may be a cloud with a silver lining, if it doesn’t come at a time of danger.

https://mathoverflow.net/questions/338607/why-doesnt-mathematics-collapse-down-even-though-humans-quite-often-make-mista

more on formal methods in programming:
https://www.quantamagazine.org/formal-verification-creates-hacker-proof-code-20160920/
https://intelligence.org/2014/03/02/bob-constable/

https://softwareengineering.stackexchange.com/questions/375342/what-are-the-barriers-that-prevent-widespread-adoption-of-formal-methods
Update: measured effort
In the October 2018 issue of Communications of the ACM there is an interesting article about Formally verified software in the real world with some estimates of the effort.

Interestingly (based on OS development for military equipment), it seems that producing formally proved software requires 3.3 times more effort than with traditional engineering techniques. So it's really costly.

On the other hand, it requires 2.3 times less effort to get high security software this way than with traditionally engineered software if you add the effort to make such software certified at a high security level (EAL 7). So if you have high reliability or security requirements there is definitely a business case for going formal.

WHY DON'T PEOPLE USE FORMAL METHODS?: https://www.hillelwayne.com/post/why-dont-people-use-formal-methods/
You can see examples of how all of these look at Let’s Prove Leftpad. HOL4 and Isabelle are good examples of “independent theorem” specs, SPARK and Dafny have “embedded assertion” specs, and Coq and Agda have “dependent type” specs.

If you squint a bit it looks like these three forms of code spec map to the three main domains of automated correctness checking: tests, contracts, and types. This is not a coincidence. Correctness is a spectrum, and formal verification is one extreme of that spectrum. As we reduce the rigour (and effort) of our verification we get simpler and narrower checks, whether that means limiting the explored state space, using weaker types, or pushing verification to the runtime. Any means of total specification then becomes a means of partial specification, and vice versa: many consider Cleanroom a formal verification technique, which primarily works by pushing code review far beyond what’s humanly possible.
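
To make the “embedded assertion” flavor concrete, here is a hypothetical Python version of leftpad with its contract written as runtime asserts (my own illustration, not taken from the Let’s Prove Leftpad repo, where the conditions are checked statically by tools like SPARK or Dafny):

```python
# Illustrative "embedded assertion" spec for leftpad; asserts are only checked
# at runtime, whereas the formally verified versions prove them for all inputs.
def leftpad(pad: str, n: int, s: str) -> str:
    assert len(pad) == 1                          # precondition: single pad character
    out = pad * max(n - len(s), 0) + s
    # postconditions -- the specification of what "leftpad" means:
    assert len(out) == max(n, len(s))             # output length is max(n, len(s))
    assert out.endswith(s)                        # the suffix is the original string
    assert set(out[:len(out) - len(s)]) <= {pad}  # everything before it is padding
    return out

print(leftpad("0", 5, "42"))  # "00042"
```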

...

The question, then: “is 90/95/99% correct significantly cheaper than 100% correct?” The answer is very yes. We all are comfortable saying that a codebase we’ve well-tested and well-typed is mostly correct modulo a few fixes in prod, and we’re even writing more than four lines of code a day. In fact, the vast… [more]
ratty  gwern  analysis  essay  realness  truth  correctness  reason  philosophy  math  proofs  formal-methods  cs  programming  engineering  worse-is-better/the-right-thing  intuition  giants  old-anglo  error  street-fighting  heuristic  zooming  risk  threat-modeling  software  lens  logic  inference  physics  differential  geometry  estimate  distribution  robust  speculation  nonlinearity  cost-benefit  convexity-curvature  measure  scale  trivia  cocktail  history  early-modern  europe  math.CA  rigor  news  org:mag  org:sci  miri-cfar  pdf  thesis  comparison  examples  org:junk  q-n-a  stackex  pragmatic  tradeoffs  cracker-prog  techtariat  invariance  DSL  chart  ecosystem  grokkability  heavyweights  CAS  static-dynamic  lower-bounds  complexity  tcs  open-problems  big-surf  ideas  certificates-recognition  proof-systems  PCP  mediterranean  SDP  meta:prediction  epistemic  questions  guessing  distributed  overflow  nibble  soft-question  track-record  big-list  hmm  frontier  state-of-art  move-fast-(and-break-things)  grokkability-clarity  technical-writing  trust 
july 2019 by nhaliday
The Law of Leaky Abstractions – Joel on Software
[TCP/IP example]

All non-trivial abstractions, to some degree, are leaky.

...

- Something as simple as iterating over a large two-dimensional array can have radically different performance if you do it horizontally rather than vertically, depending on the “grain of the wood” — one direction may result in vastly more page faults than the other direction, and page faults are slow. Even assembly programmers are supposed to be allowed to pretend that they have a big flat address space, but virtual memory means it’s really just an abstraction, which leaks when there’s a page fault and certain memory fetches take way more nanoseconds than other memory fetches.

- The SQL language is meant to abstract away the procedural steps that are needed to query a database, instead allowing you to define merely what you want and let the database figure out the procedural steps to query it. But in some cases, certain SQL queries are thousands of times slower than other logically equivalent queries. A famous example of this is that some SQL servers are dramatically faster if you specify “where a=b and b=c and a=c” than if you only specify “where a=b and b=c” even though the result set is the same. You’re not supposed to have to care about the procedure, only the specification. But sometimes the abstraction leaks and causes horrible performance and you have to break out the query plan analyzer and study what it did wrong, and figure out how to make your query run faster.

...

- C++ string classes are supposed to let you pretend that strings are first-class data. They try to abstract away the fact that strings are hard and let you act as if they were as easy as integers. Almost all C++ string classes overload the + operator so you can write s + “bar” to concatenate. But you know what? No matter how hard they try, there is no C++ string class on Earth that will let you type “foo” + “bar”, because string literals in C++ are always char*’s, never strings. The abstraction has sprung a leak that the language doesn’t let you plug. (Amusingly, the history of the evolution of C++ over time can be described as a history of trying to plug the leaks in the string abstraction. Why they couldn’t just add a native string class to the language itself eludes me at the moment.)

- And you can’t drive as fast when it’s raining, even though your car has windshield wipers and headlights and a roof and a heater, all of which protect you from caring about the fact that it’s raining (they abstract away the weather), but lo, you have to worry about hydroplaning (or aquaplaning in England) and sometimes the rain is so strong you can’t see very far ahead so you go slower in the rain, because the weather can never be completely abstracted away, because of the law of leaky abstractions.
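
A rough way to see the first bullet above (iterating over a 2-D array) for yourself, using numpy; the exact ratio depends on the machine and on numpy internals, so this is a sketch rather than a benchmark:

```python
# Sum a large row-major (C-ordered) array by rows vs. by columns; column slices
# are strided, so they touch memory non-contiguously and typically run slower.
import time
import numpy as np

a = np.random.rand(4000, 4000)  # C order (row-major) by default, ~128 MB

t0 = time.perf_counter()
row_sums = [a[i, :].sum() for i in range(a.shape[0])]   # contiguous reads
t1 = time.perf_counter()
col_sums = [a[:, j].sum() for j in range(a.shape[1])]   # strided reads
t2 = time.perf_counter()

print(f"row-wise:    {t1 - t0:.3f}s")
print(f"column-wise: {t2 - t1:.3f}s")
```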

One reason the law of leaky abstractions is problematic is that it means that abstractions do not really simplify our lives as much as they were meant to. When I’m training someone to be a C++ programmer, it would be nice if I never had to teach them about char*’s and pointer arithmetic. It would be nice if I could go straight to STL strings. But one day they’ll write the code “foo” + “bar”, and truly bizarre things will happen, and then I’ll have to stop and teach them all about char*’s anyway.

...

The law of leaky abstractions means that whenever somebody comes up with a wizzy new code-generation tool that is supposed to make us all ever-so-efficient, you hear a lot of people saying “learn how to do it manually first, then use the wizzy tool to save time.” Code generation tools which pretend to abstract out something, like all abstractions, leak, and the only way to deal with the leaks competently is to learn about how the abstractions work and what they are abstracting. So the abstractions save us time working, but they don’t save us time learning.
techtariat  org:com  working-stiff  essay  programming  cs  software  abstraction  worrydream  thinking  intricacy  degrees-of-freedom  networking  examples  traces  no-go  volo-avolo  tradeoffs  c(pp)  pls  strings  dbs  transportation  driving  analogy  aphorism  learning  paradox  systems  elegance  nitty-gritty  concrete  cracker-prog  metal-to-virtual  protocol-metadata  design  system-design 
july 2019 by nhaliday
data structures - Why are Red-Black trees so popular? - Computer Science Stack Exchange
- AVL trees have smaller average depth than red-black trees, and thus searching for a value in AVL tree is consistently faster.
- Red-black trees make fewer structural changes to balance themselves than AVL trees, which could make them potentially faster for insert/delete. I'm saying potentially, because this would depend on the cost of the structural change to the tree, as this will depend a lot on the runtime and implementation (might also be completely different in a functional language when the tree is immutable?)

There are many benchmarks online that compare AVL and Red-black trees, but what struck me is that my professor basically said, that usually you'd do one of two things:
- Either you don't really care that much about performance, in which case the 10-20% difference of AVL vs Red-black in most cases won't matter at all.
- Or you really care about performance, in which case you'd ditch both AVL and Red-black trees, and go with B-trees, which can be tweaked to work much better (or (a,b)-trees, I'm gonna put all of those in one basket.)

--

> For some kinds of binary search trees, including red-black trees but not AVL trees, the "fixes" to the tree can fairly easily be predicted on the way down and performed during a single top-down pass, making the second pass unnecessary. Such insertion algorithms are typically implemented with a loop rather than recursion, and often run slightly faster in practice than their two-pass counterparts.

So a red-black tree insert can be implemented without recursion; on some CPUs recursion is very expensive if you overrun the function call cache (e.g. SPARC, due to its use of register windows)

--

There are some cases where you can't use B-trees at all.

One prominent case is std::map from C++ STL. The standard requires that insert does not invalidate existing iterators

...

I also believe that "single pass tail recursive" implementation is not the reason for red black tree popularity as a mutable data structure.

First of all, stack depth is irrelevant here, because (given log n height) you would run out of main memory before you run out of stack space. Jemalloc is happy with preallocating worst-case depth on the stack.
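
A back-of-the-envelope illustration of why stack depth is a non-issue, using the standard worst-case height bounds (roughly 1.44·lg n for AVL trees and 2·lg n for red-black trees):

```python
# Worst-case heights stay tiny even for absurd n, so recursion depth is never
# the binding constraint; memory for the nodes runs out long before the stack.
import math

for n in (10**6, 10**9, 2**48):
    lg = math.log2(n)
    print(f"n = {n:>20,}   AVL height <= ~{1.44 * lg:5.1f}   red-black height <= ~{2 * lg:5.1f}")
```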
nibble  q-n-a  overflow  cs  algorithms  tcs  data-structures  functional  orders  trees  cost-benefit  tradeoffs  roots  explanans  impetus  performance  applicability-prereqs  programming  pls  c(pp)  ubiquity 
june 2019 by nhaliday
Hardware is unforgiving
Today, anyone with a CS 101 background can take Geoffrey Hinton's course on neural networks and deep learning, and start applying state of the art machine learning techniques in production within a couple months. In software land, you can fix minor bugs in real time. If it takes a whole day to run your regression test suite, you consider yourself lucky because it means you're in one of the few environments that takes testing seriously. If the architecture is fundamentally flawed, you pull out your copy of Feathers' “Working Effectively with Legacy Code” and you apply minor fixes until you're done.

This isn't to say that software isn't hard, it's just a different kind of hard: the sort of hard that can be attacked with genius and perseverance, even without experience. But, if you want to build a ship, and you "only" have a decade of experience with carpentry, milling, metalworking, etc., well, good luck. You're going to need it. With a large ship, “minor” fixes can take days or weeks, and a fundamental flaw means that your ship sinks and you've lost half a year of work and tens of millions of dollars. By the time you get to something with the complexity of a modern high-performance microprocessor, a minor bug discovered in production costs three months and five million dollars. A fundamental flaw in the architecture will cost you five years and hundreds of millions of dollars.

Physical mistakes are costly. There's no undo and editing isn't simply a matter of pressing some keys; changes consume real, physical resources. You need enough wisdom and experience to avoid common mistakes entirely – especially the ones that can't be fixed.
techtariat  comparison  software  hardware  programming  engineering  nitty-gritty  realness  roots  explanans  startups  tech  sv  the-world-is-just-atoms  examples  stories  economics  heavy-industry  hard-tech  cs  IEEE  oceans  trade  korea  asia  recruiting  britain  anglo  expert-experience  growth-econ  world  developing-world  books  recommendations  intricacy  dan-luu  age-generation  system-design  correctness  metal-to-virtual  psycho-atoms  move-fast-(and-break-things)  kumbaya-kult 
june 2019 by nhaliday
Interview with Donald Knuth | Interview with Donald Knuth | InformIT
Andrew Binstock and Donald Knuth converse on the success of open source, the problem with multicore architecture, the disappointing lack of interest in literate programming, the menace of reusable code, and that urban legend about winning a programming contest with a single compilation.

Reusable vs. re-editable code: https://hal.archives-ouvertes.fr/hal-01966146/document
- Konrad Hinsen

https://www.johndcook.com/blog/2008/05/03/reusable-code-vs-re-editable-code/
I think whether code should be editable or in “an untouchable black box” depends on the number of developers involved, as well as their talent and motivation. Knuth is a highly motivated genius working in isolation. Most software is developed by large teams of programmers with varying degrees of motivation and talent. I think the further you move away from Knuth along these three axes the more important black boxes become.
nibble  interview  giants  expert-experience  programming  cs  software  contrarianism  carmack  oss  prediction  trends  linux  concurrency  desktop  comparison  checking  debugging  stories  engineering  hmm  idk  algorithms  books  debate  flux-stasis  duplication  parsimony  best-practices  writing  documentation  latex  intricacy  structure  hardware  caching  workflow  editors  composition-decomposition  coupling-cohesion  exposition  technical-writing  thinking  cracker-prog  code-organizing  grokkability  multi  techtariat  commentary  pdf  reflection  essay  examples  python  data-science  libraries  grokkability-clarity 
june 2019 by nhaliday
Bareiss algorithm - Wikipedia
During the execution of the Bareiss algorithm, every integer that is computed is the determinant of a submatrix of the input matrix. This allows, using the Hadamard inequality, bounding the size of these integers. Otherwise, the Bareiss algorithm may be viewed as a variant of Gaussian elimination and needs roughly the same number of arithmetic operations.
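
A minimal sketch of fraction-free (Bareiss) elimination for an integer matrix; every division below is exact, which is what keeps the intermediate values integral (and, being subdeterminants, bounded via the Hadamard inequality). My own illustration, assuming exact integer input:

```python
# Fraction-free Gaussian elimination (Bareiss): the determinant of an integer
# matrix computed using only integer arithmetic; each // is an exact division.
def bareiss_det(matrix):
    m = [row[:] for row in matrix]          # work on a copy
    n = len(m)
    sign, prev = 1, 1
    for k in range(n - 1):
        if m[k][k] == 0:                    # pivot: find a nonzero entry below
            for i in range(k + 1, n):
                if m[i][k] != 0:
                    m[k], m[i] = m[i], m[k]
                    sign = -sign
                    break
            else:
                return 0                    # whole column is zero => det = 0
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                m[i][j] = (m[i][j] * m[k][k] - m[i][k] * m[k][j]) // prev
            m[i][k] = 0
        prev = m[k][k]
    return sign * m[n - 1][n - 1]

print(bareiss_det([[2, 3, 1], [4, 7, 7], [6, 18, 22]]))  # -52
```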
nibble  ground-up  cs  tcs  algorithms  complexity  linear-algebra  numerics  sci-comp  fields  calculation  nitty-gritty 
june 2019 by nhaliday
What's the expected level of paper for top conferences in Computer Science - Academia Stack Exchange
Top. The top level.

My experience on program committees for STOC, FOCS, ITCS, SODA, SOCG, etc., is that there are FAR more submissions of publishable quality than can be accepted into the conference. By "publishable quality" I mean a well-written presentation of a novel, interesting, and non-trivial result within the scope of the conference.

...

There are several questions that come up over and over in the FOCS/STOC review cycle:

- How surprising / novel / elegant / interesting is the result?
- How surprising / novel / elegant / interesting / general are the techniques?
- How technically difficult is the result? Ironically, FOCS and STOC committees have a reputation for ignoring the distinction between trivial (easy to derive from scratch) and nondeterministically trivial (easy to understand after the fact).
- What is the expected impact of this result? Is this paper going to change the way people do theoretical computer science over the next five years?
- Is the result of general interest to the theoretical computer science community? Or is it only of interest to a narrow subcommunity? In particular, if the topic is outside the STOC/FOCS mainstream—say, for example, computational topology—does the paper do a good job of explaining and motivating the results to a typical STOC/FOCS audience?
nibble  q-n-a  overflow  academia  tcs  cs  meta:research  publishing  scholar  lens  properties  cost-benefit  analysis  impetus  increase-decrease  soft-question  motivation  proofs  search  complexity  analogy  problem-solving  elegance  synthesis  hi-order-bits  novelty  discovery 
june 2019 by nhaliday
Analysis of Current and Future Computer Science Needs via Advertised Faculty Searches for 2019 - CRN
Differences are also seen when analyzing results based on the type of institution. Positions related to Security have the highest percentages for all but top-100 institutions. The area of Artificial Intelligence/Data Mining/Machine Learning is of most interest for top-100 PhD institutions. Roughly 35% of positions for PhD institutions are in data-oriented areas. The results show a strong interest in data-oriented areas by public PhD and private PhD, MS, and BS institutions while public MS and BS institutions are most interested in Security.
org:edu  data  analysis  visualization  trends  recruiting  jobs  career  planning  academia  higher-ed  cs  tcs  machine-learning  systems  pro-rata  measure  long-term  🎓  uncertainty  progression  grad-school  phd  distribution  ranking  top-n  security  status  s-factor  comparison  homo-hetero  correlation  org:ngo  white-paper  cost-benefit 
june 2019 by nhaliday
algorithm, algorithmic, algorithmicx, algorithm2e, algpseudocode = confused - TeX - LaTeX Stack Exchange
algorithm2e is the only one currently maintained, but answerer prefers style of algorithmicx, and after perusing the docs, so do I
q-n-a  stackex  libraries  list  recommendations  comparison  publishing  cs  programming  algorithms  tools 
june 2019 by nhaliday
Philip Guo - Research Design Patterns
List of ways to generate research directions. Some are pretty specific to applied CS.
techtariat  nibble  academia  meta:research  scholar  cs  systems  list  top-n  checklists  ideas  creative  frontier  memes(ew)  info-dynamics  innovation  novelty  the-trenches  tactics 
may 2019 by nhaliday
Theory of Self-Reproducing Automata - John von Neumann
Fourth Lecture: THE ROLE OF HIGH AND OF EXTREMELY HIGH COMPLICATION

Comparisons between computing machines and the nervous systems. Estimates of size for computing machines, present and near future.

Estimates for size for the human central nervous system. Excursus about the “mixed” character of living organisms. Analog and digital elements. Observations about the “mixed” character of all componentry, artificial as well as natural. Interpretation of the position to be taken with respect to these.

Evaluation of the discrepancy in size between artificial and natural automata. Interpretation of this discrepancy in terms of physical factors. Nature of the materials used.

The probability of the presence of other intellectual factors. The role of complication and the theoretical penetration that it requires.

Questions of reliability and errors reconsidered. Probability of individual errors and length of procedure. Typical lengths of procedure for computing machines and for living organisms--that is, for artificial and for natural automata. Upper limits on acceptable probability of error in individual operations. Compensation by checking and self-correcting features.

Differences of principle in the way in which errors are dealt with in artificial and in natural automata. The “single error” principle in artificial automata. Crudeness of our approach in this case, due to the lack of adequate theory. More sophisticated treatment of this problem in natural automata: The role of the autonomy of parts. Connections between this autonomy and evolution.

- 10^10 neurons in brain, 10^4 vacuum tubes in largest computer at time
- machines faster: 5 ms from neuron potential to neuron potential, 10^-3 ms for vacuum tubes

https://en.wikipedia.org/wiki/John_von_Neumann#Computing
pdf  article  papers  essay  nibble  math  cs  computation  bio  neuro  neuro-nitgrit  scale  magnitude  comparison  acm  von-neumann  giants  thermo  phys-energy  speed  performance  time  density  frequency  hardware  ems  efficiency  dirty-hands  street-fighting  fermi  estimate  retention  physics  interdisciplinary  multi  wiki  links  people  🔬  atoms  duplication  iteration-recursion  turing  complexity  measure  nature  technology  complex-systems  bits  information-theory  circuits  robust  structure  composition-decomposition  evolution  mutation  axioms  analogy  thinking  input-output  hi-order-bits  coding-theory  flexibility  rigidity  automata-languages 
april 2018 by nhaliday
Complexity no Bar to AI - Gwern.net
Critics of AI risk suggest diminishing returns to computing (formalized asymptotically) means AI will be weak; this argument relies on a large number of questionable premises and ignoring additional resources, constant factors, and nonlinear returns to small intelligence advantages, and is highly unlikely. (computer science, transhumanism, AI, R)
created: 1 June 2014; modified: 01 Feb 2018; status: finished; confidence: likely; importance: 10
ratty  gwern  analysis  faq  ai  risk  speedometer  intelligence  futurism  cs  computation  complexity  tcs  linear-algebra  nonlinearity  convexity-curvature  average-case  adversarial  article  time-complexity  singularity  iteration-recursion  magnitude  multiplicative  lower-bounds  no-go  performance  hardware  humanity  psychology  cog-psych  psychometrics  iq  distribution  moments  complement-substitute  hanson  ems  enhancement  parable  detail-architecture  universalism-particularism  neuro  ai-control  environment  climate-change  threat-modeling  security  theory-practice  hacker  academia  realness  crypto  rigorous-crypto  usa  government 
april 2018 by nhaliday
AI-complete - Wikipedia
In the field of artificial intelligence, the most difficult problems are informally known as AI-complete or AI-hard, implying that the difficulty of these computational problems is equivalent to that of solving the central artificial intelligence problem—making computers as intelligent as people, or strong AI.[1] To call a problem AI-complete reflects an attitude that it would not be solved by a simple specific algorithm.

AI-complete problems are hypothesised to include computer vision, natural language understanding, and dealing with unexpected circumstances while solving any real world problem.[2]

Currently, AI-complete problems cannot be solved with modern computer technology alone, but would also require human computation. This property can be useful, for instance to test for the presence of humans as with CAPTCHAs, and for computer security to circumvent brute-force attacks.[3][4]

...

AI-complete problems are hypothesised to include:

Bongard problems
Computer vision (and subproblems such as object recognition)
Natural language understanding (and subproblems such as text mining, machine translation, and word sense disambiguation[8])
Dealing with unexpected circumstances while solving any real world problem, whether it's navigation or planning or even the kind of reasoning done by expert systems.

...

Current AI systems can solve very simple and/or restricted versions of AI-complete problems, but never in their full generality. When AI researchers attempt to "scale up" their systems to handle more complicated, real world situations, the programs tend to become excessively brittle without commonsense knowledge or a rudimentary understanding of the situation: they fail as unexpected circumstances outside of its original problem context begin to appear. When human beings are dealing with new situations in the world, they are helped immensely by the fact that they know what to expect: they know what all things around them are, why they are there, what they are likely to do and so on. They can recognize unusual situations and adjust accordingly. A machine without strong AI has no other skills to fall back on.[9]
concept  reduction  cs  computation  complexity  wiki  reference  properties  computer-vision  ai  risk  ai-control  machine-learning  deep-learning  language  nlp  order-disorder  tactics  strategy  intelligence  humanity  speculation  crux 
march 2018 by nhaliday
[1410.0369] The Universe of Minds
kinda dumb, don't think this guy is anywhere close to legit (e.g., he claims set of mind designs is countable, but gives no actual reason to believe that)
papers  preprint  org:mat  ratty  miri-cfar  ai  intelligence  philosophy  logic  software  cs  computation  the-self 
march 2018 by nhaliday
If Quantum Computers are not Possible Why are Classical Computers Possible? | Combinatorics and more
As most of my readers know, I regard quantum computing as unrealistic. You can read more about it in my Notices AMS paper and its extended version (see also this post) and in the discussion of Puzzle 4 from my recent puzzles paper (see also this post). The amazing progress and huge investment in quantum computing (that I present and update routinely in this post) will put my analysis to the test in the next few years.
tcstariat  mathtariat  org:bleg  nibble  tcs  cs  computation  quantum  volo-avolo  no-go  contrarianism  frontier  links  quantum-info  analogy  comparison  synthesis  hi-order-bits  speedometer  questions  signal-noise 
november 2017 by nhaliday
Americans Used to be Proud of their Universities | The American Conservative
Some Notes on the Finances of Top Chinese Universities: https://www.insidehighered.com/blogs/world-view/some-notes-finances-top-chinese-universities
A glimpse into the finances of top Chinese universities suggests they share more than we might have imagined with American flagship public universities, but also that claims of imminent “catch up” might be overblown
news  org:mag  right-wing  reflection  history  early-modern  pre-ww2  mostly-modern  europe  germanic  britain  gibbon  trends  rot  zeitgeist  usa  china  asia  sinosphere  higher-ed  academia  westminster  comparison  analogy  multi  org:edu  money  monetary-fiscal  data  analysis  pro-rata  cs  tech  realness  social-science  the-world-is-just-atoms  science  innovation  is-ought  truth  identity-politics 
october 2017 by nhaliday
Definite optimism as human capital | Dan Wang
I’ve come to the view that creativity and innovative capacity aren’t a fixed stock, coiled and waiting to be released by policy. Now, I know that a country will not do well if it has poor infrastructure, interest rate management, tax and regulation levels, and a whole host of other issues. But getting them right isn’t sufficient to promote innovation; past a certain margin, when they’re all at rational levels, we ought to focus on promoting creativity and drive as a means to propel growth.

...

When I say “positive” vision, I don’t mean that people must see the future as a cheerful one. Instead, I’m saying that people ought to have a vision at all: A clear sense of how the technological future will be different from today. To have a positive vision, people must first expand their imaginations. And I submit that an interest in science fiction, the material world, and proximity to industry all help to refine that optimism. I mean to promote imagination by direct injection.

...

If a state has lost most of its jobs for electrical engineers, or nuclear engineers, or mechanical engineers, then fewer young people in that state will study those practices, and technological development in related fields slows down a little further. When I bring up these thoughts on resisting industrial decline to economists, I’m unsatisfied with their responses. They tend to respond by tautology (“By definition, outsourcing improves on the status quo”) or arithmetic (see: gains from comparative advantage, Ricardo). These kinds of logical exercises are not enough. I would like for more economists to consider a human capital perspective for preserving manufacturing expertise (to some degree).

I wonder if the so-called developed countries should be careful of their own premature deindustrialization. The US industrial base has faltered, but there is still so much left to build. Until we’ve perfected asteroid mining and super-skyscrapers and fusion rockets and Jupiter colonies and matter compilers, we can’t be satisfied with innovation confined mostly to the digital world.

Those who don’t mind the decline of manufacturing employment like to say that people have moved on to higher-value work. But I’m not sure that this is usually the case. Even if there’s an endlessly capacious service sector to absorb job losses in manufacturing, it’s often the case that these new jobs feature lower productivity growth and involve greater rent-seeking. Not everyone is becoming hedge fund managers and machine learning engineers. According to BLS, the bulk of service jobs are in 1. government (22 million), 2. professional services (19m), 3. healthcare (18m), 4. retail (15m), and 5. leisure and hospitality (15m). In addition to being often low-paying but still competitive, a great deal of service sector jobs tend to stress capacity for emotional labor over capacity for manual labor. And it’s the latter that tends to be more present in fields involving technological upgrading.

...

Here’s a bit more skepticism of service jobs. In an excellent essay on declining productivity growth, Adair Turner makes the point that many service jobs are essentially zero-sum. I’d like to emphasize and elaborate on that idea here.

...

Call me a romantic, but I’d like everyone to think more about industrial lubricants, gas turbines, thorium reactors, wire production, ball bearings, underwater cables, and all the things that power our material world. I abide by a strict rule never to post or tweet about current political stuff; instead I try to draw more attention to the world of materials. And I’d like to remind people that there are many things more edifying than following White House scandals.

...

First, we can all try to engage more actively with the material world, not merely the digital or natural world. Go ahead and pick an industrial phenomenon and learn more about it. Learn more about the history of aviation, and what it took to break the sound barrier; gaze at the container ships as they sail into port, and keep in mind that they carry 90 percent of the goods you see around you; read about what we mold plastics to do; meditate on the importance of steel in civilization; figure out what’s driving the decline in the cost of solar energy production, or how we draw electricity from nuclear fission, or what it takes to extract petroleum or natural gas from the ground.

...

Here’s one more point that I’d like to add on Girard at college: I wonder if to some extent current dynamics are the result of the liberal arts approach of “college teaches you how to think, not what to think.” I’ve never seen much data to support this wonderful claim that college is good at teaching critical thinking skills. Instead, students spend most of their energies focused on raising or lowering the status of the works they study or the people around them, giving rise to the Girardian terror that has gripped so many campuses.

College as an incubator of Girardian terror: http://danwang.co/college-girardian-terror/
It’s hard to construct a more perfect incubator for mimetic contagion than the American college campus. Most 18-year-olds are not super differentiated from each other. By construction, whatever distinctions any does have are usually earned through brutal, zero-sum competitions. These tournament-type distinctions include: SAT scores at or near perfection; being a top player on a sports team; gaining master status from chess matches; playing first instrument in state orchestra; earning high rankings in Math Olympiad; and so on, culminating in gaining admission to a particular college.

Once people enter college, they get socialized into group environments that usually continue to operate in zero-sum competitive dynamics. These include orchestras and sport teams; fraternities and sororities; and many types of clubs. The biggest source of mimetic pressures are the classes. Everyone starts out by taking the same intro classes; those seeking distinction throw themselves into the hardest classes, or seek tutelage from star professors, and try to earn the highest grades.

Mimesis Machines and Millennials: http://quillette.com/2017/11/02/mimesis-machines-millennials/
In 1956, a young Liverpudlian named John Winston Lennon heard the mournful notes of Elvis Presley’s Heartbreak Hotel, and was transformed. He would later recall, “nothing really affected me until I heard Elvis. If there hadn’t been an Elvis, there wouldn’t have been the Beatles.” It is an ancient human story. An inspiring model, an inspired imitator, and a changed world.

Mimesis is the phenomenon of human mimicry. Humans see, and they strive to become what they see. The prolific Franco-Californian philosopher René Girard described the human hunger for imitation as mimetic desire. According to Girard, mimetic desire is a mighty psychosocial force that drives human behavior. When attempted imitation fails, (i.e. I want, but fail, to imitate my colleague’s promotion to VP of Business Development), mimetic rivalry arises. According to mimetic theory, periodic scapegoating—the ritualistic expelling of a member of the community—evolved as a way for archaic societies to diffuse rivalries and maintain the general peace.

As civilization matured, social institutions evolved to prevent conflict. To Girard, sacrificial religious ceremonies first arose as imitations of earlier scapegoating rituals. From the mimetic worldview, healthy social institutions perform two primary functions:

They satisfy mimetic desire and reduce mimetic rivalry by allowing imitation to take place.
They thereby reduce the need to diffuse mimetic rivalry through scapegoating.
Tranquil societies possess and value institutions that are mimesis tolerant. These institutions, such as religion and family, are Mimesis Machines. They enable millions to see, imitate, and become new versions of themselves. Mimesis Machines satiate the primal desire for imitation, and produce happy, contented people. Through Mimesis Machines, Elvis fans can become Beatles.

Volatile societies, on the other hand, possess and value mimesis resistant institutions that frustrate attempts at mimicry, and mass produce frustrated, resentful people. These institutions, such as capitalism and beauty hierarchies, are Mimesis Shredders. They stratify humanity, and block the ‘nots’ from imitating the ‘haves’.
techtariat  venture  commentary  reflection  innovation  definite-planning  thiel  barons  economics  growth-econ  optimism  creative  malaise  stagnation  higher-ed  status  error  the-world-is-just-atoms  heavy-industry  sv  zero-positive-sum  japan  flexibility  china  outcome-risk  uncertainty  long-short-run  debt  trump  entrepreneurialism  human-capital  flux-stasis  cjones-like  scifi-fantasy  labor  dirty-hands  engineering  usa  frontier  speedometer  rent-seeking  econ-productivity  government  healthcare  essay  rhetoric  contrarianism  nascent-state  unintended-consequences  volo-avolo  vitality  technology  tech  cs  cycles  energy-resources  biophysical-econ  trends  zeitgeist  rot  alt-inst  proposal  multi  news  org:mag  org:popup  philosophy  big-peeps  speculation  concept  religion  christianity  theos  buddhism  politics  polarization  identity-politics  egalitarianism-hierarchy  inequality  duplication  society  anthropology  culture-war  westminster  info-dynamics  tribalism  institutions  envy  age-generation  letters  noble-lie 
october 2017 by nhaliday
Merkle tree - Wikipedia
In cryptography and computer science, a hash tree or Merkle tree is a tree in which every non-leaf node is labelled with the hash of the labels or values (in case of leaves) of its child nodes.
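
A minimal Python sketch of the definition (illustrative only; real systems like Bitcoin fix additional conventions, e.g. double SHA-256 and a particular rule for odd-sized levels):

```python
# Each leaf is hashed, and each internal node is the hash of the concatenation
# of its two children's hashes; the last value remaining is the Merkle root.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2 == 1:          # one convention: duplicate the last node
            level.append(level[-1])
        level = [h(left + right) for left, right in zip(level[::2], level[1::2])]
    return level[0]

print(merkle_root([b"tx1", b"tx2", b"tx3"]).hex())
```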
concept  cs  data-structures  bitcoin  cryptocurrency  blockchain  atoms  wiki  reference  nibble  hashing  ideas  crypto  rigorous-crypto  protocol-metadata 
june 2017 by nhaliday
Peter Norvig, the meaning of polynomials, debugging as psychotherapy | Quomodocumque
He briefly showed a demo where, given values of a polynomial, a machine can put together a few lines of code that successfully computes the polynomial. But the code looks weird to a human eye. To compute some quadratic, it nests for-loops and adds things up in a funny way that ends up giving the right output. So has it really ”learned” the polynomial? I think in computer science, you typically feel you’ve learned a function if you can accurately predict its value on a given input. For an algebraist like me, a function determines but isn’t determined by the values it takes; to me, there’s something about that quadratic polynomial the machine has failed to grasp. I don’t think there’s a right or wrong answer here, just a cultural difference to be aware of. Relevant: Norvig’s description of “the two cultures” at the end of this long post on natural language processing (which is interesting all the way through!)
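
One way to see the two senses of “learning” side by side (a toy illustration of my own, not Norvig’s demo): a lookup table predicts the sampled values perfectly but knows nothing off the grid, while polynomial fitting recovers the coefficients the algebraist cares about.

```python
# Toy contrast: memorizing sample values vs. recovering the polynomial itself.
import numpy as np

xs = np.array([0, 1, 2, 3, 4], dtype=float)
ys = xs**2 + 1                      # "unknown" quadratic p(x) = x^2 + 1

# Sense 1: predict values you have seen (a pure lookup table).
table = dict(zip(xs, ys))
print(table[3.0])                   # 10.0, perfect on the training points
print(table.get(10.0, "no idea"))   # fails off the grid

# Sense 2: determine the function (recover coefficients, here by least squares).
coeffs = np.polyfit(xs, ys, deg=2)
print(np.round(coeffs, 6))          # ~[1, 0, 1]  ->  x^2 + 0x + 1
print(np.polyval(coeffs, 10.0))     # ~101, extrapolates correctly
```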
mathtariat  org:bleg  nibble  tech  ai  talks  summary  philosophy  lens  comparison  math  cs  tcs  polynomials  nlp  debugging  psychology  cog-psych  complex-systems  deep-learning  analogy  legibility  interpretability  composition-decomposition  coupling-cohesion  apollonian-dionysian  heavyweights 
march 2017 by nhaliday
Taulbee Survey - CRA
- about 30% academic, 10% tenure-track for both ML and theory
- for industry flow, it's about 60% research for ML and 40% research for theory (presumably research in something that's not theory for the most part)
- so overall 60-70% w/ some kind of research career
grad-school  phd  data  planning  long-term  cs  schools  🎓  objektbuch  poll  transitions  progression 
february 2017 by nhaliday
inequalities - Is the Jaccard distance a distance? - MathOverflow
Steinhaus Transform
the referenced survey: http://kenclarkson.org/nn_survey/p.pdf

It's known that this transformation produces a metric from a metric. Now if you take as the base metric D the symmetric difference between two sets, what you end up with is the Jaccard distance (which actually is known by many other names as well).
q-n-a  overflow  nibble  math  acm  sublinear  metrics  metric-space  proofs  math.CO  tcstariat  arrows  reduction  measure  math.MG  similarity  multi  papers  survey  computational-geometry  cs  algorithms  pdf  positivity  msr  tidbits  intersection  curvature  convexity-curvature  intersection-connectedness  signum 
february 2017 by nhaliday
Paperscape
- includes physics, cs, etc.
- CS is _a lot_ smaller, or at least has much lower citation counts
- size = number citations, placement = citation network structure
papers  publishing  science  meta:science  data  visualization  network-structure  big-picture  dynamic  exploratory  🎓  physics  cs  math  hi-order-bits  survey  visual-understanding  preprint  aggregator  database  search  maps  zooming  metameta  scholar-pack  🔬  info-dynamics  scale  let-me-see  chart 
february 2017 by nhaliday
Information Processing: Machine Dreams
This is a controversial book because it demolishes not just the conventional history of the discipline, but its foundational assumptions. For example, once you start thinking about the information processing requirements that each agent (or even the entire system) must satisfy to find the optimal neoclassical equilibrium points, you realize the task is impossible. In fact, in some cases it has been rigorously shown to be beyond the capability of any universal Turing machine. Certainly, it seems beyond the plausible capabilities of a primitive species like homo sapiens. Once this bounded rationality (see also here) is taken into account, the whole notion of optimality of market equilibrium becomes far-fetched and speculative. It cannot be justified in any formal sense, and therefore cries out for experimental justification, which is not to be found.

I like this quote: This polymath who prognosticated that "science and technology would shift from a past emphasis on subjects of motion, force and energy to a future emphasis on subjects of communications, organization, programming and control," was spot on the money.
hsu  scitariat  economics  cs  computation  interdisciplinary  map-territory  models  market-failure  von-neumann  giants  history  quotes  links  debate  critique  review  big-picture  turing  heterodox  complex-systems  lens  s:*  books  🎩  thinking  markets  bounded-cognition 
february 2017 by nhaliday