nhaliday + worrydream   145

How I Choose What To Read — David Perell
READING HEURISTICS
1. TRUST RECOMMENDATIONS — BUT NOT TOO MUCH
2. TAME THE THRILLERS
3. BLEND A BIZARRE BOWL
4. TRUST THE LINDY EFFECT
5. FAVOR BIOGRAPHIES OVER SELF-HELP
unaffiliated  advice  reflection  checklists  metabuch  learning  studying  info-foraging  skeleton  books  heuristic  contrarianism  ubiquity  time  track-record  thinking  blowhards  bret-victor  worrydream  list  top-n  recommendations  arbitrage  trust  aphorism  meta:reading  prioritizing  judgement 
4 weeks ago by nhaliday
Ask HN: Favorite note-taking software? | Hacker News
Ask HN: What is your ideal note-taking software and/or hardware?: https://news.ycombinator.com/item?id=13221158

my wishlist as of 2019:
- web + desktop macOS + mobile iOS (at least viewing on the last but ideally also editing)
- sync across all those
- open-source data format that's easy to manipulate for scripting purposes
- flexible organization: mostly tree hierarchical (subsuming linear/unorganized) but with the option for directed (acyclic) graph (possibly a second layer of structure/linking)
- can store plain text, LaTeX, diagrams, and raster/vector images (video prob not necessary except as links to elsewhere)
- full-text search
- somehow digest/import data from Pinboard, Workflowy, Papers 3/Bookends, and Skim, ideally absorbing most of their functionality
- so, eg, track notes/annotations side-by-side w/ original PDF/DjVu/ePub documents (to replace Papers3/Bookends/Skim), and maybe web pages too (to replace Pinboard)
- OCR of handwritten notes (how to handle equations/diagrams?)
- various forms of NLP analysis of everything (topic models, clustering, etc; see the sketch after this list)
- maybe version control (less important than export)
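a rough sketch of what the open-data-format plus NLP items buy you (assumes notes as .md files and scikit-learn; the notes/ path and all parameters are illustrative):

# Rough sketch: topic-model a folder of plain-text/Markdown notes.
# Assumes scikit-learn; "notes/" and all parameters are illustrative.
from pathlib import Path

from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = [p.read_text(encoding="utf-8") for p in Path("notes").glob("**/*.md")]

vec = CountVectorizer(stop_words="english", max_df=0.9, min_df=2)
counts = vec.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=10, random_state=0)
lda.fit(counts)

# Print the top words of each inferred topic.
words = vec.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top = [words[j] for j in topic.argsort()[-8:][::-1]]
    print(f"topic {i}: {' '.join(top)}")

this is the payoff of the open-format requirement: anything a Python library can read is fair game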

candidates?:
- Evernote prob ruled out due to heavy use of proprietary data formats (unless I can find some way to export with tolerably clean output)
- Workflowy/Dynalist are good but only cover a subset of functionality I want
- org-mode doesn't interact w/ mobile well (and I haven't evaluated it in detail otherwise)
- TiddlyWiki/Zim are in the running, but not sure about mobile
- idk about vimwiki but I'm not that wedded to vim and it seems less widely used than org-mode/TiddlyWiki/Zim so prob pass on that
- Quiver/Joplin/Inkdrop look similar and cover a lot of bases, TODO: evaluate more
- Trilium looks especially promising, tho read-only mobile and for macOS desktop look at this: https://github.com/zadam/trilium/issues/511
- RocketBook is interesting scanning/OCR solution but prob not sufficient due to proprietary data format
- TODO: many more candidates, eg, TreeSheets, Gingko, OneNote (macOS?...), Notion (proprietary data format...), Zotero, Nodebook (https://nodebook.io/landing), Polar (https://getpolarized.io), Roam (looks very promising)

Ask HN: What do you use for your personal note taking activity?: https://news.ycombinator.com/item?id=15736102

Ask HN: What are your note-taking techniques?: https://news.ycombinator.com/item?id=9976751

Ask HN: How do you take notes (useful note-taking strategies)?: https://news.ycombinator.com/item?id=13064215

Ask HN: How to get better at taking notes?: https://news.ycombinator.com/item?id=21419478

Ask HN: How did you build up your personal knowledge base?: https://news.ycombinator.com/item?id=21332957
nice comment from math guy on structure and difference between math and CS: https://news.ycombinator.com/item?id=21338628
useful comment collating related discussions: https://news.ycombinator.com/item?id=21333383
highlights:
Designing a Personal Knowledge base: https://news.ycombinator.com/item?id=8270759
Ask HN: How to organize personal knowledge?: https://news.ycombinator.com/item?id=17892731
Do you use a personal 'knowledge base'?: https://news.ycombinator.com/item?id=21108527
Ask HN: How do you share/organize knowledge at work and life?: https://news.ycombinator.com/item?id=21310030

other stuff:
plain text: https://news.ycombinator.com/item?id=21685660

https://www.getdnote.com/blog/how-i-built-personal-knowledge-base-for-myself/
Tiago Forte: https://www.buildingasecondbrain.com

hn search: https://hn.algolia.com/?query=notetaking&type=story

Slant comparison commentary: https://news.ycombinator.com/item?id=7011281

good comparison of options here in comments here (and Trilium itself looks good): https://news.ycombinator.com/item?id=18840990

https://en.wikipedia.org/wiki/Comparison_of_note-taking_software

wikis:
https://www.slant.co/versus/5116/8768/~tiddlywiki_vs_zim
https://www.wikimatrix.org/compare/tiddlywiki+zim
http://tiddlymap.org/
https://www.zim-wiki.org/manual/Plugins/BackLinks_Pane.html
https://zim-wiki.org/manual/Plugins/Link_Map.html

apps:
Roam: https://news.ycombinator.com/item?id=21440289

intriguing but probably not appropriate for my needs: https://www.sophya.ai/

Inkdrop: https://news.ycombinator.com/item?id=20103589

Joplin: https://news.ycombinator.com/item?id=15815040
https://news.ycombinator.com/item?id=21555238

https://wreeto.com/

Leo Editor (combines tree outlining w/ literate programming/scripting, I think?): https://news.ycombinator.com/item?id=17769892

Frame: https://news.ycombinator.com/item?id=18760079

https://www.reddit.com/r/TheMotte/comments/cb18sy/anyone_use_a_personal_wiki_software_to_catalog/
https://archive.is/xViTY
Notion: https://news.ycombinator.com/item?id=18904648

https://www.reddit.com/r/slatestarcodex/comments/ap437v/modified_cornell_method_the_optimal_notetaking/
https://archive.is/e9oHu
https://www.reddit.com/r/slatestarcodex/comments/bt8a1r/im_about_to_start_a_one_month_journaling_test/
https://www.reddit.com/r/slatestarcodex/comments/9cot3m/question_how_do_you_guys_learn_things/
https://archive.is/HUH8V
https://www.reddit.com/r/slatestarcodex/comments/d7bvcp/how_to_read_a_book_for_understanding/
https://archive.is/VL2mi

Anki:
https://www.reddit.com/r/Anki/comments/as8i4t/use_anki_for_technical_books/
https://www.freecodecamp.org/news/how-anki-saved-my-engineering-career-293a90f70a73/
https://www.reddit.com/r/slatestarcodex/comments/ch24q9/anki_is_it_inferior_to_the_3x5_index_card_an/
https://archive.is/OaGc5
maybe not the best source for a review/advice

interesting comment(s) about tree outliners and spreadsheets: https://news.ycombinator.com/item?id=21170434

tablet:
https://www.inkandswitch.com/muse-studio-for-ideas.html
https://www.inkandswitch.com/capstone-manuscript.html
https://news.ycombinator.com/item?id=20255457
hn  discussion  recommendations  software  tools  desktop  app  notetaking  exocortex  wkfly  wiki  productivity  multi  comparison  crosstab  properties  applicability-prereqs  nlp  info-foraging  chart  webapp  reference  q-n-a  retention  workflow  reddit  social  ratty  ssc  learning  studying  commentary  structure  thinking  network-structure  things  collaboration  ocr  trees  graphs  LaTeX  search  todo  project  money-for-time  synchrony  pinboard  state  duplication  worrydream  simplification-normalization  links  minimalism  design  neurons  ai-control  openai  miri-cfar  parsimony  intricacy 
9 weeks ago by nhaliday
Zettelkästen? | Hacker News
Here’s a LessWrong post that describes it (including the insight “I honestly didn’t think Zettelkasten sounded like a good idea before I tried it” which I also felt).

yeah doesn't sound like a good idea to me either. idk
hn  commentary  techtariat  germanic  productivity  workflow  notetaking  exocortex  gtd  explore-exploit  business  comparison  academia  tech  ratty  lesswrong  idk  thinking  neurons  network-structure  software  tools  app  metabuch  writing  trees  graphs  skeleton  meta:reading  wkfly  worrydream 
9 weeks ago by nhaliday
Python Tutor - Visualize Python, Java, C, C++, JavaScript, TypeScript, and Ruby code execution
C++ support but not STL

Ten years and nearly ten million users: my experience being a solo maintainer of open-source software in academia: http://www.pgbovine.net/python-tutor-ten-years.htm
I HYPERFOCUS ON ONE SINGLE USE CASE
I (MOSTLY*) DON'T LISTEN TO USER REQUESTS
I (MOSTLY*) REFUSE TO EVEN TALK TO USERS
I DON'T DO ANY MARKETING OR COMMUNITY OUTREACH
I KEEP EVERYTHING STATELESS
I DON'T WORRY ABOUT PERFORMANCE OR RELIABILITY
I USE SUPER OLD AND STABLE TECHNOLOGIES
I DON'T MAKE IT EASY FOR OTHERS TO USE MY CODE
FINALLY, I DON'T LET OTHER PEOPLE CONTRIBUTE CODE
UNINSPIRATIONAL PARTING THOUGHTS
APPENDIX: ON OPEN-SOURCE SOFTWARE MAINTENANCE
tools  devtools  worrydream  ux  hci  research  project  homepage  python  programming  c(pp)  javascript  jvm  visualization  software  internet  web  debugging  techtariat  state  form-design  multi  reflection  oss  shipping  community  collaboration  marketing  ubiquity  robust  worse-is-better/the-right-thing  links  performance  engineering  summary  list  top-n  pragmatic  cynicism-idealism 
september 2019 by nhaliday
The Law of Leaky Abstractions – Joel on Software
[TCP/IP example]

All non-trivial abstractions, to some degree, are leaky.

...

- Something as simple as iterating over a large two-dimensional array can have radically different performance if you do it horizontally rather than vertically, depending on the “grain of the wood” — one direction may result in vastly more page faults than the other direction, and page faults are slow. Even assembly programmers are supposed to be allowed to pretend that they have a big flat address space, but virtual memory means it’s really just an abstraction, which leaks when there’s a page fault and certain memory fetches take way more nanoseconds than other memory fetches. [a timing sketch follows this list]

- The SQL language is meant to abstract away the procedural steps that are needed to query a database, instead allowing you to define merely what you want and let the database figure out the procedural steps to query it. But in some cases, certain SQL queries are thousands of times slower than other logically equivalent queries. A famous example of this is that some SQL servers are dramatically faster if you specify “where a=b and b=c and a=c” than if you only specify “where a=b and b=c” even though the result set is the same. You’re not supposed to have to care about the procedure, only the specification. But sometimes the abstraction leaks and causes horrible performance and you have to break out the query plan analyzer and study what it did wrong, and figure out how to make your query run faster.

...

- C++ string classes are supposed to let you pretend that strings are first-class data. They try to abstract away the fact that strings are hard and let you act as if they were as easy as integers. Almost all C++ string classes overload the + operator so you can write s + “bar” to concatenate. But you know what? No matter how hard they try, there is no C++ string class on Earth that will let you type “foo” + “bar”, because string literals in C++ are always char*’s, never strings. The abstraction has sprung a leak that the language doesn’t let you plug. (Amusingly, the history of the evolution of C++ over time can be described as a history of trying to plug the leaks in the string abstraction. Why they couldn’t just add a native string class to the language itself eludes me at the moment.)

- And you can’t drive as fast when it’s raining, even though your car has windshield wipers and headlights and a roof and a heater, all of which protect you from caring about the fact that it’s raining (they abstract away the weather), but lo, you have to worry about hydroplaning (or aquaplaning in England) and sometimes the rain is so strong you can’t see very far ahead so you go slower in the rain, because the weather can never be completely abstracted away, because of the law of leaky abstractions.
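To make the first bullet concrete, a minimal Python/numpy timing sketch (illustrative; the exact ratio depends on the machine and array size):

# numpy arrays are row-major (C order) by default, so walking along rows
# touches contiguous memory while walking down columns strides across it.
import time

import numpy as np

a = np.zeros((5000, 5000))

t0 = time.perf_counter()
for i in range(a.shape[0]):   # row-wise: contiguous, cache-friendly
    a[i, :].sum()
row_time = time.perf_counter() - t0

t0 = time.perf_counter()
for j in range(a.shape[1]):   # column-wise: strided, cache-hostile
    a[:, j].sum()
col_time = time.perf_counter() - t0

print(f"rows: {row_time:.3f}s  columns: {col_time:.3f}s")
# The two loops are logically identical, but the column pass is typically
# several times slower: the flat-address-space abstraction leaking.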

One reason the law of leaky abstractions is problematic is that it means that abstractions do not really simplify our lives as much as they were meant to. When I’m training someone to be a C++ programmer, it would be nice if I never had to teach them about char*’s and pointer arithmetic. It would be nice if I could go straight to STL strings. But one day they’ll write the code “foo” + “bar”, and truly bizarre things will happen, and then I’ll have to stop and teach them all about char*’s anyway.

...

The law of leaky abstractions means that whenever somebody comes up with a wizzy new code-generation tool that is supposed to make us all ever-so-efficient, you hear a lot of people saying “learn how to do it manually first, then use the wizzy tool to save time.” Code generation tools which pretend to abstract out something, like all abstractions, leak, and the only way to deal with the leaks competently is to learn about how the abstractions work and what they are abstracting. So the abstractions save us time working, but they don’t save us time learning.

https://www.benkuhn.net/hatch
People think a lot about abstractions and how to design them well. Here’s one feature I’ve recently been noticing about well-designed abstractions: they should have simple, flexible and well-integrated escape hatches.
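A toy illustration of the escape-hatch idea in Python, using the stdlib sqlite3 module (the NoteStore wrapper is hypothetical; the point is the deliberately exposed lower layer):

# Toy sketch: a small abstraction with a deliberate escape hatch.
import sqlite3

class NoteStore:
    def __init__(self, path=":memory:"):
        # Escape hatch: the raw connection is public, not hidden, so
        # users can drop below the abstraction when it leaks.
        self.conn = sqlite3.connect(path)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY, body TEXT)"
        )

    def add(self, body):
        self.conn.execute("INSERT INTO notes (body) VALUES (?)", (body,))

    def all(self):
        return [row[0] for row in self.conn.execute("SELECT body FROM notes")]

store = NoteStore()
store.add("hello")
# Well-integrated escape hatch: arbitrary SQL against the same connection.
print(store.conn.execute("SELECT count(*) FROM notes").fetchone()[0])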
techtariat  org:com  working-stiff  essay  programming  cs  software  abstraction  worrydream  thinking  intricacy  degrees-of-freedom  networking  examples  traces  no-go  volo-avolo  tradeoffs  c(pp)  pls  strings  dbs  transportation  driving  analogy  aphorism  learning  paradox  systems  elegance  nitty-gritty  concrete  cracker-prog  metal-to-virtual  protocol-metadata  design  system-design  multi  ratty  core-rats  integration-extension  composition-decomposition  flexibility  parsimony  interface-compatibility 
july 2019 by nhaliday
Computer latency: 1977-2017
If we look at overall results, the fastest machines are ancient. Newer machines are all over the place. Fancy gaming rigs with unusually high refresh-rate displays are almost competitive with machines from the late 70s and early 80s, but “normal” modern computers can’t compete with thirty to forty year old machines.

...

If we exclude the game boy color, which is a different class of device than the rest, all of the quickest devices are Apple phones or tablets. The next quickest device is the blackberry q10. Although we don’t have enough data to really tell why the blackberry q10 is unusually quick for a non-Apple device, one plausible guess is that it’s helped by having actual buttons, which are easier to implement with low latency than a touchscreen. The other two devices with actual buttons are the gameboy color and the kindle 4.

After the iphones and non-kindle button devices, we have a variety of Android devices of various ages. At the bottom, we have the ancient palm pilot 1000 followed by the kindles. The palm is hamstrung by a touchscreen and display created in an era with much slower touchscreen technology and the kindles use e-ink displays, which are much slower than the displays used on modern phones, so it’s not surprising to see those devices at the bottom.

...

Almost every computer and mobile device that people buy today is slower than common models of computers from the 70s and 80s. Low-latency gaming desktops and the ipad pro can get into the same range as quick machines from thirty to forty years ago, but most off-the-shelf devices aren’t even close.

If we had to pick one root cause of latency bloat, we might say that it’s because of “complexity”. Of course, we all know that complexity is bad. If you’ve been to a non-academic non-enterprise tech conference in the past decade, there’s a good chance that there was at least one talk on how complexity is the root of all evil and we should aspire to reduce complexity.

Unfortunately, it's a lot harder to remove complexity than to give a talk saying that we should remove complexity. A lot of the complexity buys us something, either directly or indirectly. When we looked at the input of a fancy modern keyboard vs. the apple 2 keyboard, we saw that using a relatively powerful and expensive general purpose processor to handle keyboard inputs can be slower than dedicated logic for the keyboard, which would both be simpler and cheaper. However, using the processor gives people the ability to easily customize the keyboard, and also pushes the problem of “programming” the keyboard from hardware into software, which reduces the cost of making the keyboard. The more expensive chip increases the manufacturing cost, but considering how much of the cost of these small-batch artisanal keyboards is the design cost, it seems like a net win to trade manufacturing cost for ease of programming.

...

If you want a reference to compare the kindle against, a moderately quick page turn in a physical book appears to be about 200 ms.

https://twitter.com/gravislizard/status/927593460642615296
almost everything on computers is perceptually slower than it was in 1983
https://archive.is/G3D5K
https://archive.is/vhDTL
https://archive.is/a3321
https://archive.is/imG7S
techtariat  dan-luu  performance  time  hardware  consumerism  objektbuch  data  history  reflection  critique  software  roots  tainter  engineering  nitty-gritty  ui  ux  hci  ios  mobile  apple  amazon  sequential  trends  increase-decrease  measure  analysis  measurement  os  systems  IEEE  intricacy  desktop  benchmarks  rant  carmack  system-design  degrees-of-freedom  keyboard  terminal  editors  links  input-output  networking  world  s:**  multi  twitter  social  discussion  tech  programming  web  internet  speed  backup  worrydream  interface  metal-to-virtual  latency-throughput  workflow  form-design  interface-compatibility 
july 2019 by nhaliday
Frama-C
Frama-C is organized with a plug-in architecture (comparable to that of the Gimp or Eclipse). A common kernel centralizes information and conducts the analysis. Plug-ins interact with each other through interfaces defined by the kernel. This makes for robustness in the development of Frama-C while allowing a wide functionality spectrum.

...

Three heavyweight plug-ins that are used by the other plug-ins:

- Eva (Evolved Value analysis)
This plug-in computes variation domains for variables. It is quite automatic, although the user may guide the analysis in places. It handles a wide spectrum of C constructs. This plug-in uses abstract interpretation techniques.
- Jessie and Wp, two deductive verification plug-ins
These plug-ins are based on weakest-precondition computation techniques. They make it possible to prove that C functions satisfy their specifications as expressed in ACSL. These proofs are modular: the specifications of the called functions are used to establish the proof without looking at their code.

For browsing unfamiliar code:
- Impact analysis
This plug-in highlights the locations in the source code that are impacted by a modification.
- Scope & Data-flow browsing
This plug-in allows the user to navigate the dataflow of the program, from definition to use or from use to definition.
- Variable occurrence browsing
Also provided as a simple example for new plug-in development, this plug-in allows the user to reach the statements where a given variable is used.
- Metrics calculation
This plug-in allows the user to compute various metrics from the source code.

For code transformation:
- Semantic constant folding
This plug-in makes use of the results of the evolved value analysis plug-in to replace, in the source code, the constant expressions by their values. Because it relies on EVA, it is able to do more of these simplifications than a syntactic analysis would.
- Slicing
This plug-in slices the code according to a user-provided criterion: it creates a copy of the program, but keeps only those parts which are necessary with respect to the given criterion.
- Spare code: remove "spare code", code that does not contribute to the final results of the program.
- E-ACSL: translate annotations into C code for runtime assertion checking.
For verifying functional specifications:

- Aoraï: verify specifications expressed as LTL (Linear Temporal Logic) formulas
Other functionalities documented together with the EVA plug-in can be considered as verifying low-level functional specifications (inputs, outputs, dependencies,…)
For test-case generation:

- PathCrawler automatically finds test-case inputs to ensure coverage of a C function. It can be used for structural unit testing, as a complement to static analysis or to study the feasible execution paths of the function.
For concurrent programs:

- Mthread
This plug-in automatically analyzes concurrent C programs, using the EVA plug-in, taking into account all possible thread interactions. At the end of its execution, the concurrent behavior of each thread is over-approximated, resulting in precise information about shared variables, which mutex protects a part of the code, etc.
Front-end for other languages

- Frama-Clang
This plug-in provides a C++ front-end to Frama-C, based on the clang compiler. It transforms C++ code into a Frama-C AST, which can then be analyzed by the plug-ins above. Note however that it is very experimental and only supports a subset of C++11
tools  devtools  formal-methods  programming  software  c(pp)  systems  memory-management  ocaml-sml  debugging  checking  rigor  oss  code-dive  graphs  state  metrics  llvm  gallic  cool  worrydream  impact  flux-stasis  correctness  computer-memory  structure  static-dynamic 
may 2019 by nhaliday
One week of bugs
If I had to guess, I'd say I probably work around hundreds of bugs in an average week, and thousands in a bad week. It's not unusual for me to run into a hundred new bugs in a single week. But I often get skepticism when I mention that I run into multiple new (to me) bugs per day, and that this is inevitable if we don't change how we write tests. Well, here's a log of one week of bugs, limited to bugs that were new to me that week. After a brief description of the bugs, I'll talk about what we can do to improve the situation. The obvious answer is to spend more effort on testing, but everyone already knows we should do that and no one does it. That doesn't mean it's hopeless, though.

...

Here's where I'm supposed to write an appeal to take testing more seriously and put real effort into it. But we all know that's not going to work. It would take 90k LOC of tests to get Julia to be as well tested as a poorly tested prototype (falsely assuming linear complexity in size). That's two person-years of work, not even including time to debug and fix bugs (which probably brings it closer to four or five years). Who's going to do that? No one. Writing tests is like writing documentation. Everyone already knows you should do it. Telling people they should do it adds zero information.

Given that people aren't going to put any effort into testing, what's the best way to do it?

Property-based testing. Generative testing. Random testing. Concolic Testing (which was done long before the term was coined). Static analysis. Fuzzing. Statistical bug finding. There are lots of options. Some of them are actually the same thing because the terminology we use is inconsistent and buggy. I'm going to arbitrarily pick one to talk about, but they're all worth looking into.

...

There are a lot of great resources out there, but if you're just getting started, I found this description of types of fuzzers to be one of the most helpful (and simplest) things I've read.

John Regehr has a udacity course on software testing. I haven't worked through it yet (Pablo Torres just pointed to it), but given the quality of Dr. Regehr's writing, I expect the course to be good.

For more on my perspective on testing, there's this.

Everything's broken and nobody's upset: https://www.hanselman.com/blog/EverythingsBrokenAndNobodysUpset.aspx
https://news.ycombinator.com/item?id=4531549

https://hypothesis.works/articles/the-purpose-of-hypothesis/
From the perspective of a user, the purpose of Hypothesis is to make it easier for you to write better tests.

From my perspective as the primary author, that is of course also a purpose of Hypothesis. I write a lot of code, it needs testing, and the idea of trying to do that without Hypothesis has become nearly unthinkable.

But, on a large scale, the true purpose of Hypothesis is to drag the world kicking and screaming into a new and terrifying age of high quality software.

Software is everywhere. We have built a civilization on it, and it’s only getting more prevalent as more services move online and embedded and “internet of things” devices become cheaper and more common.

Software is also terrible. It’s buggy, it’s insecure, and it’s rarely well thought out.

This combination is clearly a recipe for disaster.

The state of software testing is even worse. It’s uncontroversial at this point that you should be testing your code, but it’s a rare codebase whose authors could honestly claim that they feel its testing is sufficient.

Much of the problem here is that it’s too hard to write good tests. Tests take up a vast quantity of development time, but they mostly just laboriously encode exactly the same assumptions and fallacies that the authors had when they wrote the code, so they miss exactly the same bugs that you missed when you wrote the code.
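For flavor, a minimal Hypothesis property-based test (the buggy function is a contrived example, not from the article; run with pytest):

# Hypothesis generates inputs and shrinks failures to minimal examples.
from hypothesis import given, strategies as st

def my_sort(xs):
    return sorted(set(xs))   # bug: silently drops duplicates

@given(st.lists(st.integers()))
def test_sort_preserves_length(xs):
    assert len(my_sort(xs)) == len(xs)

# Hypothesis finds and reports a minimal counterexample like [0, 0],
# an assumption the author would likely never have encoded by hand.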

Preventing the Collapse of Civilization [video]: https://news.ycombinator.com/item?id=19945452
- Jonathan Blow

NB: DevGAMM is a game industry conference

- loss of technological knowledge (Antikythera mechanism, aqueducts, etc.)
- hardware driving most gains, not software
- software's actually less robust, often poorly designed and overengineered these days
- *list of bugs he's encountered recently*:
https://youtu.be/pW-SOdj4Kkk?t=1387
- knowledge becomes more trivia than general, deep knowledge
- does at least acknowledge value of DRY, reusing code, abstraction saving dev time
techtariat  dan-luu  tech  software  error  list  debugging  linux  github  robust  checking  oss  troll  lol  aphorism  webapp  email  google  facebook  games  julia  pls  compilers  communication  mooc  browser  rust  programming  engineering  random  jargon  formal-methods  expert-experience  prof  c(pp)  course  correctness  hn  commentary  video  presentation  carmack  pragmatic  contrarianism  pessimism  sv  unix  rhetoric  critique  worrydream  hardware  performance  trends  multiplicative  roots  impact  comparison  history  iron-age  the-classics  mediterranean  conquest-empire  gibbon  technology  the-world-is-just-atoms  flux-stasis  increase-decrease  graphics  hmm  idk  systems  os  abstraction  intricacy  worse-is-better/the-right-thing  build-packaging  microsoft  osx  apple  reflection  assembly  things  knowledge  detail-architecture  thick-thin  trivia  info-dynamics  caching  frameworks  generalization  systematic-ad-hoc  universalism-particularism  analytical-holistic  structure  tainter  libraries  tradeoffs  prepping  threat-modeling  network-structure  writing  risk  local-global
may 2019 by nhaliday
Why is reverse debugging rarely used? - Software Engineering Stack Exchange
(time travel)

For one, running in debug mode with recording on is very expensive compared to even normal debug mode; it also consumes a lot more memory.

It is easier to decrease the granularity from line level to function call level. For example, the standard debugger in eclipse allows you to "drop to frame," which is essentially a jump back to the start of the function with a reset of all the parameters (nothing done on the heap is reverted, and finally blocks are not executed, so it is not a true reverse debugger; be careful about that).

Note that this has been available for several years now and works hand in hand with hot-code replacement.
--
As mentioned already, performance is key e.g. with gdb's reversible debugging, running something like gzip sees a slowdown of 50,000x compared to running natively. There are commercial alternatives however: I work for Undo undo.io, and our UndoDB product does the same but with a slowdown of less than 2x. There are other commercial reversible debuggers available too.

https://undo.io
Based on GDB, UndoDB supports source-level debugging for applications written in any language supported by GDB, including C/C++, Rust and Ada.
q-n-a  stackex  programming  engineering  impetus  debugging  time  increase-decrease  worrydream  hci  devtools  direction  roots  money-for-time  review  comparison  critique  tools  software  multi  systems  c(pp)  rust  state 
may 2019 by nhaliday
maintenance - Why do dynamic languages make it more difficult to maintain large codebases? - Software Engineering Stack Exchange
Now here is the key point I have been building up to: there is a strong correlation between a language being dynamically typed and a language also lacking all the other facilities that make lowering the cost of maintaining a large codebase easier, and that is the key reason why it is more difficult to maintain a large codebase in a dynamic language. And similarly there is a correlation between a language being statically typed and having facilities that make programming in the larger easier.
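A tiny Python rendering of that point: gradual type annotations plus a checker recover some of those facilities (sketch; mypy is the assumed checker):

# The kind of cross-module mistake a static checker catches up front.
def monthly_payment(principal: float, annual_rate: float, months: int) -> float:
    r = annual_rate / 12
    return principal * r / (1 - (1 + r) ** -months)

# Arguments transposed: this runs without error and silently computes
# garbage, but mypy reports something like:
#   Argument 3 has incompatible type "float"; expected "int"
payment = monthly_payment(300_000, 360, 0.04)
print(payment)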
programming  worrydream  plt  hmm  comparison  pls  carmack  techtariat  types  engineering  productivity  pro-rata  input-output  correlation  best-practices  composition-decomposition  error  causation  confounding  devtools  jvm  scala  open-closed  cost-benefit  static-dynamic  design  system-design 
may 2019 by nhaliday
Cultural group selection plays an essential role in explaining human cooperation: A sketch of the evidence
Pursuing Darwin’s curious parallel: Prospects for a science of cultural evolution: http://www.pnas.org/content/early/2017/07/18/1620741114.full

Axelrod model: http://ncase.me/trust/

Peer punishment promotes enforcement of bad social norms: https://www.nature.com/articles/s41467-017-00731-0
Social norms are an important element in explaining how humans achieve very high levels of cooperative activity. It is widely observed that, when norms can be enforced by peer punishment, groups are able to resolve social dilemmas in prosocial, cooperative ways. Here we show that punishment can also encourage participation in destructive behaviours that are harmful to group welfare, and that this phenomenon is mediated by a social norm. In a variation of a public goods game, in which the return to investment is negative for both group and individual, we find that the opportunity to punish led to higher levels of contribution, thereby harming collective payoffs. A second experiment confirmed that, independently of whether punishment is available, a majority of subjects regard the efficient behaviour of non-contribution as socially inappropriate. The results show that simply providing a punishment opportunity does not guarantee that punishment will be used for socially beneficial ends, because the social norms that influence punishment behaviour may themselves be destructive.

https://twitter.com/Peter_Turchin/status/911886386051108864
Peer punishment can stabilize anything, both good and bad norms. This is why you need group selection to select good social norms.
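A back-of-envelope sketch of how punishment can stabilize a value-destroying norm (all payoff parameters invented for illustration; this is not the paper's actual design):

# Public goods game where the pot *destroys* value (multiplier < 1).
ENDOWMENT, CONTRIBUTION, MULTIPLIER, FINE, N = 20, 10, 0.5, 10, 4

def payoff(contributes, others_contributing, punished_if_defect):
    pot = (int(contributes) + others_contributing) * CONTRIBUTION * MULTIPLIER
    pay = ENDOWMENT - (CONTRIBUTION if contributes else 0) + pot / N
    if not contributes and punished_if_defect:
        pay -= FINE
    return pay

# Without punishment, defecting dominates (15.0 vs 23.75):
print(payoff(True, 3, False), payoff(False, 3, False))
# With peer punishment, contributing pays more (15.0 vs 13.75), even
# though every contribution destroys value for the whole group.
print(payoff(True, 3, True), payoff(False, 3, True))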
pdf  study  article  survey  sociology  anthropology  sapiens  cultural-dynamics  🌞  cooperate-defect  GT-101  EGT  deep-materialism  group-selection  coordination  religion  theos  social-norms  morality  coalitions  s:**  turchin  decision-making  microfoundations  multi  better-explained  techtariat  visualization  dynamic  worrydream  simulation  operational  let-me-see  trust  garett-jones  polarization  media  internet  zero-positive-sum  axelrod  eden  honor  org:nat  unintended-consequences  public-goodish  broad-econ  twitter  social  commentary  summary  slippery-slope  selection  competition  organizing  war  henrich  evolution  darwinian  tribalism  hari-seldon  cybernetics  reinforcement  ecology  sociality 
june 2017 by nhaliday
Reading | West Hunter
Reading speed and comprehension interest me, but I don’t have as much information as I would like.  I would like to see the distribution of reading speeds ( in the general population, and also in college graduates).  I have looked a bit at discussions of this, and there’s something wrong.  Or maybe a lot wrong.  Researchers apparently say that nobody reads 900 words a minute with full comprehension, but I’ve seen it done.  I would also like to know if anyone has statistically validated methods that  increase reading speed.

On related topics, I wonder how many serious readers  there are, here and also in other countries.  Are they as common in Japan or China, with their very different scripts?   Are reading speeds higher or lower there?

How many people have  their houses really, truly stuffed with books?  Here and elsewhere?  Last time I checked we had about 5000 books around the house: I figure that’s serious, verging on the pathological.

To what extent do people remember what they read?  Judging from the general results of  adult knowledge studies, not very much of what they took in school, but maybe voluntary reading is different.

https://westhunt.wordpress.com/2012/06/05/reading/#comment-3187
The researchers claim that the range of high-comprehension reading speed doesn’t go up anywhere near 900 wpm. But my daughter routinely reads at that speed. In high school, I took a reading speed test and scored a bit over 1000 wpm, with perfect comprehension.

I have suggested that the key to high reading speed is the experience of trying to finish an entire science fiction paperback in a drugstore before the proprietor tells you to buy the damn thing or get out. Helps if you can hide behind the bookrack.

https://westhunt.wordpress.com/2019/03/31/early-reading/
There are a few small children, mostly girls, that learn to read very early. You read stories to them and before you know it they’re reading by themselves. By very early, I mean age 3 or 4.

Does this happen in China ?

hmm:
Beijingers' average daily reading time exceeds an hour: report: http://www.chinadaily.com.cn/a/201712/07/WS5a293e1aa310fcb6fafd44c0.html

Free Speed Reading Test by AceReader: http://www.freereadingtest.com/
time+comprehension

http://www.readingsoft.com/
claims: 1000 wpm with 85% comprehension at top 1%, 200 wpm at 60% for average

https://www.wsj.com/articles/speed-reading-returns-1395874723
http://projects.wsj.com/speedread/

https://news.ycombinator.com/item?id=929753
Take a look at "Reading Rate: A Review of Research and Theory" by Ronald P. Carver
http://www.amazon.com/Reading-Rate-Review-Research-Theory/dp...
The conclusion is, basically, that speed reading courses don't work.
You can teach people to skim at a faster rate than they'd read with maximum comprehension and retention. And you can teach people study skills, such as how to summarize salient points, and take notes.
But all these skills are not at all the same as what speed reading usually promises, which is to drastically increase the rate at which you read with full comprehension and retention. According to Carver's book, it can't be done, at least not drastically past about the rate you'd naturally read at the college level.
west-hunter  scitariat  discussion  speculation  ideas  rant  critique  learning  studying  westminster  error  realness  language  japan  china  asia  sinosphere  retention  foreign-lang  info-foraging  scale  speed  innovation  explanans  creative  multi  data  urban-rural  time  time-use  europe  the-great-west-whale  occident  orient  people  track-record  trivia  books  number  knowledge  poll  descriptive  distribution  tools  quiz  neurons  anglo  hn  poast  news  org:rec  metrics  density  writing  meta:reading  thinking  worrydream 
june 2017 by nhaliday
In the first place | West Hunter
We hear a lot about innovative educational approaches, and since these silly people have been at this for a long time now, we hear just as often about the innovative approaches that some idiot started up a few years ago and are now crashing in flames.  We’re in steady-state.

I’m wondering if it isn’t time to try something archaic.  In particular, mnemonic techniques, such as the method of loci.  As far as I know, nobody has actually tried integrating the more sophisticated mnemonic techniques into a curriculum.  Sure, we all know useful acronyms, like the one for resistor color codes, but I’ve not heard of anyone teaching kids how to build a memory palace.

https://westhunt.wordpress.com/2013/12/28/in-the-first-place/#comment-20106
I have never used formal mnemonic techniques, but life has recently tested me on how well I remember material from my college days. Turns out that I can still do the sorts of math and physics problems that I could then, in subjects like classical mechanics, real analysis, combinatorics, complex variables, quantum mechanics, statistical mechanics, etc. I usually have to crack the book though. Some of that material I have used from time to time, or even fairly often (especially linear algebra), most not. I’m sure I’m slower than I was then, at least on the stuff I haven’t used.

https://westhunt.wordpress.com/2013/12/28/in-the-first-place/#comment-20109
Long-term memory capacity must be finite, but I know of no evidence that anyone has ever run out of it. As for the idea that you don’t really need a lot of facts in your head to come up with new ideas: pretty much the opposite of the truth, in a lot of fields.

https://en.wikipedia.org/wiki/Method_of_loci

Mental Imagery > Ancient Imagery Mnemonics: https://plato.stanford.edu/entries/mental-imagery/ancient-imagery-mnemonics.html
In the Middle Ages and the Renaissance, very elaborate versions of the method evolved, using specially learned imaginary spaces (Memory Theaters or Palaces), and complex systems of predetermined symbolic images, often imbued with occult or spiritual significances. However, modern experimental research has shown that even a simple and easily learned form of the method of loci can be highly effective (Ross & Lawrence, 1968; Maguire et al., 2003), as are several other imagery based mnemonic techniques (see section 4.2 of the main entry).

The advantages of organizing knowledge in terms of country and place: http://marginalrevolution.com/marginalrevolution/2018/02/advantages-organizing-knowledge-terms-country-place.html

https://www.quora.com/What-are-the-best-books-on-Memory-Palace

fascinating aside:
US vs Nazi army, Vietnam, the draft: https://westhunt.wordpress.com/2013/12/28/in-the-first-place/#comment-20136
You think I know more about this than a retired major general and former head of the War College? I do, of course, but that fact itself should worry you.

He’s not all wrong, but a lot of what he says is wrong. For example, the German Army was a conscript army, so conscription itself can’t explain why the Krauts were about 25% more effective than the average American unit. Nor is it true that the draft in WWII was corrupt.

The US had a different mix of armed forces – more air forces and a much larger Navy than Germany. Those services have higher technical requirements and sucked up a lot of the smarter guys. That was just a product of the strategic situation.

The Germans had better officers, partly because of better training and doctrine, partly the fruit of a different attitude towards the army. The US, much of the time, thought of the Army as a career for losers, but Germans did not.

The Germans had an enormous amount of relevant combat experience, much more than anyone in the US. Spend a year or two on the Eastern Front and you learn.

And the Germans had better infantry weapons.

The US tooth-to-tail ratio was, I think, worse than that of the Germans: some of that was a natural consequence of being an expeditionary force, but some was just a mistake. You want supply sergeants to be literate, but it is probably true that we put too many of the smarter guys into non-combat positions. That changed some when we ran into manpower shortages in late 1944 and combed out the support positions.

This guy is back-projecting Vietnam problems into WWII – he’s mostly wrong.

more (more of a focus on US Marines than Army): https://www.quora.com/Were-US-Marines-tougher-than-elite-German-troops-in-WW2/answer/Joseph-Scott-13
west-hunter  scitariat  speculation  ideas  proposal  education  learning  retention  neurons  the-classics  nitty-gritty  visuo  spatial  psych-architecture  multi  poast  history  mostly-modern  world-war  war  military  strategy  usa  europe  germanic  cold-war  visual-understanding  cartoons  narrative  wordlessness  comparison  asia  developing-world  knowledge  metabuch  econotariat  marginal-rev  discussion  world  thinking  government  local-global  humility  wire-guided  policy  iron-age  mediterranean  wiki  reference  checklists  exocortex  early-modern  org:edu  philosophy  enlightenment-renaissance-restoration-reformation  qra  q-n-a  books  recommendations  list  links  ability-competence  leadership  elite  higher-ed  math  physics  linear-algebra  cost-benefit  prioritizing  defense  martial  war-nerd  worrydream 
may 2017 by nhaliday
soft question - Thinking and Explaining - MathOverflow
- good question from Bill Thurston
- great answers by Terry Tao, fedja, Minhyong Kim, gowers, etc.

Terry Tao:
- symmetry as blurring/vibrating/wobbling, scale invariance
- anthropomorphization, adversarial perspective for estimates/inequalities/quantifiers, spending/economy

fedja walks through his thought-process in another answer

Minhyong Kim: anthropology of mathematical philosophizing

Per Vognsen: normality as isotropy
comment: conjugate subgroup gHg^-1 ~ "H but somewhere else in G"

gowers: hidden things in basic mathematics/arithmetic
comment by Ryan Budney: x sin(x) via x -> (x, sin(x)), (x, y) -> xy
I kinda get what he's talking about but needed to use Mathematica to get the initial visualization down.
To remind myself later:
- xy can be easily visualized by juxtaposing the two parabolae x^2 and -x^2 diagonally
- x sin(x) can be visualized along that surface by moving your finger along the line (x, 0) but adding some oscillations in y direction according to sin(x)
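A minimal matplotlib sketch of that picture (the Mathematica step redone in Python; purely illustrative):

# View x*sin(x) as the path traced on the saddle surface z = x*y
# while walking along the curve y = sin(x).
import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(-4 * np.pi, 4 * np.pi, 400)
y = np.linspace(-1.5, 1.5, 100)
X, Y = np.meshgrid(x, y)

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.plot_surface(X, Y, X * Y, alpha=0.3)            # the surface z = xy
ax.plot(x, np.sin(x), x * np.sin(x), color="red")  # the lifted curve
ax.set_xlabel("x")
ax.set_ylabel("y")
ax.set_zlabel("xy")
plt.show()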
q-n-a  soft-question  big-list  intuition  communication  teaching  math  thinking  writing  thurston  lens  overflow  synthesis  hi-order-bits  👳  insight  meta:math  clarity  nibble  giants  cartoons  gowers  mathtariat  better-explained  stories  the-trenches  problem-solving  homogeneity  symmetry  fedja  examples  philosophy  big-picture  vague  isotropy  reflection  spatial  ground-up  visual-understanding  polynomials  dimensionality  math.GR  worrydream  scholar  🎓  neurons  metabuch  yoga  retrofit  mental-math  metameta  wisdom  wordlessness  oscillation  operational  adversarial  quantifiers-sums  exposition  explanation  tricki  concrete  s:***  manifolds  invariance  dynamical  info-dynamics  cool  direction  elegance  heavyweights  analysis  guessing  grokkability-clarity  technical-writing 
january 2017 by nhaliday
Dgsh – Directed graph shell | Hacker News
I've worked with and looked at a lot of data processing helpers: tools that try to help you build data pipelines, for the sake of performance, reproducibility, or simply code uniformity.
What I've found so far: most tools that invent a new language, or try to cram complex processes into less suited syntactical environments, are not loved much.

...

I'll give dgsh a try. The tool-reuse approach and the UNIX spirit seem nice. But my initial impression of the "C code metrics" example from the site is mixed: it reminds me of awk, about which one of the authors said that it's a beautiful language, but that if your programs get longer than a hundred lines, you might want to switch to something else.

Two libraries which have a great grip on the plumbing aspect of data processing systems are airflow and luigi. They are python libraries, and with them you have a concise syntax and basically all python libraries, plus non-python tools with a command-line interface, at your fingertips.

I am curious, what kind of process orchestration tools people use and can recommend?

--

Exactly our experience too, from complex machine learning workflows in various aspects of drug discovery.
We basically did not find any of the popular DSL-based bioinformatics pipeline tools (snakemake, bpipe etc) to fit the bill. Nextflow came close, but in fact allows quite a lot of custom code too.

What worked for us was to use Spotify's Luigi, which is a python library rather than DSL.

The only thing was that we had to develop a flow-based inspired API on top of Luigi's more functional programming based one, in order to make defining dependencies fluent and easy enough to specify for our complex workflows.

Our flow-based inspired Luigi API (SciLuigi) for complex workflows, is available at:

https://github.com/pharmbio/sciluigi
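For reference, the shape of a minimal Luigi pipeline (a generic sketch of the library's API; not SciLuigi and not the actual drug-discovery workflow):

# Dependencies are declared in plain Python via requires(), no DSL.
import luigi

class Extract(luigi.Task):
    def output(self):
        return luigi.LocalTarget("raw.txt")

    def run(self):
        with self.output().open("w") as f:
            f.write("hello world\n")

class CountWords(luigi.Task):
    def requires(self):
        return Extract()

    def output(self):
        return luigi.LocalTarget("count.txt")

    def run(self):
        with self.input().open() as f:
            n = len(f.read().split())
        with self.output().open("w") as f:
            f.write(f"{n}\n")

if __name__ == "__main__":
    luigi.build([CountWords()], local_scheduler=True)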

--

We have measured many of the examples against the use of temporary files and the web report one against (single-threaded) implementations in Perl and Java. In almost all cases dgsh takes less wall clock time, but often consumes more CPU resources.
commentary  project  programming  terminal  worrydream  pls  plt  unix  hn  graphs  tools  devtools  let-me-see  composition-decomposition  yak-shaving  workflow  exocortex  hmm  cool  software  desktop  sci-comp  stock-flow  performance  comparison  links  libraries  python 
january 2017 by nhaliday
gt.geometric topology - Intuitive crutches for higher dimensional thinking - MathOverflow
Terry Tao:
I can't help you much with high-dimensional topology - it's not my field, and I've not picked up the various tricks topologists use to get a grip on the subject - but when dealing with the geometry of high-dimensional (or infinite-dimensional) vector spaces such as R^n, there are plenty of ways to conceptualise these spaces that do not require visualising more than three dimensions directly.

For instance, one can view a high-dimensional vector space as a state space for a system with many degrees of freedom. A megapixel image, for instance, is a point in a million-dimensional vector space; by varying the image, one can explore the space, and various subsets of this space correspond to various classes of images.

One can similarly interpret sound waves, a box of gases, an ecosystem, a voting population, a stream of digital data, trials of random variables, the results of a statistical survey, a probabilistic strategy in a two-player game, and many other concrete objects as states in a high-dimensional vector space, and various basic concepts such as convexity, distance, linearity, change of variables, orthogonality, or inner product can have very natural meanings in some of these models (though not in all).

It can take a bit of both theory and practice to merge one's intuition for these things with one's spatial intuition for vectors and vector spaces, but it can be done eventually (much as after one has enough exposure to measure theory, one can start merging one's intuition regarding cardinality, mass, length, volume, probability, cost, charge, and any number of other "real-life" measures).

For instance, the fact that most of the mass of a unit ball in high dimensions lurks near the boundary of the ball can be interpreted as a manifestation of the law of large numbers, using the interpretation of a high-dimensional vector space as the state space for a large number of trials of a random variable.

More generally, many facts about low-dimensional projections or slices of high-dimensional objects can be viewed from a probabilistic, statistical, or signal processing perspective.

Scott Aaronson:
Here are some of the crutches I've relied on. (Admittedly, my crutches are probably much more useful for theoretical computer science, combinatorics, and probability than they are for geometry, topology, or physics. On a related note, I personally have a much easier time thinking about R^n than about, say, R^4 or R^5!)

1. If you're trying to visualize some 4D phenomenon P, first think of a related 3D phenomenon P', and then imagine yourself as a 2D being who's trying to visualize P'. The advantage is that, unlike with the 4D vs. 3D case, you yourself can easily switch between the 3D and 2D perspectives, and can therefore get a sense of exactly what information is being lost when you drop a dimension. (You could call this the "Flatland trick," after the most famous literary work to rely on it.)
2. As someone else mentioned, discretize! Instead of thinking about R^n, think about the Boolean hypercube {0,1}^n, which is finite and usually easier to get intuition about. (When working on problems, I often find myself drawing {0,1}^4 on a sheet of paper by drawing two copies of {0,1}^3 and then connecting the corresponding vertices.)
3. Instead of thinking about a subset S⊆R^n, think about its characteristic function f:R^n→{0,1}. I don't know why that trivial perspective switch makes such a big difference, but it does ... maybe because it shifts your attention to the process of computing f, and makes you forget about the hopeless task of visualizing S!
4. One of the central facts about R^n is that, while it has "room" for only n orthogonal vectors, it has room for exp⁡(n) almost-orthogonal vectors. Internalize that one fact, and so many other properties of R^n (for example, that the n-sphere resembles a "ball with spikes sticking out," as someone mentioned before) will suddenly seem non-mysterious. In turn, one way to internalize the fact that R^n has so many almost-orthogonal vectors is to internalize Shannon's theorem that there exist good error-correcting codes.
5. To get a feel for some high-dimensional object, ask questions about the behavior of a process that takes place on that object. For example: if I drop a ball here, which local minimum will it settle into? How long does this random walk on {0,1}^n take to mix?

Gil Kalai:
This is a slightly different point, but Vitali Milman, who works in high-dimensional convexity, likes to draw high-dimensional convex bodies in a non-convex way. This is to convey the point that if you take the convex hull of a few points on the unit sphere of R^n, then for large n very little of the measure of the convex body is anywhere near the corners, so in a certain sense the body is a bit like a small sphere with long thin "spikes".
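Both the boundary-concentration fact and Aaronson's point 4 are easy to check numerically; a small numpy sketch:

import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# (1) Random unit vectors in R^n are almost orthogonal: inner products
# concentrate around 0 at scale ~ 1/sqrt(n).
v = rng.standard_normal((100, n))
v /= np.linalg.norm(v, axis=1, keepdims=True)
dots = v @ v.T
off_diag = dots[~np.eye(100, dtype=bool)]
print("typical |<u,v>|:", np.abs(off_diag).mean(), " 1/sqrt(n):", n ** -0.5)

# (2) The mass of the unit ball lurks near the boundary: the fraction of
# volume within radius 0.99 is 0.99^n, astronomically small for large n.
print("volume fraction inside r=0.99:", 0.99 ** n)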
q-n-a  intuition  math  visual-understanding  list  discussion  thurston  tidbits  aaronson  tcs  geometry  problem-solving  yoga  👳  big-list  metabuch  tcstariat  gowers  mathtariat  acm  overflow  soft-question  levers  dimensionality  hi-order-bits  insight  synthesis  thinking  models  cartoons  coding-theory  information-theory  probability  concentration-of-measure  magnitude  linear-algebra  boolean-analysis  analogy  arrows  lifts-projections  measure  markov  sampling  shannon  conceptual-vocab  nibble  degrees-of-freedom  worrydream  neurons  retrofit  oscillation  paradox  novelty  tricki  concrete  high-dimension  s:***  manifolds  direction  curvature  convexity-curvature  elegance  guessing 
december 2016 by nhaliday
the-perfect-bug-report
Reproducing bugs is awful. You get an issue like “Problem with Sidebar” that vaguely describes some odd behavior. Now you must somehow reproduce it exactly. Was it the specific timing of events? Was it bad data from the server? Was it specific to a certain user? Was it a recently updated dependency? As you slog through all these possibilities, the most annoying thing is that the person who opened the bug report already had all this information! In an ideal world, you could just replay their exact session.

Elm 0.18 lets you do exactly that! In debug mode, Elm lets you import and export the exact sequence of events from a program. You get all the information necessary to reproduce the session exactly, from mouse clicks to HTTP requests.
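The idea generalizes beyond Elm: if all inputs flow through one event stream and the update function is pure, serializing that stream is a perfect bug report. A language-agnostic sketch in Python (illustrative; not Elm's implementation):

import json

def update(state, event):
    # Pure: the next state depends only on (state, event), so replaying
    # the same event log reproduces the session exactly.
    if event["type"] == "click":
        return {**state, "clicks": state["clicks"] + 1}
    return state

def run(events):
    state = {"clicks": 0}
    for e in events:
        state = update(state, e)
    return state

# Recording a session is just dumping the events...
log = json.dumps([{"type": "click"}, {"type": "click"}])
# ...and the perfect bug report is replaying them:
print(run(json.loads(log)))  # {'clicks': 2}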
worrydream  functional  pls  announcement  debugging  frontend  web  javascript  time  traces  sequential  roots  explanans  replication  duplication  live-coding  state  direction 
november 2016 by nhaliday