nhaliday + plt   72

The Future of Mathematics? [video] | Hacker News
https://news.ycombinator.com/item?id=20909404
Kevin Buzzard (the Lean guy)

- general reflection on proof assistants/theorem provers
- Thomas Hales's Formal Abstracts project, etc.
- thinks that, of the available theorem provers, Lean is "[the only one currently available that may be capable of formalizing all of mathematics eventually]" (goes into more detail right at the end, e.g., quotient types)
hn  commentary  discussion  video  talks  presentation  math  formal-methods  expert-experience  msr  frontier  state-of-art  proofs  rigor  education  higher-ed  optimism  prediction  lens  search  meta:research  speculation  exocortex  skunkworks  automation  research  math.NT  big-surf  software  parsimony  cost-benefit  intricacy  correctness  programming  pls  python  functional  haskell  heavyweights  research-program  review  reflection  multi  pdf  slides  oly  experiment  span-cover  git  vcs  teaching  impetus  academia  composition-decomposition  coupling-cohesion  database  trust  types  plt  lifts-projections  induction  critique  beauty  truth  elegance  aesthetics 
13 days ago by nhaliday
A Formal Verification of Rust's Binary Search Implementation
Part of the reason for this is that it’s quite complicated to apply mathematical tools to something unmathematical like a functionally unpure language (which, unfortunately, most programs tend to be written in). In mathematics, you don’t expect a variable to suddenly change its value, and it only gets more complicated when you have pointers to those dang things:

“Dealing with aliasing is one of the key challenges for the verification of imperative programs. For instance, aliases make it difficult to determine which abstractions are potentially affected by a heap update and to determine which locks need to be acquired to avoid data races.” 1

While there are whole logics focused on trying to tackle these problems, a master’s thesis wouldn’t be nearly enough time to model a formal Rust semantics on top of these, so I opted for a more straightforward solution: Simply make Rust a purely functional language!

Electrolysis: Simple Verification of Rust Programs via Functional Purification
If you know a bit about Rust, you may have noticed something about that quote in the previous section: There actually are no data races in (safe) Rust, precisely because there is no mutable aliasing. Either all references to some datum are immutable, or there is a single mutable reference. This means that mutability in Rust is much more localized than in most other imperative languages, and that it is sound to replace a destructive update like

p.x += 1
with a functional one – we know there’s no one else around observing p:

let p = Point { x: p.x + 1, ..p };
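
To make the purification idea concrete, here is a minimal compilable sketch (the Point type and function names are made up, not the thesis's actual translation): because the &mut borrow guarantees no other observers, the in-place update and the value-building version are interchangeable.

// Minimal sketch, assuming a hypothetical Point type; not the actual
// Electrolysis translation.
#[derive(Debug, Clone, Copy, PartialEq)]
struct Point {
    x: i32,
    y: i32,
}

// Imperative version: mutates p in place through a unique &mut borrow.
fn bump_imperative(p: &mut Point) {
    p.x += 1;
}

// "Purified" version: the &mut guarantee means no aliases exist, so the
// update can be modeled as building a fresh value instead.
fn bump_functional(p: Point) -> Point {
    Point { x: p.x + 1, ..p }
}

fn main() {
    let mut a = Point { x: 1, y: 2 };
    bump_imperative(&mut a);
    let b = bump_functional(Point { x: 1, y: 2 });
    assert_eq!(a, b);
    println!("{:?}", a);
}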
techtariat  plt  programming  formal-methods  rust  arrows  reduction  divide-and-conquer  correctness  project  state  functional  concurrency  direct-indirect  pls  examples  simplification-normalization 
9 weeks ago by nhaliday
CakeML
some interesting job openings in Sydney listed here
programming  pls  plt  functional  ocaml-sml  formal-methods  rigor  compilers  types  numerics  accuracy  estimate  research-program  homepage  anglo  jobs  tech  cool 
9 weeks ago by nhaliday
Modules Matter Most | Existential Type
note comment from gasche (significant OCaml contributor) critiquing modules vs typeclasses: https://existentialtype.wordpress.com/2011/04/16/modules-matter-most/#comment-735
I also think you’re unfair to type classes. You’re right that they are not completely satisfying as a modularity tool, but your presentation makes them sound bad in all aspects, which is certainly not true. The limitation of only having one instance per type may be a strong one, but it allows for a level of implicitness that is just nice. There is a reason why, for example, monads are relatively nice to use in Haskell, while using monads represented as modules in SML/OCaml programs is a real pain.

It’s a fact that type-classes are widely adopted and used in Haskell circles, while modules/functors are only used for relatively coarse-grained modularity in the ML community. It should tell you something useful about those two features: they’re something that current modules miss (or maybe a trade-off between flexibility and implicitness that plays against modules for “modularity in the small”), and it’s dishonest and rude to explain the adoption difference by “people don’t know any better”.
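
For what it's worth, Rust traits share the one-instance-per-type restriction of type classes, so they give a rough picture of the implicitness gasche is pointing at. A minimal sketch (using a simpler class than Monad, since Rust lacks higher-kinded types, and with made-up names): the compiler finds the instance at each call site, with no module/functor plumbing.

// Rough type-class analogue via Rust traits (made-up names). At most one
// impl per type, and the compiler picks it up implicitly at the call site.
trait Semigroup {
    fn combine(self, other: Self) -> Self;
}

// One canonical instance per type.
impl Semigroup for i32 {
    fn combine(self, other: Self) -> Self {
        self + other
    }
}

impl Semigroup for String {
    fn combine(self, other: Self) -> Self {
        self + &other
    }
}

// Generic code just states the bound; callers never pass an instance in,
// unlike threading a module/functor argument around.
fn combine_all<T: Semigroup + Clone>(unit: T, items: &[T]) -> T {
    items.iter().cloned().fold(unit, |acc, x| acc.combine(x))
}

fn main() {
    println!("{}", combine_all(0, &[1, 2, 3]));
    println!("{}", combine_all(String::new(), &["a".into(), "b".into()]));
}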
nibble  org:bleg  techtariat  programming  pls  plt  ocaml-sml  functional  haskell  types  composition-decomposition  coupling-cohesion  engineering  structure  intricacy  arrows  matching  network-structure  degrees-of-freedom  linearity  nonlinearity  span-cover  direction  multi  poast  expert-experience  blowhards  static-dynamic  protocol-metadata  cmu 
12 weeks ago by nhaliday
Laurence Tratt: What Challenges and Trade-Offs do Optimising Compilers Face?
Summary
It's important to be realistic: most people don't care about program performance most of the time. Modern computers are so fast that most programs run fast enough even with very slow language implementations. In that sense, I agree with Daniel's premise: optimising compilers are often unimportant. But “often” is often unsatisfying, as it is here. Users find themselves transitioning from not caring at all about performance to suddenly really caring, often in the space of a single day.

This, to me, is where optimising compilers come into their own: they mean that even fewer people need to care about program performance. And I don't mean that they get us from, say, 98 to 99 people out of 100 not needing to care: it's probably more like going from 80 to 99 people out of 100 not needing to care. This is, I suspect, more significant than it seems: it means that many people can go through an entire career without worrying about performance. Martin Berger reminded me of A. N. Whitehead’s wonderful line that “civilization advances by extending the number of important operations which we can perform without thinking about them” and this seems a classic example of that at work. Even better, optimising compilers are widely tested and thus generally much more reliable than the equivalent optimisations performed manually.

But I think that those of us who work on optimising compilers need to be honest with ourselves, and with users, about what performance improvement one can expect to see on a typical program. We have a tendency to pick the maximum possible improvement and talk about it as if it's the mean, when there's often a huge difference between the two. There are many good reasons for that gap, and I hope in this blog post I've at least made you think about some of the challenges and trade-offs that optimising compilers are subject to.

[1]
Most readers will be familiar with Knuth’s quip that “premature optimisation is the root of all evil.” However, I doubt that any of us have any real idea what proportion of time is spent in the average part of the average program. In such cases, I tend to assume that Pareto’s principle won't be too far wrong (i.e. that 80% of execution time is spent in 20% of code). In 1971, a study of Fortran programs by Knuth and others found that 50% of execution time was spent in 4% of code. I don't know of modern equivalents of this study, and for them to be truly useful, they'd have to be rather big. If anyone knows of something along these lines, please let me know!
techtariat  programming  compilers  performance  tradeoffs  cost-benefit  engineering  yak-shaving  pareto  plt  c(pp)  rust  golang  trivia  data  objektbuch  street-fighting  estimate  distribution  pro-rata 
july 2019 by nhaliday
C++ Core Guidelines
This document is a set of guidelines for using C++ well. The aim of this document is to help people to use modern C++ effectively. By “modern C++” we mean effective use of the ISO C++ standard (currently C++17, but almost all of our recommendations also apply to C++14 and C++11). In other words, what would you like your code to look like in 5 years’ time, given that you can start now? In 10 years’ time?

https://isocpp.github.io/CppCoreGuidelines/
“Within C++ is a smaller, simpler, safer language struggling to get out.” – Bjarne Stroustrup

...

The guidelines are focused on relatively higher-level issues, such as interfaces, resource management, memory management, and concurrency. Such rules affect application architecture and library design. Following the rules will lead to code that is statically type safe, has no resource leaks, and catches many more programming logic errors than is common in code today. And it will run fast - you can afford to do things right.

We are less concerned with low-level issues, such as naming conventions and indentation style. However, no topic that can help a programmer is out of bounds.

Our initial set of rules emphasizes safety (of various forms) and simplicity. They may very well be too strict. We expect to have to introduce more exceptions to better accommodate real-world needs. We also need more rules.

...

The rules are designed to be supported by an analysis tool. Violations of rules will be flagged with references (or links) to the relevant rule. We do not expect you to memorize all the rules before trying to write code.

contrary:
https://aras-p.info/blog/2018/12/28/Modern-C-Lamentations/
This will be a long wall of text, and kinda random! My main points are:
1. C++ compile times are important,
2. Non-optimized build performance is important,
3. Cognitive load is important. I don’t expand much on this here, but if a programming language or a library makes me feel stupid, then I’m less likely to use it or like it. C++ does that a lot :)
programming  engineering  pls  best-practices  systems  c(pp)  guide  metabuch  objektbuch  reference  cheatsheet  elegance  frontier  libraries  intricacy  advanced  advice  recommendations  big-picture  novelty  lens  philosophy  state  error  types  concurrency  memory-management  performance  abstraction  plt  compilers  expert-experience  multi  checking  devtools  flux-stasis  safety  system-design  techtariat  time  measure  dotnet  comparison  examples  build-packaging  thinking  worse-is-better/the-right-thing  cost-benefit  tradeoffs  essay  commentary  oop  correctness  computer-memory  error-handling  resources-effects  latency-throughput 
june 2019 by nhaliday
What makes Java easier to parse than C? - Stack Overflow
Parsing C++ is getting hard. Parsing Java is getting to be just as hard

cf the Linked questions too, lotsa good stuff
q-n-a  stackex  compilers  pls  plt  jvm  c(pp)  intricacy  syntax  automata-languages  cost-benefit  incentives  legacy 
may 2019 by nhaliday
oop - Functional programming vs Object Oriented programming - Stack Overflow
When you anticipate a different kind of software evolution:
- Object-oriented languages are good when you have a fixed set of operations on things, and as your code evolves, you primarily add new things. This can be accomplished by adding new classes which implement existing methods, and the existing classes are left alone.
- Functional languages are good when you have a fixed set of things, and as your code evolves, you primarily add new operations on existing things. This can be accomplished by adding new functions which compute with existing data types, and the existing functions are left alone.

When evolution goes the wrong way, you have problems:
- Adding a new operation to an object-oriented program may require editing many class definitions to add a new method.
- Adding a new kind of thing to a functional program may require editing many function definitions to add a new case.

This problem has been well known for many years; in 1998, Phil Wadler dubbed it the "expression problem". Although some researchers think that the expression problem can be addressed with such language features as mixins, a widely accepted solution has yet to hit the mainstream.

What are the typical problem definitions where functional programming is a better choice?

Functional languages excel at manipulating symbolic data in tree form. A favorite example is compilers, where source and intermediate languages change seldom (mostly the same things), but compiler writers are always adding new translations and code improvements or optimizations (new operations on things). Compilation and translation more generally are "killer apps" for functional languages.
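
A compact way to see the two axes in Rust (an illustrative sketch with made-up types, not from the answer): the enum-plus-match style makes a new operation a local change but a new variant a cross-cutting one, while the trait-based style inverts that.

// Expression-problem sketch (made-up types). Enum + match: adding an
// operation is one new function; adding a variant touches every match.
enum Shape {
    Circle { r: f64 },
    Square { side: f64 },
    // Adding Triangle here would force edits in every match below.
}

fn area(s: &Shape) -> f64 {
    match s {
        Shape::Circle { r } => std::f64::consts::PI * r * r,
        Shape::Square { side } => side * side,
    }
}

// A second operation over the same fixed set of things: purely additive.
fn perimeter(s: &Shape) -> f64 {
    match s {
        Shape::Circle { r } => 2.0 * std::f64::consts::PI * r,
        Shape::Square { side } => 4.0 * side,
    }
}

// Trait-based flip: the set of operations is fixed by the trait, and new
// kinds of shape are added as new impls without touching existing code.
trait HasArea {
    fn area(&self) -> f64;
}

struct Circle { r: f64 }

impl HasArea for Circle {
    fn area(&self) -> f64 {
        std::f64::consts::PI * self.r * self.r
    }
}

fn main() {
    let s = Shape::Square { side: 2.0 };
    println!("{} {}", area(&s), perimeter(&s));
    println!("{}", Circle { r: 1.0 }.area());
}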
q-n-a  stackex  programming  engineering  nitty-gritty  comparison  best-practices  cost-benefit  functional  data-structures  arrows  flux-stasis  atoms  compilers  examples  pls  plt  oop  types 
may 2019 by nhaliday
Cilk Hub
looks like this is run by Billy Moses and Leiserson (the L in CLRS)
mit  tools  programming  pls  plt  systems  c(pp)  libraries  compilers  performance  homepage  concurrency 
may 2019 by nhaliday
maintenance - Why do dynamic languages make it more difficult to maintain large codebases? - Software Engineering Stack Exchange
Now here is the key point I have been building up to: there is a strong correlation between a language being dynamically typed and a language also lacking all the other facilities that make lowering the cost of maintaining a large codebase easier, and that is the key reason why it is more difficult to maintain a large codebase in a dynamic language. And similarly there is a correlation between a language being statically typed and having facilities that make programming in the larger easier.
programming  worrydream  plt  hmm  comparison  pls  carmack  techtariat  types  engineering  productivity  pro-rata  input-output  correlation  best-practices  composition-decomposition  error  causation  confounding  devtools  jvm  scala  open-closed  cost-benefit  static-dynamic  design  system-design 
may 2019 by nhaliday
Dgsh – Directed graph shell | Hacker News
I've worked with and looked at a lot of data-processing helpers: tools that try to help you build data pipelines, for the sake of performance, reproducibility, or simply code uniformity.
What I've found so far: most tools that invent a new language or try to cram complex processes into less-suited syntactical environments are not much loved.

...

I'll give dgsh a try. The tool-reuse approach and the UNIX spirit seem nice. But my initial impression of the "C code metrics" example from the site is mixed: it reminds me of awk, about which one of the authors said that it's a beautiful language, but if your programs get longer than a hundred lines, you might want to switch to something else.

Two libraries which have a great grip on the plumbing aspect of data-processing systems are airflow and luigi. They are Python libraries, and with them you have a concise syntax and basically all Python libraries, plus non-Python tools with a command-line interface, at your fingertips.

I am curious, what kind of process orchestration tools people use and can recommend?

--

Exactly our experience too, from complex machine learning workflows in various aspects of drug discovery.
We basically did not really find any of the popular DSL-based bioinformatics pipeline tools (snakemake, bpipe etc) to fit the bill. Nextflow came close, but in fact allows quite some custom code too.

What worked for us was to use Spotify's Luigi, which is a python library rather than DSL.

The only thing was that we had to develop a flow-based inspired API on top of Luigi's more functional programming based one, in order to make defining dependencies fluent and easy enough to specify for our complex workflows.

Our flow-based inspired Luigi API (SciLuigi) for complex workflows, is available at:

https://github.com/pharmbio/sciluigi

--

We have measured many of the examples against the use of temporary files and the web report one against (single-threaded) implementations in Perl and Java. In almost all cases dgsh takes less wall clock time, but often consumes more CPU resources.
commentary  project  programming  terminal  worrydream  pls  plt  unix  hn  graphs  tools  devtools  let-me-see  composition-decomposition  yak-shaving  workflow  exocortex  hmm  cool  software  desktop  sci-comp  stock-flow  performance  comparison  links  libraries  python 
january 2017 by nhaliday
Ask HN: What is the state of C++ vs. Rust? | Hacker News
https://www.reddit.com/r/rust/comments/2mwpie/what_are_the_advantages_of_rust_over_modern_c/
https://www.reddit.com/r/rust/comments/bya8k6/programming_with_rust_vs_c_c/

Writing C++ from a Rust developers perspective: https://www.reddit.com/r/cpp/comments/b5wkw7/writing_c_from_a_rust_developers_perspective/

https://www.reddit.com/r/rust/comments/1gs93k/rust_for_game_development/
https://www.reddit.com/r/rust/comments/acjcbp/rust_2019_beat_c/
https://www.reddit.com/r/rust/comments/62sewn/did_you_ever_hear_the_tragedy_of_darth_stroustrup/

Rust from C++ perspective: https://www.reddit.com/r/cpp/comments/611811/have_you_used_rust_do_you_prefer_it_over_modern_c/

mostly discussion of templates:
What can C++ do that Rust cant?: https://www.reddit.com/r/rust/comments/5ti9fc/what_can_c_do_that_rust_cant/
Templates are a big part of C++; it's kind of unfair to exclude them. Type-level integers and variadic templates are not to be underestimated.
...
C++ has the benefit of many competing compilers, each with some of the best compiler architects in the industry (and the backing of extremely large companies). Rust so far has only rustc as a viable compiler.
--
A language specification.
--
Rust has principled overloading, while C++ has wild-wild-west overloading.

Ok rustaceans, here's a question for you. Is there anything that C++ templates can do that you can't do in rust?: https://www.reddit.com/r/rust/comments/7q7nn0/ok_rustaceans_heres_a_question_for_you_is_there/
I think you won't get the best answer about templates in the Rust community. People don't like them here, and there's... not an insignificant amount of FUD going around.

You can do most things with templates, and I think they're an elegant solution to the problem of creating generic code in an un-GC'd language. However, C++'s templates are hard to understand without the context of the design of C++'s types. C++'s class system is about the closest thing you can get to a duck typed ML module system.

I dunno, I'm not sure exactly where I'm going with this. There's a lot about the philosophy of C++'s type system that I think would be good to talk about, but it really requires a full on blog post or a talk or something. I don't think you'll get a good answer on reddit. Learning an ML will get you pretty close to understanding the philosophy behind the C++ type system though - functors are equivalent to templates, modules equivalent to classes.
--
...
You're making a greater distinction than is necessary. Aside from/given const generics, SFINAE and impl where clauses have similar power, and monomorphization is substitution.

Both C++ templates and Rust generics are turing-complete (via associated types), the difference lies in the explicitness of bounds and ad-hoc polymorphism.
In Rust we have implicit bounds out of the "WF" (well-formed) requirements of signatures, so you can imagine C++ as having WF over the entire body of a function (even if I don't think current-generation C++ compilers take advantage of this).

While the template expansion may seem more like a macro-by-example, it's still type/value-driven, just in a more ad-hoc and implicit way.

https://gist.github.com/brendanzab/9220415
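
To make the explicit-bounds point concrete, a tiny sketch (mine, not from the thread): the equivalent C++ template would state no requirement on its type parameter and only surface the "can be added to itself" requirement at instantiation, whereas the Rust generic must declare the bound up front and is then monomorphized per concrete type, much like a template.

// Sketch of explicit bounds vs. duck-typed templates (made-up example).
// The bound below is the whole contract; there is no per-instantiation
// body check as with a C++ template.
use std::ops::Add;

fn twice<T: Add<Output = T> + Copy>(x: T) -> T {
    x + x
}

fn main() {
    // Each call monomorphizes twice() for the concrete type, much like
    // template instantiation.
    println!("{}", twice(21));
    println!("{}", twice(1.5f64));
    // twice("no"); // rejected at the call site: &str doesn't satisfy Add
}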
discussion  hn  summary  comparison  pls  rust  pragmatic  c(pp)  performance  safety  memory-management  build-packaging  types  community  culture  coupling-cohesion  oop  multi  reddit  social  engineering  games  memes(ew)  troll  scifi-fantasy  plt  chart  paste  computer-memory  code-organizing  ecosystem  q-n-a 
october 2016 by nhaliday
Rob Pike: Notes on Programming in C
Issues of typography
...
Sometimes they care too much: pretty printers mechanically produce pretty output that accentuates irrelevant detail in the program, which is as sensible as putting all the prepositions in English text in bold font. Although many people think programs should look like the Algol-68 report (and some systems even require you to edit programs in that style), a clear program is not made any clearer by such presentation, and a bad program is only made laughable.
Typographic conventions consistently held are important to clear presentation, of course - indentation is probably the best known and most useful example - but when the ink obscures the intent, typography has taken over.

...

Naming
...
Finally, I prefer minimum-length but maximum-information names, and then let the context fill in the rest. Globals, for instance, typically have little context when they are used, so their names need to be relatively evocative. Thus I say maxphysaddr (not MaximumPhysicalAddress) for a global variable, but np not NodePointer for a pointer locally defined and used. This is largely a matter of taste, but taste is relevant to clarity.

...

Pointers
C is unusual in that it allows pointers to point to anything. Pointers are sharp tools, and like any such tool, used well they can be delightfully productive, but used badly they can do great damage (I sunk a wood chisel into my thumb a few days before writing this). Pointers have a bad reputation in academia, because they are considered too dangerous, dirty somehow. But I think they are powerful notation, which means they can help us express ourselves clearly.
Consider: When you have a pointer to an object, it is a name for exactly that object and no other.

...

Comments
A delicate matter, requiring taste and judgement. I tend to err on the side of eliminating comments, for several reasons. First, if the code is clear, and uses good type names and variable names, it should explain itself. Second, comments aren't checked by the compiler, so there is no guarantee they're right, especially after the code is modified. A misleading comment can be very confusing. Third, the issue of typography: comments clutter code.
But I do comment sometimes. Almost exclusively, I use them as an introduction to what follows.

...

Complexity
Most programs are too complicated - that is, more complex than they need to be to solve their problems efficiently. Why? Mostly it's because of bad design, but I will skip that issue here because it's a big one. But programs are often complicated at the microscopic level, and that is something I can address here.
Rule 1. You can't tell where a program is going to spend its time. Bottlenecks occur in surprising places, so don't try to second guess and put in a speed hack until you've proven that's where the bottleneck is.

Rule 2. Measure. Don't tune for speed until you've measured, and even then don't unless one part of the code overwhelms the rest.

Rule 3. Fancy algorithms are slow when n is small, and n is usually small. Fancy algorithms have big constants. Until you know that n is frequently going to be big, don't get fancy. (Even if n does get big, use Rule 2 first.) For example, binary trees are always faster than splay trees for workaday problems.

Rule 4. Fancy algorithms are buggier than simple ones, and they're much harder to implement. Use simple algorithms as well as simple data structures.

The following data structures are a complete list for almost all practical programs:

- array
- linked list
- hash table
- binary tree

Of course, you must also be prepared to collect these into compound data structures. For instance, a symbol table might be implemented as a hash table containing linked lists of arrays of characters.
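
Pike's compound-structure example, transliterated into a small Rust sketch (Vec standing in for both the array and the linked lists; the hash function and sizes are arbitrary, and the names are mine):

// Compound-structure sketch: a symbol table as a hash table whose buckets
// are lists of strings. Vec stands in for Pike's linked lists; the toy
// hash and bucket count are arbitrary.
const NBUCKETS: usize = 64;

struct SymbolTable {
    buckets: Vec<Vec<String>>,
}

impl SymbolTable {
    fn new() -> Self {
        SymbolTable { buckets: vec![Vec::new(); NBUCKETS] }
    }

    // Toy hash: sum of bytes modulo the bucket count.
    fn bucket(&self, name: &str) -> usize {
        name.bytes().map(usize::from).sum::<usize>() % NBUCKETS
    }

    fn insert(&mut self, name: &str) {
        let i = self.bucket(name);
        if !self.buckets[i].iter().any(|s| s == name) {
            self.buckets[i].push(name.to_string());
        }
    }

    fn contains(&self, name: &str) -> bool {
        self.buckets[self.bucket(name)].iter().any(|s| s == name)
    }
}

fn main() {
    let mut table = SymbolTable::new();
    table.insert("maxphysaddr");
    assert!(table.contains("maxphysaddr"));
    assert!(!table.contains("np"));
}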
Rule 5. Data dominates. If you've chosen the right data structures and organized things well, the algorithms will almost always be self-evident. Data structures, not algorithms, are central to programming. (See The Mythical Man-Month: Essays on Software Engineering by F. P. Brooks, page 102.)

Rule 6. There is no Rule 6.

Programming with data.
...
One of the reasons data-driven programs are not common, at least among beginners, is the tyranny of Pascal. Pascal, like its creator, believes firmly in the separation of code and data. It therefore (at least in its original form) has no ability to create initialized data. This flies in the face of the theories of Turing and von Neumann, which define the basic principles of the stored-program computer. Code and data are the same, or at least they can be. How else can you explain how a compiler works? (Functional languages have a similar problem with I/O.)

Function pointers
Another result of the tyranny of Pascal is that beginners don't use function pointers. (You can't have function-valued variables in Pascal.) Using function pointers to encode complexity has some interesting properties.
Some of the complexity is passed to the routine pointed to. The routine must obey some standard protocol - it's one of a set of routines invoked identically - but beyond that, what it does is its business alone. The complexity is distributed.

There is this idea of a protocol, in that all functions used similarly must behave similarly. This makes for easy documentation, testing, growth and even making the program run distributed over a network - the protocol can be encoded as remote procedure calls.

I argue that clear use of function pointers is the heart of object-oriented programming. Given a set of operations you want to perform on data, and a set of data types you want to respond to those operations, the easiest way to put the program together is with a group of function pointers for each type. This, in a nutshell, defines class and method. The O-O languages give you more of course - prettier syntax, derived types and so on - but conceptually they provide little extra.
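
A small transliteration of the idea (made-up types; Rust fn pointers standing in for C's): one table of operations per kind of thing, and the driver code only ever talks to the protocol.

// Function-pointer sketch (made-up types): a table of fn pointers per
// "type" is, in effect, a hand-rolled class; picking the table picks the
// behavior, and the driver code stays generic.
fn describe_celsius(v: f64) -> String { format!("{} degrees C", v) }
fn to_fahrenheit(v: f64) -> f64 { v * 9.0 / 5.0 + 32.0 }

fn describe_meters(v: f64) -> String { format!("{} m", v) }
fn to_centimeters(v: f64) -> f64 { v * 100.0 }

// The "class": a group of function pointers sharing one protocol.
struct Ops {
    describe: fn(f64) -> String,
    convert: fn(f64) -> f64,
}

const CELSIUS: Ops = Ops { describe: describe_celsius, convert: to_fahrenheit };
const METERS: Ops = Ops { describe: describe_meters, convert: to_centimeters };

// Driver code obeys the protocol and never inspects the concrete kind.
fn report(ops: &Ops, value: f64) {
    println!("{} -> {}", (ops.describe)(value), (ops.convert)(value));
}

fn main() {
    report(&CELSIUS, 20.0);
    report(&METERS, 3.5);
}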

...

Include files
Simple rule: include files should never include include files. If instead they state (in comments or implicitly) what files they need to have included first, the problem of deciding which files to include is pushed to the user (programmer) but in a way that's easy to handle and that, by construction, avoids multiple inclusions. Multiple inclusions are a bane of systems programming. It's not rare to have files included five or more times to compile a single C source file. The Unix /usr/include/sys stuff is terrible this way.
There's a little dance involving #ifdef's that can prevent a file being read twice, but it's usually done wrong in practice - the #ifdef's are in the file itself, not the file that includes it. The result is often thousands of needless lines of code passing through the lexical analyzer, which is (in good compilers) the most expensive phase.

Just follow the simple rule.

cf https://stackoverflow.com/questions/1101267/where-does-the-compiler-spend-most-of-its-time-during-parsing
First, I don't think it actually is true: in many compilers, most time is not spent in lexing source code. For example, in C++ compilers (e.g. g++), most time is spent in semantic analysis, in particular in overload resolution (trying to find out what implicit template instantiations to perform). Also, in C and C++, most time is often spent in optimization (creating graph representations of individual functions or the whole translation unit, and then running long algorithms on these graphs).

When comparing lexical and syntactical analysis, it may indeed be the case that lexical analysis is more expensive. This is because both use state machines, i.e. there is a fixed number of actions per element, but the number of elements is much larger in lexical analysis (characters) than in syntactical analysis (tokens).

https://news.ycombinator.com/item?id=7728207
programming  systems  philosophy  c(pp)  summer-2014  intricacy  engineering  rhetoric  contrarianism  diogenes  parsimony  worse-is-better/the-right-thing  data-structures  list  algorithms  stylized-facts  essay  ideas  performance  functional  state  pls  oop  gotchas  blowhards  duplication  compilers  syntax  lexical  checklists  metabuch  lens  notation  thinking  neurons  guide  pareto  heuristic  time  cost-benefit  multi  q-n-a  stackex  plt  hn  commentary  minimalism  techtariat  rsc  writing  technical-writing  cracker-prog  code-organizing  grokkability  protocol-metadata  direct-indirect  grokkability-clarity  latency-throughput 
august 2014 by nhaliday

