intricacy   166

What is your tale of lasagna code? (Code with too many layers) - DEV Community 👩‍💻👨‍💻
"In the one and only true way. The object-oriented version of 'Spaghetti code' is, of course, 'Lasagna code'. (Too many layers)." - Roberto Waltman
org:com  techtariat  quotes  aphorism  oop  jvm  programming  abstraction  intricacy  direct-indirect  engineering  structure  tip-of-tongue  degrees-of-freedom  coupling-cohesion  scala  error 
3 days ago by nhaliday
Panel: Systems Programming in 2014 and Beyond | Lang.NEXT 2014 | Channel 9
- Bjarne Stroustrup, Niko Matsakis, Andrei Alexandrescu, Rob Pike
- 2014 so pretty outdated but rare to find a discussion with people like this together
- pretty sure Jonathan Blow asked a couple questions
- Rob Pike compliments Rust at one point. Also kinda softly rags on dynamic typing at one point ("unit testing is what they have instead of static types").
video  presentation  debate  programming  pls  c(pp)  systems  os  rust  d-lang  golang  computer-memory  legacy  devtools  formal-methods  concurrency  compilers  syntax  parsimony  google  intricacy  thinking  cost-benefit  degrees-of-freedom  facebook  performance  people  rsc  cracker-prog  critique  types  checking  api  flux-stasis  engineering  time  wire-guided  worse-is-better/the-right-thing 
5 days ago by nhaliday
The Law of Leaky Abstractions – Joel on Software
[TCP/IP example]

All non-trivial abstractions, to some degree, are leaky.

...

- Something as simple as iterating over a large two-dimensional array can have radically different performance if you do it horizontally rather than vertically, depending on the “grain of the wood” — one direction may result in vastly more page faults than the other direction, and page faults are slow. Even assembly programmers are supposed to be allowed to pretend that they have a big flat address space, but virtual memory means it’s really just an abstraction, which leaks when there’s a page fault and certain memory fetches take way more nanoseconds than other memory fetches.
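Joel's page-fault example is easy to reproduce: for a row-major array, the only difference between the fast and slow traversal is the loop order. A minimal sketch (the layout and function names are mine, not from the article):

```cpp
#include <vector>
#include <cstddef>

// Sum a (rows x cols) matrix stored row-major, touching memory sequentially.
long long sum_row_major(const std::vector<long long>& m,
                        std::size_t rows, std::size_t cols) {
    long long s = 0;
    for (std::size_t r = 0; r < rows; ++r)
        for (std::size_t c = 0; c < cols; ++c)
            s += m[r * cols + c];   // stride-1 access: cache- and TLB-friendly
    return s;
}

// Same sum, column-first: each step jumps `cols` elements, so on a large
// matrix successive accesses can land on different pages -- the leak Joel
// is describing.
long long sum_col_major(const std::vector<long long>& m,
                        std::size_t rows, std::size_t cols) {
    long long s = 0;
    for (std::size_t c = 0; c < cols; ++c)
        for (std::size_t r = 0; r < rows; ++r)
            s += m[r * cols + c];
    return s;
}
```

Both functions compute the same value; only the memory-access pattern (and hence the page-fault and cache behavior on large inputs) differs.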

- The SQL language is meant to abstract away the procedural steps that are needed to query a database, instead allowing you to define merely what you want and let the database figure out the procedural steps to query it. But in some cases, certain SQL queries are thousands of times slower than other logically equivalent queries. A famous example of this is that some SQL servers are dramatically faster if you specify “where a=b and b=c and a=c” than if you only specify “where a=b and b=c” even though the result set is the same. You’re not supposed to have to care about the procedure, only the specification. But sometimes the abstraction leaks and causes horrible performance and you have to break out the query plan analyzer and study what it did wrong, and figure out how to make your query run faster.

...

- C++ string classes are supposed to let you pretend that strings are first-class data. They try to abstract away the fact that strings are hard and let you act as if they were as easy as integers. Almost all C++ string classes overload the + operator so you can write s + "bar" to concatenate. But you know what? No matter how hard they try, there is no C++ string class on Earth that will let you type "foo" + "bar", because string literals in C++ are always char*'s, never strings. The abstraction has sprung a leak that the language doesn't let you plug. (Amusingly, the history of the evolution of C++ over time can be described as a history of trying to plug the leaks in the string abstraction. Why they couldn't just add a native string class to the language itself eludes me at the moment.)
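The leak is visible in a few lines of modern C++ (using std::string rather than the pre-STL string classes Joel is describing; the helper names are mine): s + "bar" compiles, "foo" + "bar" does not, and promoting one literal is the usual plug.

```cpp
#include <string>

// s + "bar" compiles because std::string overloads operator+ for
// (const string&, const char*).
std::string concat_with_class(const std::string& s) { return s + "bar"; }

// "foo" + "bar" would NOT compile: both literals decay to const char*,
// and C++ defines no operator+ for two pointers. Wrapping one side in
// std::string first is the standard way to plug this particular leak.
std::string concat_literals() { return std::string("foo") + "bar"; }
```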

- And you can’t drive as fast when it’s raining, even though your car has windshield wipers and headlights and a roof and a heater, all of which protect you from caring about the fact that it’s raining (they abstract away the weather), but lo, you have to worry about hydroplaning (or aquaplaning in England) and sometimes the rain is so strong you can’t see very far ahead so you go slower in the rain, because the weather can never be completely abstracted away, because of the law of leaky abstractions.

One reason the law of leaky abstractions is problematic is that it means that abstractions do not really simplify our lives as much as they were meant to. When I'm training someone to be a C++ programmer, it would be nice if I never had to teach them about char*'s and pointer arithmetic. It would be nice if I could go straight to STL strings. But one day they'll write the code "foo" + "bar", and truly bizarre things will happen, and then I'll have to stop and teach them all about char*'s anyway.

...

The law of leaky abstractions means that whenever somebody comes up with a wizzy new code-generation tool that is supposed to make us all ever-so-efficient, you hear a lot of people saying “learn how to do it manually first, then use the wizzy tool to save time.” Code generation tools which pretend to abstract out something, like all abstractions, leak, and the only way to deal with the leaks competently is to learn about how the abstractions work and what they are abstracting. So the abstractions save us time working, but they don’t save us time learning.
techtariat  org:com  working-stiff  essay  programming  cs  software  abstraction  worrydream  thinking  intricacy  degrees-of-freedom  networking  protocol  examples  traces  no-go  volo-avolo  tradeoffs  c(pp)  pls  strings  dbs  transportation  driving  analogy  aphorism  learning  paradox  systems  elegance  nitty-gritty  concrete  cracker-prog 
12 days ago by nhaliday
Cleaner, more elegant, and harder to recognize | The Old New Thing
Really easy
Writing bad error-code-based code
Writing bad exception-based code

Hard
Writing good error-code-based code

Really hard
Writing good exception-based code

--

Really easy
Recognizing that error-code-based code is badly-written
Recognizing the difference between bad error-code-based code and
not-bad error-code-based code.

Hard
Recognizing that error-code-based code is not badly-written

Really hard
Recognizing that exception-based code is badly-written
Recognizing that exception-based code is not badly-written
Recognizing the difference between bad exception-based code
and not-bad exception-based code
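Chen's contrast can be sketched with a toy parser written both ways (a hypothetical example, not from the article). In the error-code version every failure path is explicit at each return; in the exception version the happy path reads cleanly, but any line can transfer control out of the function, which is exactly why bad exception-based code is hard to recognize:

```cpp
#include <stdexcept>
#include <string>

// Error-code style: every failure is a visible, explicit return.
bool parse_uint_ec(const std::string& s, int* out) {
    if (s.empty()) return false;
    int v = 0;
    for (char c : s) {
        if (c < '0' || c > '9') return false;  // explicit check, explicit exit
        v = v * 10 + (c - '0');
    }
    *out = v;
    return true;
}

// Exception style: cleaner to read, but the failure paths are invisible
// at the call site -- the caller can't tell by inspection which lines throw.
int parse_uint_ex(const std::string& s) {
    if (s.empty()) throw std::invalid_argument("empty input");
    int v = 0;
    for (char c : s) {
        if (c < '0' || c > '9') throw std::invalid_argument("not a digit");
        v = v * 10 + (c - '0');
    }
    return v;
}
```

The two do the same work; the difference is entirely in how recognizable the error handling is, which is Chen's point.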

https://ra3s.com/wordpress/dysfunctional-programming/2009/07/15/return-code-vs-exception-handling/
https://nedbatchelder.com/blog/200501/more_exception_handling_debate.html
techtariat  org:com  microsoft  working-stiff  pragmatic  carmack  error  error-handling  programming  rhetoric  debate  critique  pls  search  structure  cost-benefit  comparison  summary  intricacy  certificates-recognition  commentary  multi  contrarianism  correctness  quality  code-dive  cracker-prog 
12 days ago by nhaliday
c++ - Which is faster: Stack allocation or Heap allocation - Stack Overflow
On my machine, using g++ 3.4.4 on Windows, I get "0 clock ticks" for both stack and heap allocation for anything less than 100000 allocations, and even then I get "0 clock ticks" for stack allocation and "15 clock ticks" for heap allocation. When I measure 10,000,000 allocations, stack allocation takes 31 clock ticks and heap allocation takes 1562 clock ticks.

so maybe around 100x difference? what does that work out to in terms of total workload?
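A microbenchmark in the spirit of the quoted answer can be sketched as below; the loop count is arbitrary and the absolute numbers vary wildly with machine, compiler, and optimization level, so treat it as illustrative only:

```cpp
#include <chrono>

// Prevents the optimizer from deleting the loops outright.
volatile int sink;

// "Allocating" a local is just a stack-pointer bump (often free after inlining).
long long time_stack_us(int n) {
    auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < n; ++i) {
        int x = i;
        sink = x;
    }
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration_cast<std::chrono::microseconds>(t1 - t0).count();
}

// Each iteration calls into the general-purpose allocator and back.
long long time_heap_us(int n) {
    auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < n; ++i) {
        int* x = new int(i);
        sink = *x;
        delete x;
    }
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration_cast<std::chrono::microseconds>(t1 - t0).count();
}
```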

hmm:
http://vlsiarch.eecs.harvard.edu/wp-content/uploads/2017/02/asplos17mallacc.pdf
Recent work shows that dynamic memory allocation consumes nearly 7% of all cycles in Google datacenters.

That's not too bad actually. Seems like I shouldn't worry about shifting from heap to stack/globals unless profiling says it's important, particularly for non-oly stuff.

edit: Actually, a ~100x factor on 7% of all cycles is pretty high; for allocation-heavy code it could increase the constant factor by almost an order of magnitude.
q-n-a  stackex  programming  c(pp)  systems  memory-management  performance  intricacy  comparison  benchmarks  data  objektbuch  empirical  google  papers  nibble  time  measure  pro-rata  distribution  multi  pdf  oly-programming  computer-memory 
20 days ago by nhaliday
C++ Core Guidelines
This document is a set of guidelines for using C++ well. The aim of this document is to help people to use modern C++ effectively. By “modern C++” we mean effective use of the ISO C++ standard (currently C++17, but almost all of our recommendations also apply to C++14 and C++11). In other words, what would you like your code to look like in 5 years’ time, given that you can start now? In 10 years’ time?

https://isocpp.github.io/CppCoreGuidelines/
“Within C++ is a smaller, simpler, safer language struggling to get out.” – Bjarne Stroustrup

...

The guidelines are focused on relatively higher-level issues, such as interfaces, resource management, memory management, and concurrency. Such rules affect application architecture and library design. Following the rules will lead to code that is statically type safe, has no resource leaks, and catches many more programming logic errors than is common in code today. And it will run fast - you can afford to do things right.
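The "no resource leaks" claim rests on RAII: own every resource with a type whose destructor releases it, so cleanup happens on every exit path, including exceptions. A minimal sketch in that spirit (Widget is a stand-in type of mine, not from the guidelines):

```cpp
#include <memory>
#include <vector>

struct Widget { int id; };

// Guideline R.11: avoid calling new and delete explicitly; let a smart
// pointer own the allocation so it cannot leak on any return path.
std::unique_ptr<Widget> make_widget(int id) {
    return std::make_unique<Widget>(Widget{id});
}

// Owners are destroyed automatically when the vector goes out of scope;
// no manual cleanup, no leak even if an exception unwinds past the caller.
int total_ids(const std::vector<std::unique_ptr<Widget>>& ws) {
    int sum = 0;
    for (const auto& w : ws) sum += w->id;
    return sum;
}
```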

We are less concerned with low-level issues, such as naming conventions and indentation style. However, no topic that can help a programmer is out of bounds.

Our initial set of rules emphasize safety (of various forms) and simplicity. They may very well be too strict. We expect to have to introduce more exceptions to better accommodate real-world needs. We also need more rules.

...

The rules are designed to be supported by an analysis tool. Violations of rules will be flagged with references (or links) to the relevant rule. We do not expect you to memorize all the rules before trying to write code.

contrary:
https://aras-p.info/blog/2018/12/28/Modern-C-Lamentations/
This will be a long wall of text, and kinda random! My main points are:
1. C++ compile times are important,
2. Non-optimized build performance is important,
3. Cognitive load is important. I don’t expand much on this here, but if a programming language or a library makes me feel stupid, then I’m less likely to use it or like it. C++ does that a lot :)
programming  engineering  pls  best-practices  systems  c(pp)  guide  metabuch  objektbuch  reference  cheatsheet  elegance  frontier  libraries  intricacy  advanced  advice  recommendations  big-picture  novelty  lens  philosophy  state  error  types  concurrency  memory-management  performance  abstraction  plt  compilers  expert-experience  multi  checking  devtools  flux-stasis  safety  system-design  techtariat  time  measure  dotnet  comparison  examples  build-packaging  thinking  worse-is-better/the-right-thing  cost-benefit  tradeoffs  essay  commentary  oop  correctness  computer-memory  error-handling  resources-effects 
25 days ago by nhaliday
Regex cheatsheet
Many programs use regular expressions to find & replace text. However, each tends to come with its own flavor.

You can probably expect most modern software and programming languages to be using some variation of the Perl flavor, "PCRE"; however, command-line tools (grep, less, ...) will often use the POSIX flavor (sometimes with an extended variant, e.g. egrep or sed -r). Vim also comes with its own syntax (a superset of what Vi accepts).

This cheatsheet lists the respective syntax of each flavor, and the software that uses it.
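Conveniently, std::regex exposes several of the flavors the cheatsheet compares as grammar flags, so the divergence can be demonstrated in one helper (a sketch; the function is mine). The same "one or more digits" idea is spelled [0-9]+ in ECMAScript and POSIX extended, but [0-9][0-9]* in POSIX basic, where + is a literal character:

```cpp
#include <regex>
#include <string>

// Returns true if `pattern`, interpreted under the given grammar flavor
// (std::regex::ECMAScript, std::regex::basic, std::regex::extended, ...),
// matches anywhere in `text`.
bool matches_somewhere(const std::string& text,
                       std::regex_constants::syntax_option_type flavor,
                       const std::string& pattern) {
    return std::regex_search(text, std::regex(pattern, flavor));
}
```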

accidental complexity galore
techtariat  reference  cheatsheet  documentation  howto  yak-shaving  editors  strings  syntax  examples  crosstab  objektbuch  python  comparison  gotchas  tip-of-tongue  automata-languages  pls  trivia  properties  libraries  nitty-gritty  intricacy  degrees-of-freedom 
26 days ago by nhaliday
What every computer scientist should know about floating-point arithmetic
Floating-point arithmetic is considered an esoteric subject by many people. This is rather surprising, because floating-point is ubiquitous in computer systems: Almost every language has a floating-point datatype; computers from PCs to supercomputers have floating-point accelerators; most compilers will be called upon to compile floating-point algorithms from time to time; and virtually every operating system must respond to floating-point exceptions such as overflow. This paper presents a tutorial on the aspects of floating-point that have a direct impact on designers of computer systems. It begins with background on floating-point representation and rounding error, continues with a discussion of the IEEE floating point standard, and concludes with examples of how computer system builders can better support floating point.

https://stackoverflow.com/questions/2729637/does-epsilon-really-guarantees-anything-in-floating-point-computations
"you must use an epsilon when dealing with floats" is a knee-jerk reaction of programmers with a superficial understanding of floating-point computations, for comparisons in general (not only to zero).

This is usually unhelpful because it doesn't tell you how to minimize the propagation of rounding errors, it doesn't tell you how to avoid cancellation or absorption problems, and even when your problem is indeed related to the comparison of two floats, it doesn't tell you what value of epsilon is right for what you are doing.
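One concrete alternative to a blanket absolute epsilon is a relative comparison, which scales the tolerance with the magnitude of the operands (a sketch, not a universal recipe; the right tolerance still depends on the computation and its conditioning):

```cpp
#include <cmath>

// Relative comparison: a fixed absolute epsilon that works near 1.0 is
// useless near 1e16 (where ulps are ~2.0) and far too loose near 1e-16.
// Scaling the tolerance by the operands' magnitude adapts to both.
bool nearly_equal_rel(double a, double b, double rel_tol) {
    double diff = std::fabs(a - b);
    double scale = std::fmax(std::fabs(a), std::fabs(b));
    return diff <= rel_tol * scale;
}
```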

...

Regarding the propagation of rounding errors, there exist specialized analyzers that can help you estimate it, because it is a tedious thing to do by hand.

https://www.di.ens.fr/~cousot/projects/DAEDALUS/synthetic_summary/CEA/Fluctuat/index.html

This was part of HW1 of CS24:
https://en.wikipedia.org/wiki/Kahan_summation_algorithm
In particular, simply summing n numbers in sequence has a worst-case error that grows proportional to n, and a root mean square error that grows as √n for random inputs (the roundoff errors form a random walk).[2] With compensated summation, the worst-case error bound is independent of n, so a large number of values can be summed with an error that only depends on the floating-point precision.[2]

cf:
https://en.wikipedia.org/wiki/Pairwise_summation
In numerical analysis, pairwise summation, also called cascade summation, is a technique to sum a sequence of finite-precision floating-point numbers that substantially reduces the accumulated round-off error compared to naively accumulating the sum in sequence.[1] Although there are other techniques such as Kahan summation that typically have even smaller round-off errors, pairwise summation is nearly as good (differing only by a logarithmic factor) while having much lower computational cost—it can be implemented so as to have nearly the same cost (and exactly the same number of arithmetic operations) as naive summation.

In particular, pairwise summation of a sequence of n numbers x_n works by recursively breaking the sequence into two halves, summing each half, and adding the two sums: a divide and conquer algorithm. Its worst-case roundoff errors grow asymptotically as at most O(ε log n), where ε is the machine precision (assuming a fixed condition number, as discussed below).[1] In comparison, the naive technique of accumulating the sum in sequence (adding each x_i one at a time for i = 1, ..., n) has roundoff errors that grow at worst as O(εn).[1] Kahan summation has a worst-case error of roughly O(ε), independent of n, but requires several times more arithmetic operations.[1] If the roundoff errors are random, and in particular have random signs, then they form a random walk and the error growth is reduced to an average of O(ε √(log n)) for pairwise summation.[2]
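The recursive structure described above is only a few lines of C++ (the base-case threshold here is mine; production implementations switch to a naive loop around blocks of ~128 elements to amortize the recursion overhead):

```cpp
#include <vector>
#include <cstddef>

// Pairwise (cascade) summation: recursively split, sum each half, add.
// Worst-case roundoff grows O(eps * log n) vs O(eps * n) for a naive loop.
double pairwise_sum(const std::vector<double>& xs,
                    std::size_t lo, std::size_t hi) {
    if (hi - lo <= 2) {            // tiny base case; real code uses ~128
        double s = 0.0;
        for (std::size_t i = lo; i < hi; ++i) s += xs[i];
        return s;
    }
    std::size_t mid = lo + (hi - lo) / 2;
    return pairwise_sum(xs, lo, mid) + pairwise_sum(xs, mid, hi);
}

double pairwise_sum(const std::vector<double>& xs) {
    return pairwise_sum(xs, 0, xs.size());
}
```

Note the shape: the same divide-and-conquer recursion the excerpt credits for the slow roundoff accumulation in FFTs.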

A very similar recursive structure of summation is found in many fast Fourier transform (FFT) algorithms, and is responsible for the same slow roundoff accumulation of those FFTs.[2][3]

https://eng.libretexts.org/Bookshelves/Electrical_Engineering/Book%3A_Fast_Fourier_Transforms_(Burrus)/10%3A_Implementing_FFTs_in_Practice/10.8%3A_Numerical_Accuracy_in_FFTs
However, these encouraging error-growth rates only apply if the trigonometric "twiddle" factors in the FFT algorithm are computed very accurately. Many FFT implementations, including FFTW and common manufacturer-optimized libraries, therefore use precomputed tables of twiddle factors calculated by means of standard library functions (which compute trigonometric constants to roughly machine precision). The other common method to compute twiddle factors is to use a trigonometric recurrence formula—this saves memory (and cache), but almost all recurrences have errors that grow as O(√n), O(n), or even O(n²), which lead to corresponding errors in the FFT.

...

There are, in fact, trigonometric recurrences with the same logarithmic error growth as the FFT, but these seem more difficult to implement efficiently; they require that a table of Θ(log n) values be stored and updated as the recurrence progresses. Instead, in order to gain at least some of the benefits of a trigonometric recurrence (reduced memory pressure at the expense of more arithmetic), FFTW includes several ways to compute a much smaller twiddle table, from which the desired entries can be computed accurately on the fly using a bounded number (usually < 3) of complex multiplications. For example, instead of a twiddle table with n entries ω_n^k, FFTW can use two tables with Θ(√n) entries each, so that ω_n^k is computed by multiplying an entry in one table (indexed with the low-order bits of k) by an entry in the other table (indexed with the high-order bits of k).

[ed.: Nicholas Higham's "Accuracy and Stability of Numerical Algorithms" seems like a good reference for this kind of analysis.]
nibble  pdf  papers  programming  systems  numerics  nitty-gritty  intricacy  approximation  accuracy  types  sci-comp  multi  q-n-a  stackex  hmm  oly-programming  accretion  formal-methods  yak-shaving  wiki  reference  algorithms  yoga  ground-up  divide-and-conquer  fourier  books  tidbits  chart  caltech  nostalgia 
7 weeks ago by nhaliday
