
A Formal Verification of Rust's Binary Search Implementation
Part of the reason for this is that it’s quite complicated to apply mathematical tools to something unmathematical like a functionally impure language (which, unfortunately, most programs tend to be written in). In mathematics, you don’t expect a variable to suddenly change its value, and it only gets more complicated when you have pointers to those dang things:

“Dealing with aliasing is one of the key challenges for the verification of imperative programs. For instance, aliases make it difficult to determine which abstractions are potentially affected by a heap update and to determine which locks need to be acquired to avoid data races.” 1

While there are whole logics focused on trying to tackle these problems, a master’s thesis wouldn’t be nearly enough time to model a formal Rust semantics on top of these, so I opted for a more straightforward solution: Simply make Rust a purely functional language!

Electrolysis: Simple Verification of Rust Programs via Functional Purification
If you know a bit about Rust, you may have noticed something about that quote in the previous section: There actually are no data races in (safe) Rust, precisely because there is no mutable aliasing. Either all references to some datum are immutable, or there is a single mutable reference. This means that mutability in Rust is much more localized than in most other imperative languages, and that it is sound to replace a destructive update like

p.x += 1
with a functional one – we know there’s no one else around observing p:

let p = Point { x: p.x + 1, ..p };
techtariat  plt  programming  formal-methods  rust  arrows  reduction  divide-and-conquer  correctness  project  state  functional  concurrency  direct-indirect  pls  examples  simplification-normalization  compilers 
august 2019 by nhaliday
Three best practices for building successful data pipelines - O'Reilly Media
Drawn from their experiences and my own, I’ve identified three key areas that are often overlooked in data pipelines, and those are making your analysis:
1. Reproducible
2. Consistent
3. Productionizable

...

Science that cannot be reproduced by an external third party is just not science — and this does apply to data science. One of the benefits of working in data science is the ability to apply the existing tools from software engineering. These tools let you isolate all the dependencies of your analyses and make them reproducible.

Dependencies fall into three categories:
1. Analysis code ...
2. Data sources ...
3. Algorithmic randomness ...

...

Establishing consistency in data
...

There are generally two ways of establishing the consistency of data sources. The first is by checking all code and data into a single revision control repository. The second method is to reserve source control for code and build a pipeline that explicitly depends on external data being in a stable, consistent format and location.
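[ed.: a minimal sketch of the second method — the analysis refuses to run unless the external inputs still match recorded content hashes; the paths and manifest file here are hypothetical:]

import hashlib
import json
from pathlib import Path

# Hypothetical manifest mapping each external input file to the SHA-256
# recorded when the analysis was last validated.
EXPECTED = json.loads(Path("data_manifest.json").read_text())

def sha256_of(path: Path) -> str:
    # Stream the file so large inputs need not fit in memory.
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def check_sources(data_dir: Path = Path("data/raw")) -> None:
    # Fail fast if any external input drifted from the recorded state.
    for name, expected in EXPECTED.items():
        actual = sha256_of(data_dir / name)
        if actual != expected:
            raise RuntimeError(f"{name}: expected {expected}, got {actual}")

if __name__ == "__main__":
    check_sources()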

Checking data into version control is generally considered verboten for production software engineers, but it has a place in data analysis. For one thing, it makes your analysis very portable by isolating all dependencies into source control. Here are some conditions under which it makes sense to have both code and data in source control:
Small data sets ...
Regular analytics ...
Fixed source ...

Productionizability: Developing a common ETL
...

1. Common data format ...
2. Isolating library dependencies ...

https://blog.koresoftware.com/blog/etl-principles
Rigorously enforce the idempotency constraint [ed.: a sketch of this and the next principle appears after this list]
For efficiency, seek to load data incrementally
Always ensure that you can efficiently process historic data
Partition ingested data at the destination
Rest data between tasks
Pool resources for efficiency
Store all metadata together in one place
Manage login details in one place
Specify configuration details once
Parameterize sub flows and dynamically run tasks where possible
Execute conditionally
Develop your own workflow framework and reuse workflow components
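
[ed.: a minimal sketch of the first two principles, assuming a generic SQL store — the table and column names are made up; re-running the same day's load replaces that partition instead of appending duplicates, and only one day's data is touched per run:]

import sqlite3
from datetime import date

def load_partition(conn: sqlite3.Connection, day: date, rows: list) -> None:
    # Idempotent, incremental load: one partition per transaction.
    with conn:  # either the whole partition loads or nothing does
        conn.execute("DELETE FROM events WHERE event_date = ?", (day.isoformat(),))
        conn.executemany(
            "INSERT INTO events (event_date, user_id, value) VALUES (?, ?, ?)",
            [(day.isoformat(), u, v) for (u, v) in rows],
        )

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (event_date TEXT, user_id INTEGER, value REAL)")
load_partition(conn, date(2019, 8, 1), [(1, 2.5), (2, 3.0)])
load_partition(conn, date(2019, 8, 1), [(1, 2.5), (2, 3.0)])  # rerun: no duplicates
assert conn.execute("SELECT COUNT(*) FROM events").fetchone()[0] == 2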

more focused on details of specific technologies:
https://medium.com/@rchang/a-beginners-guide-to-data-engineering-part-i-4227c5c457d7

https://www.cloudera.com/documentation/director/cloud/topics/cloud_de_best_practices.html
techtariat  org:com  best-practices  engineering  code-organizing  machine-learning  data-science  yak-shaving  nitty-gritty  workflow  config  vcs  replication  homo-hetero  multi  org:med  design  system-design  links  shipping  minimalism  volo-avolo  causation  random  invariance  structure  arrows  protocol-metadata  interface-compatibility 
august 2019 by nhaliday
Modules Matter Most | Existential Type
note comment from gasche (significant OCaml contributor) critiquing modules vs typeclasses: https://existentialtype.wordpress.com/2011/04/16/modules-matter-most/#comment-735
I also think you’re unfair to type classes. You’re right that they are not completely satisfying as a modularity tool, but your presentation makes them sound bad in all aspects, which is certainly not true. The limitation of only having one instance per type may be a strong one, but it allows for a level of implicitness that is just nice. There is a reason why, for example, monads are relatively nice to use in Haskell, while using monads represented as modules in SML/OCaml programs is a real pain.

It’s a fact that type-classes are widely adopted and used in Haskell circles, while modules/functors are only used for relatively coarse-grained modularity in the ML community. It should tell you something useful about those two features: type classes offer something that current modules miss (or maybe a trade-off between flexibility and implicitness that plays against modules for “modularity in the small”), and it’s dishonest and rude to explain the adoption difference by “people don’t know any better”.
nibble  org:bleg  techtariat  programming  pls  plt  ocaml-sml  functional  haskell  types  composition-decomposition  coupling-cohesion  engineering  structure  intricacy  arrows  matching  network-structure  degrees-of-freedom  linearity  nonlinearity  span-cover  direction  multi  poast  expert-experience  blowhards  static-dynamic  protocol-metadata  cmu 
july 2019 by nhaliday
Integrated vs type based shrinking - Hypothesis
The big difference is whether shrinking is integrated into generation.

In Haskell’s QuickCheck, shrinking is defined based on types: any value of a given type shrinks the same way, regardless of how it is generated. In Hypothesis, test.check, etc., shrinking is instead part of the generation, and the generator controls how the values it produces shrink (this works differently in Hypothesis and test.check, and probably differently again in EQC, but the user-visible result is largely the same).

This is not a trivial distinction. Integrating shrinking into generation has two large benefits:
- Shrinking composes nicely, and you can shrink anything you can generate regardless of whether there is a defined shrinker for the type produced.
- You can _guarantee that shrinking satisfies the same invariants as generation_.
The first is mostly important from a convenience point of view: Although there are some things it lets you do that you can’t do in the type-based approach, they’re mostly of secondary importance. It largely just saves you from the effort of having to write your own shrinkers.

But the second is really important, because the lack of it makes your test failures potentially extremely confusing.

...

[example: even_numbers = integers().map(lambda x: x * 2)]

...

In this example the problem was relatively obvious and so easy to work around, but as your invariants get more implicit and subtle it becomes really problematic: In Hypothesis it’s easy and convenient to generate quite complex data, and trying to recreate the invariants that are automatically satisfied with that in your tests and/or your custom shrinkers would quickly become a nightmare.
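
[ed.: the bracketed example spelled out with Hypothesis's real API — because shrinking reruns the same map, a failing case can only ever shrink to another even number:]

from hypothesis import given, strategies as st

# Shrinking acts on the underlying integers() draw and is then re-mapped,
# so every shrunk counterexample still satisfies the "even" invariant.
even_numbers = st.integers().map(lambda x: x * 2)

@given(even_numbers)
def test_stays_small(n):
    assert n < 10  # deliberately false; the reported minimal failure is n == 10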

I don’t think it’s an accident that the main systems to get this right are in dynamic languages. It’s certainly not essential - the original proposal that led to the implementation for test.check was for Haskell, and Jack is an alternative property based system for Haskell that does this - but you feel the pain much more quickly in dynamic languages because the typical workaround for this problem in Haskell is to define a newtype, which lets you turn off the default shrinking for your types and possibly define your own.

But that’s a workaround for a problem that shouldn’t be there in the first place, and using it will still result in your having to encode the invariants into your shrinkers, which is more work and more brittle than just having it work automatically.

So although (as far as I know) none of the currently popular property based testing systems for statically typed languages implement this behaviour correctly, they absolutely can and they absolutely should. It will improve users’ lives significantly.

https://hypothesis.works/articles/compositional-shrinking/
In my last article about shrinking, I discussed the problems with basing shrinking on the type of the values to be shrunk.

In writing it though I forgot that there was a halfway house which is also somewhat bad (but significantly less so) that you see in a couple of implementations.

This is when the shrinking is not type based, but still follows the classic shrinking API that takes a value and returns a lazy list of shrinks of that value. Examples of libraries that do this are theft and QuickTheories.

This works reasonably well and solves the major problems with type directed shrinking, but it’s still somewhat fragile and importantly does not compose nearly as well as the approaches that Hypothesis or test.check take.

Ideally, as well as not being based on the types of the values being generated, shrinking should not be based on the actual values generated at all.

This may seem counter-intuitive, but it actually works pretty well.

...

We took a strategy and composed it with a function mapping over the values that that strategy produced to get a new strategy.

Suppose the Hypothesis strategy implementation looked something like the following:
...
i.e. we can generate a value and we can shrink a value that we’ve previously generated. By default we don’t know how to generate values (subclasses have to implement that) and we can’t shrink anything, which subclasses are able to fix if they want or leave as is if they’re fine with that.

(This is in fact how a very early implementation of it looked)

This is essentially the approach taken by theft or QuickTheories, and the problem with it is that under this implementation the ‘map’ function we used above is impossible to define in a way that preserves shrinking: In order to shrink a generated value, you need some way to invert the function you’re composing with (which is in general impossible even if your language somehow exposed the facilities to do it, which it almost certainly doesn’t) so you could take the generated value, map it back to the value that produced it, shrink that and then compose with the mapping function.

...

The key idea for fixing this is as follows: In order to shrink outputs it almost always suffices to shrink inputs. Although in theory you can get functions where simpler input leads to more complicated output, in practice this seems to be rare enough that it’s OK to just shrug and accept more complicated test output in those cases.

Given that, the _way to shrink the output of a mapped strategy is to just shrink the value generated from the first strategy and feed it to the mapping function_.

Which means that you need an API that can support that sort of shrinking.
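
[ed.: a toy sketch of that kind of API, not Hypothesis's real internals — map shrinks by shrinking the upstream draw and re-applying the function, so no inverse of f is ever needed:]

import random

class Strategy:
    def generate(self):
        raise NotImplementedError
    def shrinks(self):
        return iter(())  # by default we don't know how to shrink
    def map(self, f):
        return MappedStrategy(self, f)

class Integers(Strategy):
    def generate(self):
        self.last = random.randint(0, 1000)
        return self.last
    def shrinks(self):
        yield from range(self.last)  # smaller candidate draws, toward 0

class MappedStrategy(Strategy):
    def __init__(self, base, f):
        self.base, self.f = base, f
    def generate(self):
        return self.f(self.base.generate())
    def shrinks(self):
        # Shrink the *input* and push it back through f: every shrunk
        # output keeps whatever invariant f imposes.
        for smaller in self.base.shrinks():
            yield self.f(smaller)

evens = Integers().map(lambda x: x * 2)
evens.generate()
assert all(v % 2 == 0 for v in evens.shrinks())  # shrinks stay even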

https://hypothesis.works/articles/types-and-properties/
This happens a lot: Frequently there are properties that only hold in some restricted domain, and so you want more specific tests for that domain to complement your other tests for the larger range of data.

When this happens you need tools to generate something more specific, and those requirements don’t map naturally to types.

[ed.: Some examples of how this idea can be useful:
Have a type but want to test different distributions on it for different purposes. Eg, comparing worst-case and average-case guarantees for benchmarking time/memory complexity. Comparing a slow and fast implementation on small input sizes, then running some sanity checks for the fast implementation on large input sizes beyond what the slow implementation can handle.]

...

In Haskell, traditionally we would fix this with a newtype declaration which wraps the type. We could define a newtype NonEmptyList and a newtype FiniteFloat and then say that we actually wanted a NonEmptyList[FiniteFloat] there.

...

But why should we bother? Especially if we’re only using these in one test, we’re not actually interested in these types at all, and it just adds a whole bunch of syntactic noise when you could just pass the data generators directly. Defining new types for the data you want to generate is purely a workaround for a limitation of the API.
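
[ed.: what "pass the data generators directly" looks like in Hypothesis — the constraints live in the strategy, with no NonEmptyList/FiniteFloat wrapper types; the test body is only there to show the generated data really meets them:]

from hypothesis import given, strategies as st

finite_floats = st.floats(allow_nan=False, allow_infinity=False)
nonempty_finite_lists = st.lists(finite_floats, min_size=1)

@given(nonempty_finite_lists)
def test_generated_data_meets_the_constraints(xs):
    assert len(xs) >= 1                                        # non-empty
    assert all(x == x and abs(x) != float("inf") for x in xs)  # finite, not NaN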

If you were working in a dependently typed language where you could already naturally express this in the type system it might be OK (I don’t have any direct experience of working in type systems that strong), but I’m sceptical of being able to make it work well - you’re unlikely to be able to automatically derive data generators in the general case, because the needs of data generation “go in the opposite direction” from types (a type is effectively a predicate which consumes a value, where a data generator is a function that produces a value, so in order to produce a generator for a type automatically you need to basically invert the predicate). I suspect most approaches here will leave you with a bunch of sharp edges, but I would be interested to see experiments in this direction.

https://www.reddit.com/r/haskell/comments/646k3d/ann_hedgehog_property_testing/dg1485c/
techtariat  rhetoric  rant  programming  libraries  pls  types  functional  haskell  python  random  checking  design  critique  multi  composition-decomposition  api  reddit  social  commentary  system-design  arrows  lifts-projections  DSL  static-dynamic 
july 2019 by nhaliday
Why is Google Translate so bad for Latin? A longish answer. : latin
hmm:
> All it does is correlate sequences of up to five consecutive words in texts that have been manually translated into two or more languages.
That sort of system ought to be perfect for a dead language, though. Dump all the Cicero, Livy, Lucretius, Vergil, and Oxford Latin Course into a database and we're good.

We're not exactly inundated with brand new Latin to translate.
--
> Dump all the Cicero, Livy, Lucretius, Vergil, and Oxford Latin Course into a database and we're good.
What makes you think that the Google folks haven't done so and used that to create the language models they use?
> That sort of system ought to be perfect for a dead language, though.
Perhaps. But it will be bad at translating novel English sentences to Latin.
foreign-lang  reddit  social  discussion  language  the-classics  literature  dataset  measurement  roots  traces  syntax  anglo  nlp  stackex  links  q-n-a  linguistics  lexical  deep-learning  sequential  hmm  project  arrows  generalization  state-of-art  apollonian-dionysian  machine-learning  google 
june 2019 by nhaliday
Which of Haskell and OCaml is more practical? For example, in which aspect will each play a key role? - Quora
- Tikhon Jelvis,

Haskell.

This is a question I'm particularly well-placed to answer because I've spent quite a bit of time with both Haskell and OCaml, seeing both in the real world (including working at Jane Street for a bit). I've also seen the languages in academic settings and know many people at startups using both languages. This gives me a good perspective on both languages, with a fairly similar amount of experience in the two (admittedly biased towards Haskell).

And so, based on my own experience rather than the languages' reputations, I can confidently say it's Haskell.

Parallelism and Concurrency

...

Libraries

...

Typeclasses vs Modules

...

In some sense, OCaml modules are better behaved and founded on a sounder theory than Haskell typeclasses, which have some serious drawbacks. However, the fact that typeclasses can be reliably inferred whereas modules have to be explicitly used all the time more than makes up for this. Moreover, extensions to the typeclass system enable much of the power provided by OCaml modules.

...

Of course, OCaml has some advantages of its own as well. It has a performance profile that's much easier to predict. The module system is awesome and often missed in Haskell. Polymorphic variants can be very useful for neatly representing certain situations, and don't have an obvious Haskell analog.

While both languages have a reasonable C FFI, OCaml's seems a bit simpler. It's hard for me to say this with any certainty because I've only used the OCaml FFI myself, but it was quite easy to use—a hard bar for Haskell's to clear. One really nice use of modules in OCaml is to pass around values directly from C as abstract types, which can help avoid extra marshalling/unmarshalling; that seemed very nice in OCaml.

However, overall, I still think Haskell is the more practical choice. Apart from the reasoning above, I simply have my own observations: my Haskell code tends to be clearer, simpler and shorter than my OCaml code. I'm also more productive in Haskell. Part of this is certainly a matter of having more Haskell experience, but the delta is limited especially as I'm working at my third OCaml company. (Of course, the first two were just internships.)

Both Haskell and OCaml are unequivocally superb options—miles ahead of any other languages I know. While I do prefer Haskell, I’d choose either one in a pinch.

--
I've looked at F# a bit, but it feels like it makes too many tradeoffs to be on .NET. You lose the module system, which is probably OCaml's best feature, in return for an unfortunate, nominally typed OOP layer.

I'm also not invested in .NET at all: if anything, I'd prefer to avoid it in favor of simplicity. I exclusively use Linux and, from the outside, Mono doesn't look as good as it could be. I'm also far more likely to interoperate with a C library than a .NET library.

If I had some additional reason to use .NET, I'd definitely go for F#, but right now I don't.

https://www.reddit.com/r/haskell/comments/3huexy/what_are_haskellers_critiques_of_f_and_ocaml/
https://www.reddit.com/r/haskell/comments/3huexy/what_are_haskellers_critiques_of_f_and_ocaml/cub5mmb/
Thinking about it now, it boils down to a single word: expressiveness. When I'm writing OCaml, I feel more constrained than when I'm writing Haskell. And that's important: unlike so many others, what first attracted me to Haskell was expressiveness, not safety. It's easier for me to write code that looks how I want it to look in Haskell. The upper bound on code quality is higher.

...

Perhaps it all boils down to OCaml and its community feeling more "worse is better" than Haskell, something I highly disfavor.

...

Laziness or, more strictly, non-strictness is big. A controversial start, perhaps, but I stand by it. Unlike some, I do not see non-strictness as a design mistake but as a leap in abstraction. Perhaps a leap before its time, but a leap nonetheless. Haskell lets me program without constantly keeping the code's order in my head. Sure, it's not perfect and sometimes performance issues jar the illusion, but they are the exception not the norm. Coming from imperative languages where order is omnipresent (I can't even imagine not thinking about execution order as I write an imperative program!) it's incredibly liberating, even accounting for the weird issues and jinks I'd never see in a strict language.

This is what I imagine life felt like with the first garbage collectors: they may have been slow and awkward, the abstraction might have leaked here and there, but, for all that, it was an incredible advance. You didn't have to constantly think about memory allocation any more. It took a lot of effort to get where we are now and garbage collectors still aren't perfect and don't fit everywhere, but it's hard to imagine the world without them. Non-strictness feels like it has the same potential, without anywhere near the work garbage collection saw put into it.

...

The other big thing that stands out are typeclasses. OCaml might catch up on this front with implicit modules or it might not (Scala implicits are, by many reports, awkward at best—ask Edward Kmett about it, not me) but, as it stands, not having them is a major shortcoming. Not having inference is a bigger deal than it seems: it makes all sorts of idioms we take for granted in Haskell awkward in OCaml which means that people simply don't use them. Haskell's typeclasses, for all their shortcomings (some of which I find rather annoying), are incredibly expressive.

In Haskell, it's trivial to create your own numeric type and operators work as expected. In OCaml, while you can write code that's polymorphic over numeric types, people simply don't. Why not? Because you'd have to explicitly convert your literals and because you'd have to explicitly open a module with your operators—good luck using multiple numeric types in a single block of code! This means that everyone uses the default types: (63/31-bit) ints and doubles. If that doesn't scream "worse is better", I don't know what does.

...

There's more. Haskell's effect management, brought up elsewhere in this thread, is a big boon. It makes changing things more comfortable and makes informal reasoning much easier. Haskell is the only language where I consistently leave code I visit better than I found it. Even if I hadn't worked on the project in years. My Haskell code has better longevity than my OCaml code, much less other languages.

http://blog.ezyang.com/2011/02/ocaml-gotchas/
One observation about purity and randomness: I think one of the things people frequently find annoying in Haskell is the fact that randomness involves mutation of state, and thus must be wrapped in a monad. This makes building probabilistic data structures a little clunkier, since you can no longer expose pure interfaces. OCaml is not pure, and as such you can query the random number generator whenever you want.

However, I think Haskell may get the last laugh in certain circumstances. In particular, if you are using a random number generator in order to generate random test cases for your code, you need to be able to reproduce a particular set of random tests. Usually, this is done by providing a seed which you can then feed back to the testing script, for deterministic behavior. But because OCaml’s random number generator manipulates global state, it’s very easy to accidentally break determinism by asking for a random number for something unrelated. You can work around it by manually bracketing the global state, but when the randomness state is handled explicitly, providing determinism is much more natural.
q-n-a  qra  programming  pls  engineering  nitty-gritty  pragmatic  functional  haskell  ocaml-sml  dotnet  types  arrows  cost-benefit  tradeoffs  concurrency  libraries  performance  expert-experience  composition-decomposition  comparison  critique  multi  reddit  social  discussion  techtariat  reflection  review  random  data-structures  numerics  rand-approx  sublinear  syntax  volo-avolo  causation  scala  jvm  ecosystem  metal-to-virtual 
june 2019 by nhaliday
Lindy effect - Wikipedia
The Lindy effect is a theory that the future life expectancy of some non-perishable things like a technology or an idea is proportional to their current age, so that every additional period of survival implies a longer remaining life expectancy.[1] Where the Lindy effect applies, mortality rate decreases with time. In contrast, living creatures and mechanical things follow a bathtub curve where, after "childhood", the mortality rate increases with time. Because life expectancy is probabilistically derived, a thing may become extinct before its "expected" survival. In other words, one needs to gauge both the age and "health" of the thing to determine continued survival.
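
[ed.: a worked special case, assuming the usual Pareto/power-law formalization of the effect — if the survival time T has tail \Pr[T > t] = (t_m / t)^{\alpha} for t \ge t_m with \alpha > 1, then

\mathbb{E}[T - t \mid T > t] = \int_0^{\infty} \Pr[T > t + s \mid T > t]\, ds = \int_0^{\infty} \Big(\tfrac{t}{t+s}\Big)^{\alpha}\, ds = \frac{t}{\alpha - 1},

so expected remaining lifetime grows in proportion to the age already attained, and the hazard rate \alpha / t falls with time, matching the description above.]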
wiki  reference  concept  metabuch  ideas  street-fighting  planning  comparison  time  distribution  flux-stasis  history  measure  correlation  arrows  branches  pro-rata  manifolds  aging  stylized-facts  age-generation  robust  technology  thinking  cost-benefit  conceptual-vocab  methodology  threat-modeling  efficiency  neurons  tools  track-record  ubiquity 
june 2019 by nhaliday
oop - Functional programming vs Object Oriented programming - Stack Overflow
When you anticipate a different kind of software evolution:
- Object-oriented languages are good when you have a fixed set of operations on things, and as your code evolves, you primarily add new things. This can be accomplished by adding new classes which implement existing methods, and the existing classes are left alone.
- Functional languages are good when you have a fixed set of things, and as your code evolves, you primarily add new operations on existing things. This can be accomplished by adding new functions which compute with existing data types, and the existing functions are left alone.

When evolution goes the wrong way, you have problems:
- Adding a new operation to an object-oriented program may require editing many class definitions to add a new method.
- Adding a new kind of thing to a functional program may require editing many function definitions to add a new case.

This problem has been well known for many years; in 1998, Phil Wadler dubbed it the "expression problem". Although some researchers think that the expression problem can be addressed with such language features as mixins, a widely accepted solution has yet to hit the mainstream.
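
[ed.: a tiny sketch of both directions of the expression problem, using plain functions over a fixed set of node classes to stand in for the functional style; the names are made up:]

class Lit:
    def __init__(self, n): self.n = n

class Add:
    def __init__(self, a, b): self.a, self.b = a, b

# Functional style: the set of node types is fixed, so a new *operation*
# is just one new function...
def evaluate(e):
    if isinstance(e, Lit): return e.n
    if isinstance(e, Add): return evaluate(e.a) + evaluate(e.b)

def pretty(e):
    if isinstance(e, Lit): return str(e.n)
    if isinstance(e, Add): return f"({pretty(e.a)} + {pretty(e.b)})"

# ...but a new node type (say Mul) means editing every such function.
# In the OO style the table is transposed: evaluate/pretty become methods,
# new node types are cheap, new operations touch every class.

assert evaluate(Add(Lit(1), Lit(2))) == 3
assert pretty(Add(Lit(1), Lit(2))) == "(1 + 2)"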

What are the typical problem definitions where functional programming is a better choice?

Functional languages excel at manipulating symbolic data in tree form. A favorite example is compilers, where source and intermediate languages change seldom (mostly the same things), but compiler writers are always adding new translations and code improvements or optimizations (new operations on things). Compilation and translation more generally are "killer apps" for functional languages.
q-n-a  stackex  programming  engineering  nitty-gritty  comparison  best-practices  cost-benefit  functional  data-structures  arrows  flux-stasis  atoms  compilers  examples  pls  plt  oop  types 
may 2019 by nhaliday
Fortifications and Democracy in the Ancient Greek World by Josiah Ober, Barry Weingast :: SSRN
- Josiah Ober, Barry Weingast

In the modern world, access-limiting fortification walls are not typically regarded as promoting democracy. But in Greek antiquity, increased investment in fortifications was correlated with the prevalence and stability of democracy. This paper sketches the background conditions of the Greek city-state ecology, analyzes a passage in Aristotle’s Politics, and assesses the choices of Hellenistic kings, Greek citizens, and urban elites, as modeled in a simple game. The paper explains how city walls promoted democracy and helps to explain several other puzzles: why Hellenistic kings taxed Greek cities at lower than expected rates; why elites in Greek cities supported democracy; and why elites were not more heavily taxed by democratic majorities. The relationship between walls, democracy, and taxes promoted continued economic growth into the late classical and Hellenistic period (4th-2nd centuries BCE), and ultimately contributed to the survival of Greek culture into the Roman era, and thus modernity. We conclude with a consideration of whether the walls-democracy relationship holds in modernity.

'Rulers Ruled by Women': An Economic Analysis of the Rise and Fall of Women's Rights in Ancient Sparta by Robert K. Fleck, F. Andrew Hanssen: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=788106
Throughout most of history, women as a class have possessed relatively few formal rights. The women of ancient Sparta were a striking exception. Although they could not vote, Spartan women reportedly owned 40 percent of Sparta's agricultural land and enjoyed other rights that were equally extraordinary. We offer a simple economic explanation for the Spartan anomaly. The defining moment for Sparta was its conquest of a neighboring land and people, which fundamentally changed the marginal products of Spartan men's and Spartan women's labor. To exploit the potential gains from a reallocation of labor - specifically, to provide the appropriate incentives and the proper human capital formation - men granted women property (and other) rights. Consistent with our explanation for the rise of women's rights, when Sparta lost the conquered land several centuries later, the rights for women disappeared. Two conclusions emerge that may help explain why women's rights have been so rare for most of history. First, in contrast to the rest of the world, the optimal (from the men's perspective) division of labor among Spartans involved women in work that was not easily monitored by men. Second, the rights held by Spartan women may have been part of an unstable equilibrium, which contained the seeds of its own destruction.
study  broad-econ  economics  polisci  political-econ  institutions  government  north-weingast-like  democracy  walls  correlation  polis  history  mediterranean  iron-age  the-classics  microfoundations  modernity  comparison  architecture  military  public-goodish  elite  civic  taxes  redistribution  canon  literature  big-peeps  conquest-empire  rent-seeking  defense  models  GT-101  incentives  urban  urban-rural  speculation  interdisciplinary  cliometrics  multi  civil-liberty  gender  gender-diff  equilibrium  cycles  branches  labor  interests  property-rights  unintended-consequences  explanation  explanans  analysis  econ-productivity  context  arrows  micro  natural-experiment 
november 2017 by nhaliday
Variance of product of multiple random variables - Cross Validated
For independent random variables X_i: Var[prod_i X_i] = prod_i (Var[X_i] + (E[X_i])^2) - prod_i (E[X_i])^2

two-variable case: Var[XY] = Var[X] Var[Y] + Var[X] (E[Y])^2 + (E[X])^2 Var[Y]
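
[ed.: a quick exact check of the two-variable identity on small discrete distributions (independence assumed, as in the question):]

from itertools import product

xs = [(0, 0.2), (1, 0.5), (3, 0.3)]   # (value, probability)
ys = [(1, 0.6), (4, 0.4)]

def moments(dist):
    e = sum(v * p for v, p in dist)
    return e, sum((v - e) ** 2 * p for v, p in dist)

ex, vx = moments(xs)
ey, vy = moments(ys)

# Var[XY] computed directly from the independent joint distribution...
exy = sum(x * y * px * py for (x, px), (y, py) in product(xs, ys))
vxy = sum((x * y - exy) ** 2 * px * py for (x, px), (y, py) in product(xs, ys))

# ...matches the closed form above.
assert abs(vxy - (vx * vy + vx * ey**2 + ex**2 * vy)) < 1e-12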
nibble  q-n-a  overflow  stats  probability  math  identity  moments  arrows  multiplicative  iidness  dependence-independence 
october 2017 by nhaliday
New Theory Cracks Open the Black Box of Deep Learning | Quanta Magazine
A new idea called the “information bottleneck” is helping to explain the puzzling success of today’s artificial-intelligence algorithms — and might also explain how human brains learn.

sounds like he's just talking about autoencoders?
news  org:mag  org:sci  popsci  announcement  research  deep-learning  machine-learning  acm  information-theory  bits  neuro  model-class  big-surf  frontier  nibble  hmm  signal-noise  deepgoog  expert  ideas  wild-ideas  summary  talks  video  israel  roots  physics  interdisciplinary  ai  intelligence  shannon  giants  arrows  preimage  lifts-projections  composition-decomposition  characterization  markov  gradient-descent  papers  liner-notes  experiment  hi-order-bits  generalization  expert-experience  explanans  org:inst  speedometer 
september 2017 by nhaliday
Is the economy illegible? | askblog
In the model of the economy as a GDP factory, the most fundamental equation is the production function, Y = f(K,L).

This says that total output (Y) is determined by the total amount of capital (K) and the total amount of labor (L).

Let me stipulate that the economy is legible to the extent that this model can be applied usefully to explain economic developments. I want to point out that the economy, while never as legible as economists might have thought, is rapidly becoming less legible.
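
[ed.: the standard concrete instance, which also connects to the next entry — the Cobb-Douglas form Y = A K^{\alpha} L^{1-\alpha}, for which (\partial Y / \partial L) \cdot (L / Y) = 1 - \alpha, so with competitive factor pricing the labor share is pinned at the constant 1 − α; whether that constancy actually holds in US data is what the next bookmark examines.]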
econotariat  cracker-econ  economics  macro  big-picture  empirical  legibility  let-me-see  metrics  measurement  econ-metrics  volo-avolo  securities  markets  amazon  business-models  business  tech  sv  corporation  inequality  compensation  polarization  econ-productivity  stagnation  monetary-fiscal  models  complex-systems  map-territory  thinking  nationalism-globalism  time-preference  cost-disease  education  healthcare  composition-decomposition  econometrics  methodology  lens  arrows  labor  capital  trends  intricacy  🎩  moments  winner-take-all  efficiency  input-output 
august 2017 by nhaliday
Controversial New Theory Suggests Life Wasn't a Fluke of Biology—It Was Physics | WIRED
First Support for a Physics Theory of Life: https://www.quantamagazine.org/first-support-for-a-physics-theory-of-life-20170726/
Take chemistry, add energy, get life. The first tests of Jeremy England’s provocative origin-of-life hypothesis are in, and they appear to show how order can arise from nothing.
news  org:mag  profile  popsci  bio  xenobio  deep-materialism  roots  eden  physics  interdisciplinary  applications  ideas  thermo  complex-systems  cybernetics  entropy-like  order-disorder  arrows  phys-energy  emergent  empirical  org:sci  org:inst  nibble  chemistry  fixed-point  wild-ideas  multi 
august 2017 by nhaliday
Is the U.S. Aggregate Production Function Cobb-Douglas? New Estimates of the Elasticity of Substitution∗
world-wide: http://www.socsci.uci.edu/~duffy/papers/jeg2.pdf
https://www.weforum.org/agenda/2016/01/is-the-us-labour-share-as-constant-as-we-thought
https://www.economicdynamics.org/meetpapers/2015/paper_844.pdf
We find that IPP capital entirely explains the observed decline of the US labor share, which otherwise is secularly constant over the past 65 years for structures and equipment capital. The labor share decline simply reflects the fact that the US economy is undergoing a transition toward a larger IPP sector.
https://ideas.repec.org/p/red/sed015/844.html
http://www.robertdkirkby.com/blog/2015/summary-of-piketty-i/
https://www.brookings.edu/bpea-articles/deciphering-the-fall-and-rise-in-the-net-capital-share/
The Fall of the Labor Share and the Rise of Superstar Firms: http://www.nber.org/papers/w23396
The Decline of the U.S. Labor Share: https://www.brookings.edu/wp-content/uploads/2016/07/2013b_elsby_labor_share.pdf
Table 2 has industry disaggregation
Estimating the U.S. labor share: https://www.bls.gov/opub/mlr/2017/article/estimating-the-us-labor-share.htm

Why Workers Are Losing to Capitalists: https://www.bloomberg.com/view/articles/2017-09-20/why-workers-are-losing-to-capitalists
Automation and offshoring may be conspiring to reduce labor's share of income.
pdf  study  economics  growth-econ  econometrics  usa  data  empirical  analysis  labor  capital  econ-productivity  manifolds  magnitude  multi  world  🎩  piketty  econotariat  compensation  inequality  winner-take-all  org:ngo  org:davos  flexibility  distribution  stylized-facts  regularizer  hmm  history  mostly-modern  property-rights  arrows  invariance  industrial-org  trends  wonkish  roots  synthesis  market-power  efficiency  variance-components  business  database  org:gov  article  model-class  models  automation  nationalism-globalism  trade  news  org:mag  org:biz  org:bv  noahpinion  explanation  summary  methodology  density  polarization  map-territory  input-output 
july 2017 by nhaliday
Main Page - Competitive Programming Algorithms: E-Maxx Algorithms in English
original russian version: http://e-maxx.ru/algo/

some notable stuff:
- O(N) factorization sieve [ed.: sketch after this list]
- discrete logarithm
- factorial N! (mod P) in O(P log N)
- flow algorithms
- enumerating submasks
- bridges, articulation points
- Ukkonen algorithm
- sqrt(N) trick, eg, for range mode query
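
[ed.: a minimal sketch of the first item, the O(N) factorization sieve — each composite is crossed out exactly once, by its smallest prime factor:]

def linear_sieve(n):
    spf = [0] * (n + 1)          # spf[x] = smallest prime factor of x
    primes = []
    for i in range(2, n + 1):
        if spf[i] == 0:          # i is prime
            spf[i] = i
            primes.append(i)
        for p in primes:
            if p > spf[i] or i * p > n:
                break
            spf[i * p] = p       # i*p is marked exactly once, by p
    return spf, primes

def factorize(x, spf):
    out = []
    while x > 1:
        out.append(spf[x])
        x //= spf[x]
    return out

spf, primes = linear_sieve(100)
assert factorize(84, spf) == [2, 2, 3, 7]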
explanation  programming  algorithms  russia  foreign-lang  oly  oly-programming  problem-solving  accretion  math.NT  graphs  graph-theory  optimization  data-structures  yoga  tidbits  multi  anglo  language  arrows  strings 
february 2017 by nhaliday
inequalities - Is the Jaccard distance a distance? - MathOverflow
Steinhaus Transform
the referenced survey: http://kenclarkson.org/nn_survey/p.pdf

It's known that this transformation produces a metric from a metric. Now if you take as the base metric D the symmetric difference between two sets, what you end up with is the Jaccard distance (which actually is known by many other names as well).
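
[ed.: checking that claim numerically — the Steinhaus transform (base point = the empty set) of the symmetric-difference metric coincides with the Jaccard distance:]

import random

def sym_diff(a, b):
    return len(a ^ b)

def steinhaus(d, x, y, base):
    denom = d(x, base) + d(y, base) + d(x, y)
    return 0.0 if denom == 0 else 2 * d(x, y) / denom

def jaccard(a, b):
    union = len(a | b)
    return 0.0 if union == 0 else len(a ^ b) / union

random.seed(0)
for _ in range(1000):
    a = {x for x in range(20) if random.random() < 0.5}
    b = {x for x in range(20) if random.random() < 0.5}
    assert abs(steinhaus(sym_diff, a, b, frozenset()) - jaccard(a, b)) < 1e-12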
q-n-a  overflow  nibble  math  acm  sublinear  metrics  metric-space  proofs  math.CO  tcstariat  arrows  reduction  measure  math.MG  similarity  multi  papers  survey  computational-geometry  cs  algorithms  pdf  positivity  msr  tidbits  intersection  curvature  convexity-curvature  intersection-connectedness  signum 
february 2017 by nhaliday
Structure theorem for finitely generated modules over a principal ideal domain - Wikipedia
- finitely generated modules over a PID are isomorphic to a direct sum of quotients by a decreasing sequence of proper ideals
- never really understood the proof of this in Ma5b
math  algebra  characterization  levers  math.AC  wiki  reference  nibble  proofs  additive  arrows 
february 2017 by nhaliday
Covering space - Wikipedia
A covering space of X is a topological space C together with a continuous surjective map p: C -> X such that for every x ∈ X, there exists an open neighborhood U of x, such that p^−1(U) (the inverse image of U under p) is a union of disjoint open sets in C, each of which is mapped homeomorphically onto U by p.
concept  math  topology  arrows  lifts-projections  wiki  reference  fiber  math.AT  nibble  preimage 
january 2017 by nhaliday
Performance Trends in AI | Otium
Deep learning has revolutionized the world of artificial intelligence. But how much does it improve performance? How have computers gotten better at different tasks over time, since the rise of deep learning?

In games, what the data seems to show is that exponential growth in data and computation power yields exponential improvements in raw performance. In other words, you get out what you put in. Deep learning matters, but only because it provides a way to turn Moore’s Law into corresponding performance improvements, for a wide class of problems. It’s not even clear it’s a discontinuous advance in performance over non-deep-learning systems.

In image recognition, deep learning clearly is a discontinuous advance over other algorithms. But the returns to scale and the improvements over time seem to be flattening out as we approach or surpass human accuracy.

In speech recognition, deep learning is again a discontinuous advance. We are still far away from human accuracy, and in this regime, accuracy seems to be improving linearly over time.

In machine translation, neural nets seem to have made progress over conventional techniques, but it’s not yet clear if that’s a real phenomenon, or what the trends are.

In natural language processing, trends are positive, but deep learning doesn’t generally seem to do better than trendline.

...

The learned agent performs much better than the hard-coded agent, but moves more jerkily and “randomly” and doesn’t know the law of reflection. Similarly, the reports of AlphaGo producing “unusual” Go moves are consistent with an agent that can do pattern-recognition over a broader space than humans can, but which doesn’t find the “laws” or “regularities” that humans do.

Perhaps, contrary to the stereotype that contrasts “mechanical” with “outside-the-box” thinking, reinforcement learners can “think outside the box” but can’t find the box?

http://slatestarcodex.com/2017/08/02/where-the-falling-einstein-meets-the-rising-mouse/
ratty  core-rats  summary  prediction  trends  analysis  spock  ai  deep-learning  state-of-art  🤖  deepgoog  games  nlp  computer-vision  nibble  reinforcement  model-class  faq  org:bleg  shift  chart  technology  language  audio  accuracy  speaking  foreign-lang  definite-planning  china  asia  microsoft  google  ideas  article  speedometer  whiggish-hegelian  yvain  ssc  smoothness  data  hsu  scitariat  genetics  iq  enhancement  genetic-load  neuro  neuro-nitgrit  brain-scan  time-series  multiplicative  iteration-recursion  additive  multi  arrows 
january 2017 by nhaliday
Shtetl-Optimized » Blog Archive » Why I Am Not An Integrated Information Theorist (or, The Unconscious Expander)
In my opinion, how to construct a theory that tells us which physical systems are conscious and which aren’t—giving answers that agree with “common sense” whenever the latter renders a verdict—is one of the deepest, most fascinating problems in all of science. Since I don’t know a standard name for the problem, I hereby call it the Pretty-Hard Problem of Consciousness. Unlike with the Hard Hard Problem, I don’t know of any philosophical reason why the Pretty-Hard Problem should be inherently unsolvable; but on the other hand, humans seem nowhere close to solving it (if we had solved it, then we could reduce the abortion, animal rights, and strong AI debates to “gentlemen, let us calculate!”).

Now, I regard IIT as a serious, honorable attempt to grapple with the Pretty-Hard Problem of Consciousness: something concrete enough to move the discussion forward. But I also regard IIT as a failed attempt on the problem. And I wish people would recognize its failure, learn from it, and move on.

In my view, IIT fails to solve the Pretty-Hard Problem because it unavoidably predicts vast amounts of consciousness in physical systems that no sane person would regard as particularly “conscious” at all: indeed, systems that do nothing but apply a low-density parity-check code, or other simple transformations of their input data. Moreover, IIT predicts not merely that these systems are “slightly” conscious (which would be fine), but that they can be unboundedly more conscious than humans are.

To justify that claim, I first need to define Φ. Strikingly, despite the large literature about Φ, I had a hard time finding a clear mathematical definition of it—one that not only listed formulas but fully defined the structures that the formulas were talking about. Complicating matters further, there are several competing definitions of Φ in the literature, including ΦDM (discrete memoryless), ΦE (empirical), and ΦAR (autoregressive), which apply in different contexts (e.g., some take time evolution into account and others don’t). Nevertheless, I think I can define Φ in a way that will make sense to theoretical computer scientists. And crucially, the broad point I want to make about Φ won’t depend much on the details of its formalization anyway.

We consider a discrete system in a state x=(x1,…,xn)∈Sn, where S is a finite alphabet (the simplest case is S={0,1}). We imagine that the system evolves via an “updating function” f:Sn→Sn. Then the question that interests us is whether the xi‘s can be partitioned into two sets A and B, of roughly comparable size, such that the updates to the variables in A don’t depend very much on the variables in B and vice versa. If such a partition exists, then we say that the computation of f does not involve “global integration of information,” which on Tononi’s theory is a defining aspect of consciousness.
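
[ed.: not Φ itself, but a toy brute-force version of that last question — does an update function on {0,1}^n split exactly into two halves that update independently? The example f is made up:]

from itertools import combinations, product

n = 4
def f(x):
    # two independent pairs: coordinates {0,1} and {2,3} never interact
    return (x[1], x[0], x[3], x[2] ^ x[3])

def splits_exactly(f, n):
    states = list(product((0, 1), repeat=n))
    for size in range(1, n // 2 + 1):
        for A in combinations(range(n), size):
            B = [i for i in range(n) if i not in A]
            ok = True
            for x in states:
                for y in states:
                    # if x and y agree on A, the update of A must agree too (same for B)
                    if all(x[i] == y[i] for i in A) and any(f(x)[i] != f(y)[i] for i in A):
                        ok = False
                    if all(x[i] == y[i] for i in B) and any(f(x)[i] != f(y)[i] for i in B):
                        ok = False
                if not ok:
                    break
            if ok:
                return set(A), set(B)
    return None  # no clean split: every cut "integrates information"

print(splits_exactly(f, n))    # ({0, 1}, {2, 3})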
aaronson  tcstariat  philosophy  dennett  interdisciplinary  critique  nibble  org:bleg  within-without  the-self  neuro  psychology  cog-psych  metrics  nitty-gritty  composition-decomposition  complex-systems  cybernetics  bits  information-theory  entropy-like  forms-instances  empirical  walls  arrows  math.DS  structure  causation  quantitative-qualitative  number  extrema  optimization  abstraction  explanation  summary  degrees-of-freedom  whole-partial-many  network-structure  systematic-ad-hoc  tcs  complexity  hardness  no-go  computation  measurement  intricacy  examples  counterexample  coding-theory  linear-algebra  fields  graphs  graph-theory  expanders  math  math.CO  properties  local-global  intuition  error  definition  coupling-cohesion 
january 2017 by nhaliday
soft question - Why does Fourier analysis of Boolean functions "work"? - Theoretical Computer Science Stack Exchange
Here is my point of view, which I learned from Guy Kindler, though someone more experienced can probably give a better answer: Consider the linear space of functions f: {0,1}^n -> R and consider a linear operator of the form σ_w (for w in {0,1}^n), that maps a function f(x) as above to the function f(x+w). In many of the questions of TCS, there is an underlying need to analyze the effects that such operators have on certain functions.

Now, the point is that the Fourier basis is the basis that diagonalizes all those operators at the same time, which makes the analysis of those operators much simpler. More generally, the Fourier basis diagonalizes the convolution operator, which also underlies many of those questions. Thus, Fourier analysis is likely to be effective whenever one needs to analyze those operators.
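
[ed.: a small numerical check of that claim for n = 3 — every character chi_S(x) = (-1)^{|x & S|} is a simultaneous eigenvector of every shift operator sigma_w, with eigenvalue (-1)^{|w & S|}:]

import numpy as np
from itertools import product

n = 3
points = range(2 ** n)                      # x in {0,1}^n encoded as bitmasks

def parity(m):
    return bin(m).count("1") % 2

def chi(S):
    # character chi_S(x) = (-1)^{|x & S|}, as a vector indexed by x
    return np.array([(-1) ** parity(x & S) for x in points], dtype=float)

def sigma(w):
    # shift operator (sigma_w f)(x) = f(x ^ w), as a 2^n x 2^n matrix
    M = np.zeros((2 ** n, 2 ** n))
    for x in points:
        M[x, x ^ w] = 1.0
    return M

for w, S in product(points, points):
    assert np.allclose(sigma(w) @ chi(S), ((-1) ** parity(w & S)) * chi(S))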
q-n-a  math  tcs  synthesis  boolean-analysis  fourier  👳  tidbits  motivation  intuition  linear-algebra  overflow  hi-order-bits  insight  curiosity  ground-up  arrows  nibble  s:*  elegance  guessing 
december 2016 by nhaliday
gt.geometric topology - Intuitive crutches for higher dimensional thinking - MathOverflow
Terry Tao:
I can't help you much with high-dimensional topology - it's not my field, and I've not picked up the various tricks topologists use to get a grip on the subject - but when dealing with the geometry of high-dimensional (or infinite-dimensional) vector spaces such as R^n, there are plenty of ways to conceptualise these spaces that do not require visualising more than three dimensions directly.

For instance, one can view a high-dimensional vector space as a state space for a system with many degrees of freedom. A megapixel image, for instance, is a point in a million-dimensional vector space; by varying the image, one can explore the space, and various subsets of this space correspond to various classes of images.

One can similarly interpret sound waves, a box of gases, an ecosystem, a voting population, a stream of digital data, trials of random variables, the results of a statistical survey, a probabilistic strategy in a two-player game, and many other concrete objects as states in a high-dimensional vector space, and various basic concepts such as convexity, distance, linearity, change of variables, orthogonality, or inner product can have very natural meanings in some of these models (though not in all).

It can take a bit of both theory and practice to merge one's intuition for these things with one's spatial intuition for vectors and vector spaces, but it can be done eventually (much as after one has enough exposure to measure theory, one can start merging one's intuition regarding cardinality, mass, length, volume, probability, cost, charge, and any number of other "real-life" measures).

For instance, the fact that most of the mass of a unit ball in high dimensions lurks near the boundary of the ball can be interpreted as a manifestation of the law of large numbers, using the interpretation of a high-dimensional vector space as the state space for a large number of trials of a random variable.

More generally, many facts about low-dimensional projections or slices of high-dimensional objects can be viewed from a probabilistic, statistical, or signal processing perspective.

Scott Aaronson:
Here are some of the crutches I've relied on. (Admittedly, my crutches are probably much more useful for theoretical computer science, combinatorics, and probability than they are for geometry, topology, or physics. On a related note, I personally have a much easier time thinking about R^n than about, say, R^4 or R^5!)

1. If you're trying to visualize some 4D phenomenon P, first think of a related 3D phenomenon P', and then imagine yourself as a 2D being who's trying to visualize P'. The advantage is that, unlike with the 4D vs. 3D case, you yourself can easily switch between the 3D and 2D perspectives, and can therefore get a sense of exactly what information is being lost when you drop a dimension. (You could call this the "Flatland trick," after the most famous literary work to rely on it.)
2. As someone else mentioned, discretize! Instead of thinking about R^n, think about the Boolean hypercube {0,1}^n, which is finite and usually easier to get intuition about. (When working on problems, I often find myself drawing {0,1}^4 on a sheet of paper by drawing two copies of {0,1}^3 and then connecting the corresponding vertices.)
3. Instead of thinking about a subset S⊆R^n, think about its characteristic function f:R^n→{0,1}. I don't know why that trivial perspective switch makes such a big difference, but it does ... maybe because it shifts your attention to the process of computing f, and makes you forget about the hopeless task of visualizing S!
4. One of the central facts about R^n is that, while it has "room" for only n orthogonal vectors, it has room for exp⁡(n) almost-orthogonal vectors. Internalize that one fact, and so many other properties of R^n (for example, that the n-sphere resembles a "ball with spikes sticking out," as someone mentioned before) will suddenly seem non-mysterious. In turn, one way to internalize the fact that R^n has so many almost-orthogonal vectors is to internalize Shannon's theorem that there exist good error-correcting codes.
5. To get a feel for some high-dimensional object, ask questions about the behavior of a process that takes place on that object. For example: if I drop a ball here, which local minimum will it settle into? How long does this random walk on {0,1}^n take to mix?

Gil Kalai:
This is a slightly different point, but Vitali Milman, who works in high-dimensional convexity, likes to draw high-dimensional convex bodies in a non-convex way. This is to convey the point that if you take the convex hull of a few points on the unit sphere of R^n, then for large n very little of the measure of the convex body is anywhere near the corners, so in a certain sense the body is a bit like a small sphere with long thin "spikes".
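
[ed.: two of those facts checked numerically — the mass of the unit ball concentrates near its boundary, and independent random directions are nearly orthogonal:]

import numpy as np

rng = np.random.default_rng(0)
n, samples = 200, 20000

# For a uniform point in the n-ball the radius has CDF r^n, so nearly all
# of the mass sits in a thin shell at the boundary.
radii = rng.random(samples) ** (1.0 / n)
print((radii > 0.99).mean())               # ~ 1 - 0.99**n, about 0.87 for n = 200

# Independent random unit vectors have inner products of size ~ 1/sqrt(n).
g = rng.standard_normal((samples, n))
dirs = g / np.linalg.norm(g, axis=1, keepdims=True)
u, v = dirs[: samples // 2], dirs[samples // 2 :]
print(np.abs((u * v).sum(axis=1)).mean(), 1 / np.sqrt(n))   # same order of magnitude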
q-n-a  intuition  math  visual-understanding  list  discussion  thurston  tidbits  aaronson  tcs  geometry  problem-solving  yoga  👳  big-list  metabuch  tcstariat  gowers  mathtariat  acm  overflow  soft-question  levers  dimensionality  hi-order-bits  insight  synthesis  thinking  models  cartoons  coding-theory  information-theory  probability  concentration-of-measure  magnitude  linear-algebra  boolean-analysis  analogy  arrows  lifts-projections  measure  markov  sampling  shannon  conceptual-vocab  nibble  degrees-of-freedom  worrydream  neurons  retrofit  oscillation  paradox  novelty  tricki  concrete  high-dimension  s:***  manifolds  direction  curvature  convexity-curvature  elegance  guessing 
december 2016 by nhaliday
Quarter-Turns | The n-Category Café
In other words, call an operator T a quarter-turn if ⟨Tx,x⟩=0 for all x. Then the real quarter-turns correspond to the skew symmetric matrices — but apart from the zero operator, there are no complex quarter turns at all.
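
[ed.: the real half of the statement is easy to check numerically — any skew-symmetric matrix gives ⟨Tx, x⟩ = 0 for every real x:]

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
T = A - A.T                          # skew-symmetric: T.T == -T

for _ in range(1000):
    x = rng.standard_normal(5)
    assert abs(x @ (T @ x)) < 1e-9   # <Tx, x> = 0 up to rounding

# Over C, <Tx, x> = 0 for all x forces T = 0, so there are no nonzero
# complex quarter-turns — as the post says.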
tidbits  math  linear-algebra  hmm  mathtariat  characterization  atoms  inner-product  arrows  org:bleg  nibble 
december 2016 by nhaliday