nhaliday + protocol-metadata   72

REST is the new SOAP | Hacker News
hn  commentary  techtariat  org:ngo  programming  engineering  web  client-server  networking  rant  rhetoric  contrarianism  idk  org:med  best-practices  working-stiff  api  models  protocol-metadata  internet  state  structure  chart  multi  q-n-a  discussion  expert-experience  track-record  reflection  cost-benefit  design  system-design  comparison  code-organizing  flux-stasis  interface-compatibility  trends  gotchas  stackex  state-of-art  distributed  concurrency  abstraction  concept  conceptual-vocab  python  ubiquity  list  top-n  duplication  synchrony  performance  caching 
22 days ago by nhaliday
The Definitive Guide To Website Authentication | Hacker News
hn  commentary  q-n-a  stackex  programming  identification-equivalence  security  web  client-server  crypto  checklists  best-practices  objektbuch  api  multi  cheatsheet  chart  system-design  nitty-gritty  yak-shaving  comparison  explanation  summary  jargon  state  networking  protocol-metadata  time 
4 weeks ago by nhaliday
Unix philosophy - Wikipedia
1. Make each program do one thing well. To do a new job, build afresh rather than complicate old programs by adding new "features".
2. Expect the output of every program to become the input to another, as yet unknown, program. Don't clutter output with extraneous information. Avoid stringently columnar or binary input formats. Don't insist on interactive input.
3. Design and build software, even operating systems, to be tried early, ideally within weeks. Don't hesitate to throw away the clumsy parts and rebuild them.
4. Use tools in preference to unskilled help to lighten a programming task, even if you have to detour to build the tools and expect to throw some of them out after you've finished using them.
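[ed.: rules 1 and 2 sketched as a toy Python filter. The function and its uniq-like behavior are my own illustration, not from the article — the point is one job, plain text in, plain text out.]

```python
def dedupe(lines):
    """Do one thing well: drop adjacent duplicate lines (like uniq)."""
    prev = object()  # sentinel that compares unequal to any real line
    for line in lines:
        if line != prev:
            yield line
        prev = line

# Plain line-oriented text in, plain text out, so the output stays usable
# as input to the next, as-yet-unknown program, e.g.:
#   sys.stdout.writelines(dedupe(sys.stdin))
assert list(dedupe(["a\n", "a\n", "b\n", "a\n"])) == ["a\n", "b\n", "a\n"]
```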
wiki  concept  philosophy  lens  ideas  design  system-design  programming  engineering  systems  unix  subculture  composition-decomposition  coupling-cohesion  metabuch  skeleton  hi-order-bits  summary  list  top-n  quotes  aphorism  minimalism  minimum-viable  best-practices  intricacy  parsimony  protocol-metadata 
august 2019 by nhaliday
Three best practices for building successful data pipelines - O'Reilly Media
Drawing on their experiences and my own, I’ve identified three key areas that are often overlooked in data pipelines, all of them about making your analysis:
1. Reproducible
2. Consistent
3. Productionizable


Science that cannot be reproduced by an external third party is just not science — and this does apply to data science. One of the benefits of working in data science is the ability to apply the existing tools from software engineering. These tools let you isolate all the dependencies of your analyses and make them reproducible.

Dependencies fall into three categories:
1. Analysis code ...
2. Data sources ...
3. Algorithmic randomness ...
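[ed.: the third dependency, algorithmic randomness, is the easiest to pin down in code — a hypothetical Python sketch (names are mine, not the article's):]

```python
import random

def sample_rows(rows, k, seed=42):
    """Treat randomness as a pinned dependency: the same seed always
    yields the same sample, so the analysis reproduces exactly."""
    rng = random.Random(seed)  # isolated RNG, doesn't touch global state
    return rng.sample(rows, k)

# Two runs with the same seed are identical, run-to-run and machine-to-machine.
assert sample_rows(list(range(1000)), 5) == sample_rows(list(range(1000)), 5)
```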


Establishing consistency in data

There are generally two ways of establishing the consistency of data sources. The first is by checking in all code and data into a single revision control repository. The second method is to reserve source control for code and build a pipeline that explicitly depends on external data being in a stable, consistent format and location.

Checking data into version control is generally considered verboten for production software engineers, but it has a place in data analysis. For one thing, it makes your analysis very portable by isolating all dependencies into source control. Here are some conditions under which it makes sense to have both code and data in source control:
Small data sets ...
Regular analytics ...
Fixed source ...

Productionizability: Developing a common ETL

1. Common data format ...
2. Isolating library dependencies ...

Rigorously enforce the idempotency constraint
For efficiency, seek to load data incrementally
Always ensure that you can efficiently process historic data
Partition ingested data at the destination
Rest data between tasks
Pool resources for efficiency
Store all metadata together in one place
Manage login details in one place
Specify configuration details once
Parameterize sub flows and dynamically run tasks where possible
Execute conditionally
Develop your own workflow framework and reuse workflow components
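[ed.: a toy illustration of the first and fourth rules above (idempotency plus partitioning at the destination); the store and partition names are hypothetical:]

```python
def load_partition(store, partition_key, rows):
    """Idempotent load: re-running a task overwrites its own partition
    rather than appending, so a retry never duplicates data."""
    store[partition_key] = list(rows)  # replace, don't extend

store = {}
load_partition(store, "2019-08-01", [1, 2, 3])
load_partition(store, "2019-08-01", [1, 2, 3])  # retry is a no-op
assert store["2019-08-01"] == [1, 2, 3]
```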

more focused on details of specific technologies:

techtariat  org:com  best-practices  engineering  code-organizing  machine-learning  data-science  yak-shaving  nitty-gritty  workflow  config  vcs  replication  homo-hetero  multi  org:med  design  system-design  links  shipping  minimalism  volo-avolo  causation  random  invariance  structure  arrows  protocol-metadata  interface-compatibility 
august 2019 by nhaliday
Modules Matter Most | Existential Type
note comment from gasche (significant OCaml contributor) critiquing modules vs typeclasses: https://existentialtype.wordpress.com/2011/04/16/modules-matter-most/#comment-735
I also think you’re unfair to type classes. You’re right that they are not completely satisfying as a modularity tool, but your presentation makes them sound bad in all aspects, which is certainly not true. The limitation of only having one instance per type may be a strong one, but it allows for a level of implicitness that is just nice. There is a reason why, for example, monads are relatively nice to use in Haskell, while using monads represented as modules in SML/OCaml programs is a real pain.

It’s a fact that type-classes are widely adopted and used in the Haskell circles, while modules/functors are only used for relatively coarse-grained modularity in the ML community. It should tell you something useful about those two features: they’re something that current modules miss (or maybe a trade-off between flexibility and implicitness that plays against modules for “modularity in the small”), and it’s dishonest and rude to explain the adoption difference by “people don’t know any better”.
nibble  org:bleg  techtariat  programming  pls  plt  ocaml-sml  functional  haskell  types  composition-decomposition  coupling-cohesion  engineering  structure  intricacy  arrows  matching  network-structure  degrees-of-freedom  linearity  nonlinearity  span-cover  direction  multi  poast  expert-experience  blowhards  static-dynamic  protocol-metadata  cmu 
july 2019 by nhaliday
Errors in Math Functions (The GNU C Library)
For C99, there are no specific requirements. But most implementations try to support Annex F: IEC 60559 floating-point arithmetic as well as possible. It says:

An implementation that defines __STDC_IEC_559__ shall conform to the specifications in this annex.


The sqrt functions in <math.h> provide the IEC 60559 square root operation.

IEC 60559 (equivalent to IEEE 754) says about basic operations like sqrt:

Except for binary <-> decimal conversion, each of the operations shall be performed as if it first produced an intermediate result correct to infinite precision and with unbounded range, and then coerced this intermediate result to fit in the destination's format.

The final step consists of rounding according to several rounding modes but the result must always be the closest representable value in the target precision.

[ed.: The list of other such correctly rounded functions is included in the IEEE-754 standard (which I've put w/ the C1x and C++2x standard drafts) under section 9.2, and it mainly consists of stuff that can be expressed in terms of exponentials (exp, log, trig functions, powers) along w/ sqrt/hypot functions.

Fun fact: this question was asked by Yeputons who has a codeforces profile.]
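[ed.: the correct-rounding requirement for sqrt can be spot-checked from Python, using Decimal as a high-precision reference root; this checker is my own sketch, not from the answer (needs Python 3.9+ for math.nextafter):]

```python
from decimal import Decimal, getcontext
import math

getcontext().prec = 60  # far more precision than a 53-bit double

def is_correctly_rounded_sqrt(x):
    """IEC 60559 sqrt: the result must be the representable double
    closest to the infinitely precise root."""
    exact = Decimal(x).sqrt()           # high-precision reference root
    r = math.sqrt(x)
    err = abs(Decimal(r) - exact)
    # No adjacent double may be closer to the exact root than r is.
    lo, hi = math.nextafter(r, 0.0), math.nextafter(r, math.inf)
    return err <= abs(Decimal(lo) - exact) and err <= abs(Decimal(hi) - exact)

assert all(is_correctly_rounded_sqrt(x) for x in (2, 3, 5, 7, 10))
```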
oss  libraries  systems  c(pp)  numerics  documentation  objektbuch  list  linux  unix  multi  q-n-a  stackex  programming  nitty-gritty  sci-comp  accuracy  types  approximation  IEEE  protocol-metadata  gnu 
july 2019 by nhaliday
An Eye Tracking Study on camelCase and under_score Identifier Styles - IEEE Conference Publication
One main difference is that subjects were trained mainly in the underscore style and were all programmers. While results indicate no difference in accuracy between the two styles, subjects recognize identifiers in the underscore style more quickly.

ToCamelCaseorUnderscore: https://citeseerx.ist.psu.edu/viewdoc/summary?doi=
An empirical study of 135 programmers and non-programmers was conducted to better understand the impact of identifier style on code readability. The experiment builds on past work of others who study how readers of natural language perform such tasks. Results indicate that camel casing leads to higher accuracy among all subjects regardless of training, and those trained in camel casing are able to recognize identifiers in the camel case style faster than identifiers in the underscore style.

A 2009 study comparing snake case to camel case found that camel case identifiers could be recognised with higher accuracy among both programmers and non-programmers, and that programmers already trained in camel case were able to recognise those identifiers faster than underscored snake-case identifiers.[35]

A 2010 follow-up study, under the same conditions but using an improved measurement method with use of eye-tracking equipment, indicates: "While results indicate no difference in accuracy between the two styles, subjects recognize identifiers in the underscore style more quickly."[36]
study  psychology  cog-psych  hci  programming  best-practices  stylized-facts  null-result  multi  wiki  reference  concept  empirical  evidence-based  efficiency  accuracy  time  code-organizing  grokkability  protocol-metadata  form-design  grokkability-clarity 
july 2019 by nhaliday
The Law of Leaky Abstractions – Joel on Software
[TCP/IP example]

All non-trivial abstractions, to some degree, are leaky.


- Something as simple as iterating over a large two-dimensional array can have radically different performance if you do it horizontally rather than vertically, depending on the “grain of the wood” — one direction may result in vastly more page faults than the other direction, and page faults are slow. Even assembly programmers are supposed to be allowed to pretend that they have a big flat address space, but virtual memory means it’s really just an abstraction, which leaks when there’s a page fault and certain memory fetches take way more nanoseconds than other memory fetches.
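[ed.: the traversal-order point, sketched in pure Python. With plain lists the effect is muted (the real cost shows up with large contiguous arrays and page faults), so this only illustrates the two access patterns, not the timing difference:]

```python
def sum_row_major(grid):
    # Follows "the grain of the wood": the inner loop walks one row,
    # which is stored contiguously.
    total = 0
    for row in grid:
        for x in row:
            total += x
    return total

def sum_col_major(grid):
    # Same result, but each inner-loop step touches a different row,
    # so locality (and, for big arrays, paging behavior) is worse.
    total = 0
    for j in range(len(grid[0])):
        for row in grid:
            total += row[j]
    return total

grid = [[1] * 1000 for _ in range(1000)]
assert sum_row_major(grid) == sum_col_major(grid) == 1_000_000
```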

- The SQL language is meant to abstract away the procedural steps that are needed to query a database, instead allowing you to define merely what you want and let the database figure out the procedural steps to query it. But in some cases, certain SQL queries are thousands of times slower than other logically equivalent queries. A famous example of this is that some SQL servers are dramatically faster if you specify “where a=b and b=c and a=c” than if you only specify “where a=b and b=c” even though the result set is the same. You’re not supposed to have to care about the procedure, only the specification. But sometimes the abstraction leaks and causes horrible performance and you have to break out the query plan analyzer and study what it did wrong, and figure out how to make your query run faster.


- C++ string classes are supposed to let you pretend that strings are first-class data. They try to abstract away the fact that strings are hard and let you act as if they were as easy as integers. Almost all C++ string classes overload the + operator so you can write s + “bar” to concatenate. But you know what? No matter how hard they try, there is no C++ string class on Earth that will let you type “foo” + “bar”, because string literals in C++ are always char*’s, never strings. The abstraction has sprung a leak that the language doesn’t let you plug. (Amusingly, the history of the evolution of C++ over time can be described as a history of trying to plug the leaks in the string abstraction. Why they couldn’t just add a native string class to the language itself eludes me at the moment.)

- And you can’t drive as fast when it’s raining, even though your car has windshield wipers and headlights and a roof and a heater, all of which protect you from caring about the fact that it’s raining (they abstract away the weather), but lo, you have to worry about hydroplaning (or aquaplaning in England) and sometimes the rain is so strong you can’t see very far ahead so you go slower in the rain, because the weather can never be completely abstracted away, because of the law of leaky abstractions.

One reason the law of leaky abstractions is problematic is that it means that abstractions do not really simplify our lives as much as they were meant to. When I’m training someone to be a C++ programmer, it would be nice if I never had to teach them about char*’s and pointer arithmetic. It would be nice if I could go straight to STL strings. But one day they’ll write the code “foo” + “bar”, and truly bizarre things will happen, and then I’ll have to stop and teach them all about char*’s anyway.


The law of leaky abstractions means that whenever somebody comes up with a wizzy new code-generation tool that is supposed to make us all ever-so-efficient, you hear a lot of people saying “learn how to do it manually first, then use the wizzy tool to save time.” Code generation tools which pretend to abstract out something, like all abstractions, leak, and the only way to deal with the leaks competently is to learn about how the abstractions work and what they are abstracting. So the abstractions save us time working, but they don’t save us time learning.

People think a lot about abstractions and how to design them well. Here’s one feature I’ve recently been noticing about well-designed abstractions: they should have simple, flexible and well-integrated escape hatches.
techtariat  org:com  working-stiff  essay  programming  cs  software  abstraction  worrydream  thinking  intricacy  degrees-of-freedom  networking  examples  traces  no-go  volo-avolo  tradeoffs  c(pp)  pls  strings  dbs  transportation  driving  analogy  aphorism  learning  paradox  systems  elegance  nitty-gritty  concrete  cracker-prog  metal-to-virtual  protocol-metadata  design  system-design  multi  ratty  core-rats  integration-extension  composition-decomposition  flexibility  parsimony  interface-compatibility 
july 2019 by nhaliday
ellipsis - Why is the subject omitted in sentences like "Thought you'd never ask"? - English Language & Usage Stack Exchange
This is due to a phenomenon that occurs in intimate conversational spoken English called "Conversational Deletion". It was discussed and exemplified quite thoroughly in a 1974 PhD dissertation in linguistics at the University of Michigan that I had the honor of directing.

Thrasher, Randolph H. Jr. 1974. Shouldn't Ignore These Strings: A Study of Conversational Deletion, Ph.D. Dissertation, Linguistics, University of Michigan, Ann Arbor


"The phenomenon can be viewed as erosion of the beginning of sentences, deleting (some, but not all) articles, dummies, auxiliaries, possessives, conditional if, and [most relevantly for this discussion -jl] subject pronouns. But it only erodes up to a point, and only in some cases.

"Whatever is exposed (in sentence initial position) can be swept away. If erosion of the first element exposes another vulnerable element, this too may be eroded. The process continues until a hard (non-vulnerable) element is encountered." [ibidem p.9]

Dad calls this and some similar omissions "Kiplinger style": https://en.wikipedia.org/wiki/Kiplinger
q-n-a  stackex  anglo  language  writing  speaking  linguistics  thesis  trivia  cocktail  parsimony  compression  multi  wiki  organization  technical-writing  protocol-metadata  simplification-normalization 
march 2019 by nhaliday
Roman naming conventions - Wikipedia
The distinguishing feature of Roman nomenclature was the use of both personal names and regular surnames. Throughout Europe and the Mediterranean, other ancient civilizations distinguished individuals through the use of single personal names, usually dithematic in nature. Consisting of two distinct elements, or "themes", these names allowed for hundreds or even thousands of possible combinations. But a markedly different system of nomenclature arose in Italy, where the personal name was joined by a hereditary surname. Over time, this binomial system expanded to include additional names and designations.[1][2]

In ancient Rome, a gens (/ˈɡɛns/ or /ˈdʒɛnz/), plural gentes, was a family consisting of all those individuals who shared the same nomen and claimed descent from a common ancestor. A branch of a gens was called a stirps (plural stirpes). The gens was an important social structure at Rome and throughout Italy during the period of the Roman Republic. Much of an individual's social standing depended on the gens to which he belonged. Certain gentes were considered patrician, others plebeian, while some had both patrician and plebeian branches. The importance of membership in a gens declined considerably in imperial times.[1][2]


The word gens is sometimes translated as "race" or "nation", meaning a people descended from a common ancestor (rather than sharing a common physical trait). It can also be translated as "clan" or "tribe", although the word tribus has a separate and distinct meaning in Roman culture. A gens could be as small as a single family, or could include hundreds of individuals. According to tradition, in 479 BC the gens Fabia alone were able to field a militia consisting of three hundred and six men of fighting age. The concept of the gens was not uniquely Roman, but was shared with communities throughout Italy, including those who spoke Italic languages such as Latin, Oscan, and Umbrian as well as the Etruscans. All of these peoples were eventually absorbed into the sphere of Roman culture.[1][2][3][4]


Persons could be adopted into a gens and acquire its nomen. A libertus, or "freedman", usually assumed the nomen (and sometimes also the praenomen) of the person who had manumitted him, and a naturalized citizen usually took the name of the patron who granted his citizenship. Freedmen and newly enfranchised citizens were not technically part of the gentes whose names they shared, but within a few generations it often became impossible to distinguish their descendants from the original members. In practice this meant that a gens could acquire new members and even new branches, either by design or by accident.[1][2][7]

Ancient Greek personal names: https://en.wikipedia.org/wiki/Ancient_Greek_personal_names
Ancient Greeks usually had one name, but another element was often added in semi-official contexts or to aid identification: a father’s name (patronym) in the genitive case, or in some regions as an adjectival formulation. A third element might be added, indicating the individual’s membership in a particular kinship or other grouping, or city of origin (when the person in question was away from that city). Thus the orator Demosthenes, while proposing decrees in the Athenian assembly, was known as "Demosthenes, son of Demosthenes of Paiania"; Paiania was the deme or regional sub-unit of Attica to which he belonged by birth. If Americans used that system, Abraham Lincoln would have been called "Abraham, son of Thomas of Kentucky" (where he was born). In some rare occasions, if a person was illegitimate or fathered by a non-citizen, they might use their mother's name (metronym) instead of their father's. Ten days after a birth, relatives on both sides were invited to a sacrifice and feast called dekátē (δεκάτη), 'tenth day'; on this occasion the father formally named the child.[3]


In many contexts, etiquette required that respectable women be spoken of as the wife or daughter of X rather than by their own names.[6] On gravestones or dedications, however, they had to be identified by name. Here, the patronymic formula "son of X" used for men might be replaced by "wife of X", or supplemented as "daughter of X, wife of Y".

Many women bore forms of standard masculine names, with a feminine ending substituted for the masculine. Many standard names related to specific masculine achievements had a common feminine equivalent; the counterpart of Nikomachos, "victorious in battle", would be Nikomachē. The taste mentioned above for giving family members related names was one motive for the creation of such feminine forms. There were also feminine names with no masculine equivalent, such as Glykera "sweet one"; Hedistē "most delightful".
wiki  history  iron-age  mediterranean  the-classics  conquest-empire  culture  language  foreign-lang  social-norms  kinship  class  legacy  democracy  status  multi  gender  syntax  protocol-metadata 
august 2018 by nhaliday
The Constitutional Economics of Autocratic Succession on JSTOR
Abstract. The paper extends and empirically tests Gordon Tullock’s public choice theory of the nature of autocracy. A simple model of the relationship between constitutional rules governing succession in autocratic regimes and the occurrence of coups against autocrats is sketched. The model is applied to a case study of coups against monarchs in Denmark in the period ca. 935–1849. A clear connection is found between the specific constitutional rules governing succession and the frequency of coups. Specifically, the introduction of automatic hereditary succession in an autocracy provides stability and limits the number of coups conducted by contenders.

Table 2. General constitutional rules of succession, Denmark ca. 935–1849

To see this the data may be divided into three categories of constitutional rules of succession: One of open succession (for the periods 935–1165 and 1326–40), one of appointed succession combined with election (for the periods 1165–1326 and 1340–1536), and one of more or less formalized hereditary succession (1536–1849). On the basis of this categorization the data have been summarized in Table 3.

validity of empirics is a little sketchy

The graphic novel it is based on is insightful and illustrates Tullock's game-theoretic, asymmetric-information views on autocracy.

Conclusions from Gordon Tullock's book Autocracy, p. 211-215.: https://astro.temple.edu/~bstavis/courses/tulluck.htm
study  polisci  political-econ  economics  cracker-econ  big-peeps  GT-101  info-econ  authoritarianism  antidemos  government  micro  leviathan  elite  power  institutions  garett-jones  multi  econotariat  twitter  social  commentary  backup  art  film  comics  fiction  competition  europe  nordic  empirical  evidence-based  incentives  legacy  peace-violence  order-disorder  🎩  organizing  info-dynamics  history  medieval  law  axioms  stylized-facts  early-modern  data  longitudinal  flux-stasis  shift  revolution  correlation  org:junk  org:edu  summary  military  war  top-n  hi-order-bits  feudal  democracy  sulla  leadership  nascent-state  protocol-metadata 
october 2017 by nhaliday
Two theories of home heat control - ScienceDirect
People routinely develop their own theories to explain the world around them. These theories can be useful even when they contradict conventional technical wisdom. Based on in-depth interviews about home heating and thermostat setting behavior, the present study presents two theories people use to understand and adjust their thermostats. The two theories are here called the feedback theory and the valve theory. The valve theory is inconsistent with engineering knowledge, but is estimated to be held by 25% to 50% of Americans. Predictions of each of the theories are compared with the operations normally performed in home heat control. This comparison suggests that the valve theory may be highly functional in normal day-to-day use. Further data is needed on the ways this theory guides behavior in natural environments.
study  hci  ux  hardware  embodied  engineering  dirty-hands  models  thinking  trivia  cocktail  map-territory  realness  neurons  psychology  cog-psych  social-psych  error  usa  poll  descriptive  temperature  protocol-metadata  form-design 
september 2017 by nhaliday
What is the best way to parse command-line arguments with Python? - Quora
- Anders Kaseorg

Use the standard optparse library.

It’s important to uphold your users’ expectation that your utility will parse arguments in the same way as every other UNIX utility. If you roll your own parsing code, you’ll almost certainly break that expectation in obvious or subtle ways.

Although the documentation claims that optparse has been deprecated in favor of argparse, which supports more features like optional option arguments and configurable prefix characters, I can’t recommend argparse until it’s been fixed to parse required option arguments in the standard UNIX way. Currently, argparse uses an unexpected heuristic which may lead to subtle bugs in other scripts that call your program.

consider also click (which uses the optparse behavior)
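[ed.: minimal optparse usage per the recommendation above; the specific options shown are hypothetical. Note the standard UNIX behavior it preserves: an option consumes the next argument as its value, and "--" terminates option parsing.]

```python
from optparse import OptionParser

parser = OptionParser(usage="usage: %prog [options] FILE")
parser.add_option("-o", "--output", dest="output", default="-",
                  help="write result to OUTPUT (default: stdout)")
parser.add_option("-v", "--verbose", action="store_true", default=False,
                  help="print progress messages")

opts, args = parser.parse_args(["-o", "out.txt", "input.txt"])
assert opts.output == "out.txt" and args == ["input.txt"]

# "--" ends option processing, so option-like positionals pass through:
opts2, args2 = parser.parse_args(["-v", "--", "-literal-arg"])
assert opts2.verbose and args2 == ["-literal-arg"]
```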
q-n-a  qra  oly  best-practices  programming  terminal  unix  python  libraries  gotchas  howto  pls  yak-shaving  integration-extension  protocol-metadata 
august 2017 by nhaliday
Broadcasting — NumPy v1.13 Manual
When operating on two arrays, NumPy compares their shapes element-wise. It starts with the trailing dimensions, and works its way forward. Two dimensions are compatible when

they are equal, or
one of them is 1
If these conditions are not met, a ValueError: frames are not aligned exception is thrown, indicating that the arrays have incompatible shapes. The size of the resulting array is the maximum size along each dimension of the input arrays.

Arrays do not need to have the same number of dimensions. For example, if you have a 256x256x3 array of RGB values, and you want to scale each color in the image by a different value, you can multiply the image by a one-dimensional array with 3 values.
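[ed.: the rule itself is easy to restate in pure Python; this is my own sketch of the shape computation, not NumPy's implementation:]

```python
def broadcast_shape(a, b):
    """NumPy's rule: right-align the two shapes (padding the shorter
    with 1s); dimensions are compatible when equal or when one is 1."""
    n = max(len(a), len(b))
    a = (1,) * (n - len(a)) + tuple(a)
    b = (1,) * (n - len(b)) + tuple(b)
    out = []
    for x, y in zip(a, b):
        if x == y or x == 1 or y == 1:
            out.append(max(x, y))  # result takes the max along each axis
        else:
            raise ValueError(f"shapes {a} and {b} are not broadcastable")
    return tuple(out)

# The RGB example from the manual: scale a 256x256x3 image per channel.
assert broadcast_shape((256, 256, 3), (3,)) == (256, 256, 3)
assert broadcast_shape((8, 1, 5), (7, 1)) == (8, 7, 5)
```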
python  libraries  programming  howto  numerics  pls  linear-algebra  sci-comp  protocol-metadata  frameworks 
august 2017 by nhaliday
Bekker numbering - Wikipedia
Bekker numbering or Bekker pagination is the standard form of citation to the works of Aristotle. It is based on the page numbers used in the Prussian Academy of Sciences edition of the complete works of Aristotle and takes its name from the editor of that edition, the classical philologist August Immanuel Bekker (1785-1871); because the Academy was located in Berlin, the system is occasionally referred to by the alternative name Berlin numbering or Berlin pagination.[1]

Bekker numbers take the format of up to four digits, a letter for column 'a' or 'b', then the line number. For example, the beginning of Aristotle's Nicomachean Ethics is 1094a1, which corresponds to page 1094 of Bekker's edition of the Greek text of Aristotle's works, first column, line 1.[2]
history  iron-age  mediterranean  the-classics  literature  jargon  early-modern  publishing  canon  wiki  reference  protocol-metadata 
july 2017 by nhaliday
Merkle tree - Wikipedia
In cryptography and computer science, a hash tree or Merkle tree is a tree in which every non-leaf node is labelled with the hash of the labels or values (in case of leaves) of its child nodes.
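[ed.: a minimal Python sketch of the definition; duplicating the last node on odd levels follows Bitcoin's convention, and other schemes handle odd levels differently.]

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Hash tree: leaves are hashed, then each parent is the hash of
    its children's concatenated hashes, up to a single root."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:  # odd level: duplicate the last node
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

root = merkle_root([b"a", b"b", b"c", b"d"])
# Changing any leaf changes the root:
assert root != merkle_root([b"a", b"b", b"c", b"e"])
```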
concept  cs  data-structures  bitcoin  cryptocurrency  blockchain  atoms  wiki  reference  nibble  hashing  ideas  crypto  rigorous-crypto  protocol-metadata 
june 2017 by nhaliday
I am fascinated by Tim May's crypto-anarchy. Unlike the communities
traditionally associated with the word "anarchy", in a crypto-anarchy the
government is not temporarily destroyed but permanently forbidden and
permanently unnecessary. It's a community where the threat of violence is
impotent because violence is impossible, and violence is impossible
because its participants cannot be linked to their true names or physical locations.

Until now it's not clear, even theoretically, how such a community could
operate. A community is defined by the cooperation of its participants,
and efficient cooperation requires a medium of exchange (money) and a way
to enforce contracts. Traditionally these services have been provided by
the government or government sponsored institutions and only to legal
entities. In this article I describe a protocol by which these services
can be provided to and by untraceable entities.
ratty  unaffiliated  crypto-anarchy  crypto  cryptocurrency  coordination  contracts  money  institutions  org:junk  bitcoin  smart-contracts  ideas  blockchain  allodium  protocol-metadata 
june 2017 by nhaliday
how big was the edge? | West Hunter
One consideration in the question of what drove the Great Divergence [when Europe’s power and wealth came to greatly exceed that of the far East] is the extent to which Europe was already ahead in science, mathematics, and engineering. As I have said, at the highest levels Europe was already much more intellectually sophisticated than China. I have a partial list of such differences, but am interested in what my readers can come up with.

What were the European advantages in science, mathematics, and technology circa 1700? And, while we’re at it, in what areas did China/Japan/Korea lead at that point in time?

Before 1700, Ashkenazi Jews did not contribute to the growth of mathematics, science, or technology in Europe. As for the idea that they played a crucial financial role in this period – not so. Medicis, Fuggers.

I’m not so sure about China being behind in agricultural productivity.
Nor canal building. Miles ahead on that, I’d have thought.

China also had eyeglasses.
Well after they were invented in Italy.

I would say that although the Chinese discovered and invented many things, they never developed science, any more than they developed axiomatic mathematics.

I believe Chinese steel production led the world until late in the 18th century, though I haven’t found any references to support that.
Probably true in the late Sung period, but not later. [ed.: So 1200s AD.]

I confess I’m skeptical of your statement that the literacy rate in England in 1650 was 50%. Perhaps it was in London but the entire population?
More like 30%, for men, lower for women.

They did pretty well, considering that they were just butterflies dreaming that they were men.

But… there is a real sense in which the Elements, or the New Astronomy, or the Principia, are more sophisticated than anything Confucius ever said.

They’re not just complicated – they’re correct.
Tell me how to distinguish good speculative metaphysics from bad speculative metaphysics.

random side note:
- dysgenics running at -.5-1 IQ/generation in NW Europe since ~1800 and China by ~1960
- gap between east asians and europeans typically a bit less than .5 SD (or .3 SD if you look at mainland chinese not asian-americans?), similar variances
- 160/30 * 1/15 = .36, so could explain most of gap depending on when exactly dysgenics started
- maybe Europeans were just smarter back then? still seems like you need additional cultural/personality and historical factors. could be parasite load too.

scientifically than europe”. Nonsense, of course. Hellenistic science was more advanced than that of India and China in 1700 ! Although it makes me wonder the extent to which they’re teaching false history of science and technology in schools today- there’s apparently demand to blot out white guys from the story, which wouldn’t leave much.

Europe, back then, could be ridiculously sophisticated, at the highest levels. There had been no simple, accurate way of determining longitude – important in navigation, but also in mapmaking.


In the course of playing with this technique, the Danish astronomer Ole Rømer noted some discrepancies in the timing of those eclipses – they were farther apart when Earth and Jupiter were moving away from each other, closer together when the two planets were approaching each other. From which he deduced that light had a finite speed, and calculated the approximate value.

“But have you noticed having a better memory than other smart people you respect?”

Oh yes.
I think some people have a stronger meta-memory than others, which can work as a multiplier of their intelligence. For some, their memory is a well ordered set of pointers to where information exists. It’s meta-data, rather than data itself. For most people, their memory is just a list of data, loosely organized by subject. Mixed in may be some meta-data, but otherwise it is a closed container.

I suspect sociopaths and politicians have a strong meta-data layer.
west-hunter  discussion  history  early-modern  science  innovation  comparison  asia  china  divergence  the-great-west-whale  culture  society  technology  civilization  europe  frontier  arms  military  agriculture  discovery  coordination  literature  sinosphere  roots  anglosphere  gregory-clark  spearhead  parasites-microbiome  dysgenics  definite-planning  reflection  s:*  big-picture  🔬  track-record  scitariat  broad-econ  info-dynamics  chart  prepping  zeitgeist  rot  wealth-of-nations  cultural-dynamics  ideas  enlightenment-renaissance-restoration-reformation  occident  modernity  microfoundations  the-trenches  marginal  summary  orient  speedometer  the-world-is-just-atoms  gnon  math  geometry  defense  architecture  hari-seldon  multi  westminster  culture-war  identity-politics  twitter  social  debate  speed  space  examples  physics  old-anglo  giants  nordic  geography  navigation  maps  aphorism  poast  retention  neurons  thinking  finance  trivia  pro-rata  data  street-fighting  protocol-metadata  context  oceans 
march 2017 by nhaliday
The Common Law Corporation: The Power of the Trust in Anglo-American Business History
In a new article just published in the Columbia Law Review, I offer new answers by suggesting that if the corporate form mattered at all in Anglo-American legal history, it was not for the reasons we have long supposed. Based on a new examination of historical legal sources from the late Middle Ages to the middle of the twentieth century, I show that the basic powers of the corporate form were also available throughout most of modern history through an underappreciated but enormously important legal device known as the common law trust. The trust’s success at mimicking the corporate form meant that the corporate form was almost never the exclusive source of the legal features that have long been considered its key contribution to modern life.
study  summary  economics  industrial-org  coordination  institutions  history  law  business  anglo  usa  medieval  early-modern  mostly-modern  contracts  anglosphere  capitalism  cultural-dynamics  pre-ww2  corporation  axioms  organizing  protocol-metadata  innovation  finance  null-result  contrarianism 
march 2017 by nhaliday
Unenumerated: Genoa
The Genovese were the chief commercial innovators of the later Middle Ages, and if anything was key to their innovations it was their advanced contract law and their commitment to freedom of contract. Nothing showed this commitment more than its long struggle against Church doctrine banning usury, which at the time meant any charging of interest. Genovese contracts "hid" interest charges as profits (which were acceptable) or in exchange rates.
unaffiliated  szabo  history  economics  business  contracts  institutions  coordination  europe  mediterranean  britain  early-modern  the-great-west-whale  capitalism  insurance  broad-econ  cultural-dynamics  medieval  anglosphere  wealth-of-nations  divergence  enlightenment-renaissance-restoration-reformation  modernity  political-econ  microfoundations  protocol-metadata  debt  finance  innovation 
february 2017 by nhaliday
China invents the digital totalitarian state | The Economist
PROGRAMMING CHINA: The Communist Party’s autonomic approach to managing state security: https://www.merics.org/sites/default/files/2017-12/171212_China_Monitor_44_Programming_China_EN__0.pdf
- The Chinese Communist Party (CCP) has developed a form of authoritarianism that cannot be measured through traditional political scales like reform versus retrenchment. This version of authoritarianism involves both “hard” and “soft” authoritarian methods that constantly act together.
- To describe the social management process, this paper introduces a new analytical framework called China’s “Autonomic Nervous System” (ANS). This approach explains China’s social management process through a complex systems engineering framework. This framework mirrors the CCP’s Leninist way of thinking.
- The framework describes four key parts of social management, visualized through ANS’s “self-configuring,” “self-healing,” “self-optimizing” and “self-protecting” objectives.

China's Social Credit System: An Evolving Practice of Control: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3175792

The Chinese government is not the only entity that has access to millions of faces + identifying information. So do Google, Facebook, Instagram, and anyone who has scraped information from similar social networks (e.g., US security services, hackers, etc.).

In light of such ML capabilities it seems clear that anti-ship ballistic missiles can easily target a carrier during the final maneuver phase of descent, using optical or infrared sensors (let alone radar).

China goes all-in on technology the US is afraid to do right.
US won't learn its lesson in time for CRISPR or AI.

Artificial intelligence is developing fast in China. But is it likely to enable the suppression of freedoms? One of China's most successful investors, Neil Shen, has a short answer to that question. Also, Chinese AI companies now have the potential to overtake their Western rivals -- we explain why. Anne McElvoy hosts with The Economist's AI expert, Tom Standage

the dude just stonewalls when asked at 7:50, completely zipped lips

What you’re looking at above is the work of SenseTime, a Chinese computer vision startup. The software in question, called SenseVideo, is a visual scenario analytics system. Basically, it can analyse video footage to pinpoint whether moving objects are humans, cars, or other entities. It’s even sophisticated enough to detect gender, clothing, and the type of vehicle it’s looking at, all in real time.


Even China’s Backwater Cities Are Going Smart: http://www.sixthtone.com/news/1001452/even-chinas-backwater-cities-are-going-smart

remember that tweet with the ML readout of Chinese surveilance cameras? Get ready for the future (via @triviumchina)

Xi praised the organization and promised to help it beef up its operations (China
- "China will 'help ... 100 developing countries build or upgrade communication systems and crime labs in the next five years'"
- "The Chinese government will establish an international law enforcement institute under the Ministry of Public Security which will train 20,000 police for developing nations in the coming five years"

The Chinese connection to the Zimbabwe 'coup': http://www.cnn.com/2017/11/17/africa/china-zimbabwe-mugabe-diplomacy/index.html

China to create national name-and-shame system for ‘deadbeat borrowers’: http://www.scmp.com/news/china/economy/article/2114768/china-create-national-name-and-shame-system-deadbeat-borrowers
Anyone who fails to repay a bank loan will be blacklisted and have their personal details made public

China Snares Innocent and Guilty Alike to Build World’s Biggest DNA Database: https://www.wsj.com/articles/china-snares-innocent-and-guilty-alike-to-build-worlds-biggest-dna-database-1514310353
Police gather blood and saliva samples from many who aren’t criminals, including those who forget ID cards, write critically of the state or are just in the wrong place

Many of the ways Chinese police are collecting samples are impermissible in the U.S. In China, DNA saliva swabs or blood samples are routinely gathered from people detained for violations such as forgetting to carry identity cards or writing blogs critical of the state, according to documents from a national police DNA conference in September and official forensic journals.

Others aren’t suspected of any crime. Police target certain groups considered a higher risk to social stability. These include migrant workers and, in one city, coal miners and home renters, the documents show.


In parts of the country, law enforcement has stored DNA profiles with a subject’s other biometric information, including fingerprints, portraits and voice prints, the heads of the DNA program wrote in the Chinese journal Forensic Science and Technology last year. One provincial police force has floated plans to link the data to a person’s information such as online shopping records and entertainment habits, according to a paper presented at the national police DNA conference. Such high-tech files would create more sophisticated versions of paper dossiers that police have long relied on to keep tabs on citizens.

Marrying DNA profiles with real-time surveillance tools, such as monitoring online activity and cameras hooked to facial-recognition software, would help China’s ruling Communist Party develop an all-encompassing “digital totalitarian state,” says Xiao Qiang, adjunct professor at the University of California at Berkeley’s School of Information.


A teenage boy studying in one of the county’s high schools recalled that a policeman came into his class after lunch one day this spring and passed out the collection boxes. Male students were told to clean their mouths, spit into the boxes and place them into envelopes on which they had written their names.


Chinese police sometimes try to draw connections between ethnic background or place of origin and propensity for crime. Police officers in northwestern China’s Ningxia region studied data on local prisoners and noticed that a large number came from three towns. They decided to collect genetic material from boys and men from every clan to bolster the local DNA database, police said at the law-enforcement DNA conference in September.

China is certainly in the lead in the arena of digital-biometric monitoring. Particularly “interesting” is the proposal to merge DNA info with online behavioral profiling.



This is the thing I find the most disenchanting about the current political spectrum. It's all reheated ideas that are a century old, at least. Everyone wants to run our iPhone society with power structures dating to the abacus.
Thank God for the forward-thinking Chinese Communist Party and its high-tech social credit system!


INSIDE CHINA'S VAST NEW EXPERIMENT IN SOCIAL RANKING: https://www.wired.com/story/age-of-social-credit/

The government thinks "social credit" will fix the country's lack of trust — and the public agrees.

To be Chinese today is to live in a society of distrust, where every opportunity is a potential con and every act of generosity a risk of exploitation. When old people fall on the street, it’s common that no one offers to help them up, afraid that they might be accused of pushing them in the first place and sued. The problem has grown steadily since the start of the country’s economic boom in the 1980s. But only recently has the deficit of social trust started to threaten not just individual lives, but the country’s economy and system of politics as a whole. The less people trust each other, the more the social pact that the government has with its citizens — of social stability and harmony in exchange for a lack of political rights — disintegrates.

All of which explains why Chinese state media has recently started to acknowledge the phenomenon — and why the government has started searching for solutions. But rather than promoting the organic return of traditional morality to reduce the gulf of distrust, the Chinese government has preferred to invest its energy in technological fixes. It’s now rolling out systems of data-driven “social credit” that will purportedly address the problem by tracking “good” and “bad” behavior, with rewards and punishments meted out accordingly. In the West, plans of this sort have tended to spark fears about the reach of the surveillance state. Yet in China, it’s being welcomed by a public fed up of not knowing who to trust.

It’s unsurprising that a system that promises to place a check on unfiltered power has proven popular — although it’s… [more]
news  org:rec  org:biz  china  asia  institutions  government  anglosphere  privacy  civil-liberty  individualism-collectivism  org:anglo  technocracy  authoritarianism  managerial-state  intel  sinosphere  order-disorder  madisonian  orient  n-factor  internet  domestication  multi  commentary  hn  society  huge-data-the-biggest  unaffiliated  twitter  social  trust  hsu  scitariat  anonymity  computer-vision  gnon  🐸  leviathan  arms  oceans  sky  open-closed  alien-character  dirty-hands  backup  podcast  audio  interview  ai  antidemos  video  org:foreign  ratty  postrat  expansionism  developing-world  debt  corruption  anomie  organizing  dark-arts  alt-inst  org:lite  africa  orwellian  innovation  biotech  enhancement  GWAS  genetics  genomics  trends  education  crime  criminal-justice  criminology  journos-pundits  chart  consumerism  entertainment  within-group  urban-rural  geography  org:mag  modernity  flux-stasis  hmm  comparison  speedometer  reddit  discussion  ssc  mobile  futurism  absolute-relative  apple  scale  cohesion  cooperate-defect  coordination  egalit 
january 2017 by nhaliday
Common law and the origin of shareholder protection
This paper examines the origins of investor protection under the common law by analysing the development of shareholder protection in Victorian Britain, the home of the common law. In this era, very little was codified, with corporate law simply suggesting a default template of rules. Ultimately, the matter of protection was one for the corporation and its shareholders. Using c.500 articles of association and ownership records of publicly-traded Victorian corporations, we find that corporations afforded investors with just as much protection as is present in modern corporate law and that firms with better shareholder protection had more diffuse ownership.
study  economics  cliometrics  industrial-revolution  law  britain  institutions  anglosphere  business  finance  history  contracts  industrial-org  anglo  wonkish  early-modern  roots  the-great-west-whale  capitalism  broad-econ  political-econ  pre-ww2  modernity  north-weingast-like  corporation  axioms  organizing  interests  protocol-metadata  innovation 
january 2017 by nhaliday
Can Smart Contracts Be Legally Binding? | Elaine's Idle Mind
Smart contracts make it so lawyers don’t get to argue over nonsense and write 52-page papers discussing clickwrap case law. The whole point of a smart contract is to NOT go to court.

If you need to ask whether your smart contract is legally enforceable, you’re doing it wrong. Smart contracts make it so people don’t have to litigate over details like “Did this guy pay for parking or not?” Sure, smart contracts should be designed to model the common-law process of contract formation – not because that makes them legally binding, but because it’s a highly-evolved process that has been used for hundreds of years.
smart-contracts  contracts  essay  reflection  blockchain  crypto-anarchy  law  contrarianism  protocol-metadata 
december 2016 by nhaliday
Rob Pike: Notes on Programming in C
Issues of typography
Sometimes they care too much: pretty printers mechanically produce pretty output that accentuates irrelevant detail in the program, which is as sensible as putting all the prepositions in English text in bold font. Although many people think programs should look like the Algol-68 report (and some systems even require you to edit programs in that style), a clear program is not made any clearer by such presentation, and a bad program is only made laughable.
Typographic conventions consistently held are important to clear presentation, of course - indentation is probably the best known and most useful example - but when the ink obscures the intent, typography has taken over.


Finally, I prefer minimum-length but maximum-information names, and then let the context fill in the rest. Globals, for instance, typically have little context when they are used, so their names need to be relatively evocative. Thus I say maxphysaddr (not MaximumPhysicalAddress) for a global variable, but np not NodePointer for a pointer locally defined and used. This is largely a matter of taste, but taste is relevant to clarity.


C is unusual in that it allows pointers to point to anything. Pointers are sharp tools, and like any such tool, used well they can be delightfully productive, but used badly they can do great damage (I sunk a wood chisel into my thumb a few days before writing this). Pointers have a bad reputation in academia, because they are considered too dangerous, dirty somehow. But I think they are powerful notation, which means they can help us express ourselves clearly.
Consider: When you have a pointer to an object, it is a name for exactly that object and no other.


A delicate matter, requiring taste and judgement. I tend to err on the side of eliminating comments, for several reasons. First, if the code is clear, and uses good type names and variable names, it should explain itself. Second, comments aren't checked by the compiler, so there is no guarantee they're right, especially after the code is modified. A misleading comment can be very confusing. Third, the issue of typography: comments clutter code.
But I do comment sometimes. Almost exclusively, I use them as an introduction to what follows.


Most programs are too complicated - that is, more complex than they need to be to solve their problems efficiently. Why? Mostly it's because of bad design, but I will skip that issue here because it's a big one. But programs are often complicated at the microscopic level, and that is something I can address here.
Rule 1. You can't tell where a program is going to spend its time. Bottlenecks occur in surprising places, so don't try to second guess and put in a speed hack until you've proven that's where the bottleneck is.

Rule 2. Measure. Don't tune for speed until you've measured, and even then don't unless one part of the code overwhelms the rest.

Rule 3. Fancy algorithms are slow when n is small, and n is usually small. Fancy algorithms have big constants. Until you know that n is frequently going to be big, don't get fancy. (Even if n does get big, use Rule 2 first.) For example, binary trees are always faster than splay trees for workaday problems.

Rule 4. Fancy algorithms are buggier than simple ones, and they're much harder to implement. Use simple algorithms as well as simple data structures.

The following data structures are a complete list for almost all practical programs:

linked list
hash table
binary tree
Of course, you must also be prepared to collect these into compound data structures. For instance, a symbol table might be implemented as a hash table containing linked lists of arrays of characters.
Rule 5. Data dominates. If you've chosen the right data structures and organized things well, the algorithms will almost always be self-evident. Data structures, not algorithms, are central to programming. (See The Mythical Man-Month: Essays on Software Engineering by F. P. Brooks, page 102.)

Rule 6. There is no Rule 6.
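The compound structure Pike mentions above – a symbol table built as a hash table containing linked lists of arrays of characters – can be sketched minimally (details like the hash function and bucket count are my choices, not his):

```c
#include <stdlib.h>
#include <string.h>

/* A symbol table: a hash table whose buckets are linked lists of
 * inline character arrays (interned names). */
enum { NHASH = 128 };

typedef struct Sym Sym;
struct Sym {
    Sym *next;
    char name[1];   /* name stored inline past the struct */
};

static Sym *tab[NHASH];

static unsigned hash(const char *s) {
    unsigned h = 0;
    while (*s)
        h = h * 31 + (unsigned char)*s++;
    return h % NHASH;
}

/* Return the canonical interned copy of name, inserting on first sight.
 * Two interned names are equal iff the returned pointers are equal. */
const char *intern(const char *name) {
    unsigned h = hash(name);
    Sym *p;
    for (p = tab[h]; p; p = p->next)
        if (strcmp(p->name, name) == 0)
            return p->name;
    p = malloc(sizeof *p + strlen(name));
    strcpy(p->name, name);
    p->next = tab[h];
    tab[h] = p;
    return p->name;
}
```

Once the data structure is right, the lookup algorithm is, as Rule 5 predicts, self-evident.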

Programming with data.
One of the reasons data-driven programs are not common, at least among beginners, is the tyranny of Pascal. Pascal, like its creator, believes firmly in the separation of code and data. It therefore (at least in its original form) has no ability to create initialized data. This flies in the face of the theories of Turing and von Neumann, which define the basic principles of the stored-program computer. Code and data are the same, or at least they can be. How else can you explain how a compiler works? (Functional languages have a similar problem with I/O.)
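A small example of what C's initialized data buys you, of the kind Pike is alluding to: a table drives the logic, so adding a case means adding a row, not another branch. The unit names here are invented for illustration:

```c
#include <string.h>

/* Data-driven dispatch: an initialized table replaces a chain of ifs.
 * (Hypothetical example table, not from Pike's notes.) */
static const struct {
    const char *name;
    int value;
} units[] = {
    { "byte", 1 },
    { "kb",   1024 },
    { "mb",   1024 * 1024 },
};

int unitsize(const char *name) {
    size_t i;
    for (i = 0; i < sizeof units / sizeof units[0]; i++)
        if (strcmp(units[i].name, name) == 0)
            return units[i].value;
    return -1;   /* unknown unit */
}
```

Pascal (in its original form) could not express the `units` table at all; the code would have to build it at run time.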

Function pointers
Another result of the tyranny of Pascal is that beginners don't use function pointers. (You can't have function-valued variables in Pascal.) Using function pointers to encode complexity has some interesting properties.
Some of the complexity is passed to the routine pointed to. The routine must obey some standard protocol - it's one of a set of routines invoked identically - but beyond that, what it does is its business alone. The complexity is distributed.

There is this idea of a protocol, in that all functions used similarly must behave similarly. This makes for easy documentation, testing, growth and even making the program run distributed over a network - the protocol can be encoded as remote procedure calls.

I argue that clear use of function pointers is the heart of object-oriented programming. Given a set of operations you want to perform on data, and a set of data types you want to respond to those operations, the easiest way to put the program together is with a group of function pointers for each type. This, in a nutshell, defines class and method. The O-O languages give you more of course - prettier syntax, derived types and so on - but conceptually they provide little extra.
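A minimal sketch of that claim: one operation table per type, and "method call" is just an indirection through the table. The `Shape` names and layout are my invention for illustration, not Pike's code:

```c
/* Function pointers as class-and-method: each type supplies a table
 * of operations obeying a common protocol. */
typedef struct Shape Shape;

typedef struct {
    double (*area)(const Shape *);
} ShapeOps;

struct Shape {
    const ShapeOps *ops;   /* per-type operation table ("vtable") */
    double a, b;           /* dimensions; meaning depends on type  */
};

static const double PI = 3.14159265358979323846;

static double rect_area(const Shape *s)   { return s->a * s->b; }
static double circle_area(const Shape *s) { return PI * s->a * s->a; }

const ShapeOps rect_ops   = { rect_area };
const ShapeOps circle_ops = { circle_area };

/* The caller invokes every shape identically; the complexity is
 * distributed to the routines pointed to. */
double area(const Shape *s) { return s->ops->area(s); }
```

`area` neither knows nor cares which type it was handed – which is exactly the "standard protocol" property Pike describes.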


Include files
Simple rule: include files should never include include files. If instead they state (in comments or implicitly) what files they need to have included first, the problem of deciding which files to include is pushed to the user (programmer) but in a way that's easy to handle and that, by construction, avoids multiple inclusions. Multiple inclusions are a bane of systems programming. It's not rare to have files included five or more times to compile a single C source file. The Unix /usr/include/sys stuff is terrible this way.
There's a little dance involving #ifdef's that can prevent a file being read twice, but it's usually done wrong in practice - the #ifdef's are in the file itself, not the file that includes it. The result is often thousands of needless lines of code passing through the lexical analyzer, which is (in good compilers) the most expensive phase.

Just follow the simple rule.
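Concretely, the rule moves the guard from the header into the includer. A sketch with hypothetical file names (this is a layout fragment, not a complete program):

```c
/* foo.h: states its prerequisites in a comment, includes nothing.
 *   needs: <stdio.h> included first
 *   #define FOO_H_INCLUDED
 *   ... declarations ...
 */

/* main.c: decides the order once, and guards at the point of use,
 * so foo.h is never even opened a second time: */
#include <stdio.h>
#ifndef FOO_H_INCLUDED
#include "foo.h"
#endif
```

The common alternative – the `#ifdef` inside `foo.h` itself – still forces the compiler to read the whole file on every repeated inclusion before the guard takes effect, which is the waste Pike objects to.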

cf https://stackoverflow.com/questions/1101267/where-does-the-compiler-spend-most-of-its-time-during-parsing
First, I don't think it actually is true: in many compilers, most time is not spent in lexing source code. For example, in C++ compilers (e.g. g++), most time is spent in semantic analysis, in particular in overload resolution (trying to find out what implicit template instantiations to perform). Also, in C and C++, most time is often spent in optimization (creating graph representations of individual functions or the whole translation unit, and then running long algorithms on these graphs).

When comparing lexical and syntactical analysis, it may indeed be the case that lexical analysis is more expensive. This is because both use state machines, i.e. there is a fixed number of actions per element, but the number of elements is much larger in lexical analysis (characters) than in syntactical analysis (tokens).

programming  systems  philosophy  c(pp)  summer-2014  intricacy  engineering  rhetoric  contrarianism  diogenes  parsimony  worse-is-better/the-right-thing  data-structures  list  algorithms  stylized-facts  essay  ideas  performance  functional  state  pls  oop  gotchas  blowhards  duplication  compilers  syntax  lexical  checklists  metabuch  lens  notation  thinking  neurons  guide  pareto  heuristic  time  cost-benefit  multi  q-n-a  stackex  plt  hn  commentary  minimalism  techtariat  rsc  writing  technical-writing  cracker-prog  code-organizing  grokkability  protocol-metadata  direct-indirect  grokkability-clarity  latency-throughput 
august 2014 by nhaliday
