evidence-based   441


An Eye Tracking Study on camelCase and under_score Identifier Styles - IEEE Conference Publication
One main difference is that subjects were trained mainly in the underscore style and were all programmers. While results indicate no difference in accuracy between the two styles, subjects recognize identifiers in the underscore style more quickly.

To camelCase or under_score: https://citeseerx.ist.psu.edu/viewdoc/summary?doi=
An empirical study of 135 programmers and non-programmers was conducted to better understand the impact of identifier style on code readability. The experiment builds on past work of others who study how readers of natural language perform such tasks. Results indicate that camel casing leads to higher accuracy among all subjects regardless of training, and those trained in camel casing are able to recognize identifiers in the camel case style faster than identifiers in the underscore style.

A 2009 study comparing snake case to camel case found that camel case identifiers could be recognised with higher accuracy among both programmers and non-programmers, and that programmers already trained in camel case were able to recognise those identifiers faster than underscored snake-case identifiers.[35]

A 2010 follow-up study, under the same conditions but using an improved measurement method with use of eye-tracking equipment, indicates: "While results indicate no difference in accuracy between the two styles, subjects recognize identifiers in the underscore style more quickly."[36]
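The two identifier styles being compared can be seen side by side in a minimal sketch; the function names and the URL-parsing task are hypothetical, chosen only to make the visual difference concrete:

```python
# The same identifier written in the two styles compared by these studies.
# Names and the task are illustrative only.

def extractHostName(url):           # camelCase
    return url.split("//")[-1].split("/")[0]

def extract_host_name(url):         # under_score ("snake case")
    return url.split("//")[-1].split("/")[0]

print(extract_host_name("https://example.com/path"))  # example.com
```

The studies above asked subjects to pick identifiers like these out of distractor lists, measuring recognition accuracy and speed.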
study  psychology  cog-psych  hci  programming  best-practices  stylized-facts  null-result  multi  wiki  reference  concept  protocol  empirical  evidence-based  efficiency  accuracy  time 
yesterday by nhaliday
history - Why are UNIX/POSIX system call namings so illegible? - Unix & Linux Stack Exchange
It's due to the technical constraints of the time. The POSIX standard was created in the 1980s and referred to UNIX, which was born in the 1970s. Several C compilers at that time were limited to identifiers that were 6 or 8 characters long, so that settled the standard for the length of variable and function names.
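The 6-character limit described above is still visible in classic UNIX/libc names; a quick check (the names are real, the check itself is just illustrative):

```python
# Classic UNIX/libc identifiers, all of which fit the old 6-character
# external-identifier limit mentioned above.
legacy_names = ["creat", "chmod", "fcntl", "strcpy", "unlink", "malloc"]

for name in legacy_names:
    assert len(name) <= 6, name

print(max(len(n) for n in legacy_names))  # 6
```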

We carried out a family of controlled experiments to investigate whether the use of abbreviated identifier names, with respect to full-word identifier names, affects fault fixing in C and Java source code. This family consists of an original (or baseline) controlled experiment and three replications. We involved 100 participants with different backgrounds and experiences in total. Overall results suggested that there is no difference in terms of effort, effectiveness, and efficiency to fix faults, when source code contains either only abbreviated or only full-word identifier names. We also conducted a qualitative study to understand the values, beliefs, and assumptions that inform and shape fault fixing when identifier names are either abbreviated or full-word. We involved in this qualitative study six professional developers with 1--3 years of work experience. A number of insights emerged from this qualitative study and can be considered a useful complement to the quantitative results from our family of experiments. One of the most interesting insights is that developers, when working on source code with abbreviated identifier names, adopt a more methodical approach to identify and fix faults by extending their focus point and only in a few cases do they expand abbreviated identifiers.
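The two experimental conditions (abbreviated-only vs full-word-only identifiers) can be sketched with a hypothetical function; these names are illustrative, not taken from the study's materials:

```python
# The same (hypothetical) function under the two naming conditions
# compared in the fault-fixing experiments.

def avg_tmp(ts):                        # abbreviated identifiers
    return sum(ts) / len(ts)

def average_temperature(temperatures):  # full-word identifiers
    return sum(temperatures) / len(temperatures)

print(avg_tmp([1.0, 2.0, 3.0]))  # 2.0
```

The study's finding is that fixing a fault in either version takes comparable effort, though readers of the abbreviated version reportedly work more methodically.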
q-n-a  stackex  trivia  programming  os  systems  legacy  legibility  ux  libraries  unix  linux  hacker  cracker-prog  multi  evidence-based  empirical  expert-experience  engineering  study  best-practices  comparison  quality  debugging  efficiency  time 
2 days ago by nhaliday
An Efficiency Comparison of Document Preparation Systems Used in Academic Research and Development
The choice of an efficient document preparation system is an important decision for any academic researcher. To assist the research community, we report a software usability study in which 40 researchers across different disciplines prepared scholarly texts with either Microsoft Word or LaTeX. The probe texts included simple continuous text, text with tables and subheadings, and complex text with several mathematical equations. We show that LaTeX users were slower than Word users, wrote less text in the same amount of time, and produced more typesetting, orthographical, grammatical, and formatting errors. On most measures, expert LaTeX users performed even worse than novice Word users. LaTeX users, however, more often report enjoying using their respective software. We conclude that even experienced LaTeX users may suffer a loss in productivity when LaTeX is used, relative to other document preparation systems. Individuals, institutions, and journals should carefully consider the ramifications of this finding when choosing document preparation strategies, or requiring them of authors.


However, our study suggests that LaTeX should be used as a document preparation system only in cases in which a document is heavily loaded with mathematical equations. For all other types of documents, our results suggest that LaTeX reduces the user’s productivity and results in more orthographical, grammatical, and formatting errors, more typos, and less written text than Microsoft Word over the same duration of time. LaTeX users may argue that the overall quality of the text that is created with LaTeX is better than the text that is created with Microsoft Word. Although this argument may be true, the differences between text produced in more recent editions of Microsoft Word and text produced in LaTeX may be less obvious than it was in the past. Moreover, we believe that the appearance of text matters less than the scientific content and impact to the field. In particular, LaTeX is also used frequently for text that does not contain a significant amount of mathematical symbols and formula. We believe that the use of LaTeX under these circumstances is highly problematic and that researchers should reflect on the criteria that drive their preferences to use LaTeX over Microsoft Word for text that does not require significant mathematical representations.


A second decision criterion that factors into the choice to use a particular software system is reflection about what drives certain preferences. A striking result of our study is that LaTeX users are highly satisfied with their system despite reduced usability and productivity. From a psychological perspective, this finding may be related to motivational factors, i.e., the driving forces that compel or reinforce individuals to act in a certain way to achieve a desired goal. A vital motivational factor is the tendency to reduce cognitive dissonance. According to the theory of cognitive dissonance, each individual has a motivational drive to seek consonance between their beliefs and their actual actions. If a belief set does not concur with the individual’s actual behavior, then it is usually easier to change the belief rather than the behavior [6]. The results from many psychological studies in which people have been asked to choose between one of two items (e.g., products, objects, gifts, etc.) and then asked to rate the desirability, value, attractiveness, or usefulness of their choice, report that participants often reduce unpleasant feelings of cognitive dissonance by rationalizing the chosen alternative as more desirable than the unchosen alternative [6, 7]. This bias is usually unconscious and becomes stronger as the effort to reject the chosen alternative increases, which is similar in nature to the case of learning and using LaTeX.


Given these numbers it remains an open question to determine the amount of taxpayer money that is spent worldwide for researchers to use LaTeX over a more efficient document preparation system, which would free up their time to advance their respective field. Some publishers may save a significant amount of money by requesting or allowing LaTeX submissions because a well-formed LaTeX document complying with a well-designed class file (template) is much easier to bring into their publication workflow. However, this is at the expense of the researchers’ labor time and effort. We therefore suggest that leading scientific journals should consider accepting submissions in LaTeX only if this is justified by the level of mathematics presented in the paper. In all other cases, we think that scholarly journals should request authors to submit their documents in Word or PDF format. We believe that this would be a good policy for two reasons. First, we think that the appearance of the text is secondary to the scientific merit of an article and its impact to the field. And, second, preventing researchers from producing documents in LaTeX would save time and money to maximize the benefit of research and development for both the research team and the public.

[ed.: I sense some salt.

And basically no description of how "# errors" was calculated.]

I question the validity of their methodology.
At no point in the paper is exactly what is meant by a "formatting error" or a "typesetting error" defined. From what I gather, the participants in the study were required to reproduce the formatting and layout of the sample text. In theory, a LaTeX file should strictly be a semantic representation of the content of the document; while TeX may have been a raw typesetting language, this is most definitely not the intended use case of LaTeX and is overall a very poor test of its relative advantages and capabilities.
The separation of the semantic definition of the content from the rendering of the document is, in my opinion, the most important feature of LaTeX. Like CSS, this allows the actual formatting to be abstracted away, allowing plain (marked-up) content to be written without worrying about typesetting.
Word has some similar capabilities with styles, and can be used in a similar manner, though few Word users actually use the software properly. This may sound like a relatively insignificant point, but in practice, almost every Word document I have seen has some form of inconsistent formatting. If Word disallowed local formatting changes (including things such as relative spacing of nested bullet points), forcing all formatting changes to be done in document-global styles, it would be a far better typesetting system. Also, the users would be very unhappy.
Yes, LaTeX can undeniably be a pain in the arse, especially when it comes to trying to get figures in the right place; however, the advantages of combining a simple, semantic plain-text representation with a flexible, professional typesetting and rendering engine are undeniable and completely unaddressed by this study.
It seems that the test was heavily biased in favor of WYSIWYG.
Of course that approach makes it very simple to reproduce something, as has been tested here. Even simpler would be to scan the document and run OCR. The massive problem with both approaches (WYSIWYG and scanning) is that you can't generalize any of it. You're doomed to repeat it forever.
(I'll also note the other significant issue with this study: when the ratings provided by participants came out opposite of their test results, they attributed it to irrational bias.)
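The semantic/presentational separation the commenters describe looks like this in practice; a minimal LaTeX fragment (the section title and wording are made up for illustration):

```latex
% A sketch of semantic markup: the source says WHAT things are
% (a section, emphasis); the class and preamble decide how they render.
\documentclass{article}
\begin{document}
\section{Results}   % semantic: "this is a section", not "bold, 14pt"
We observed a \emph{significant} effect of identifier style on speed.
\end{document}
```

Changing the document class (or a style file) restyles every section and emphasis at once, which is the abstraction the study's copy-the-layout task never exercised.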

Over the past few years however, the line between the tools has blurred. In 2017, Microsoft made it possible to use LaTeX’s equation-writing syntax directly in Word, and last year it scrapped Word’s own equation editor. Other text editors also support elements of LaTeX, allowing newcomers to use as much or as little of the language as they like.

study  hmm  academia  writing  publishing  yak-shaving  technical-writing  software  tools  comparison  latex  scholar  regularizer  idk  microsoft  evidence-based  science  desktop  time  efficiency  multi  hn  commentary  critique  news  org:sci  flux-stasis  duplication  metrics  biases 
28 days ago by nhaliday
When to use C over C++, and C++ over C? - Software Engineering Stack Exchange
You pick C when
- you need portable assembler (which is what C is, really) for whatever reason,
- your platform doesn't provide C++ (a C compiler is much easier to implement),
- you need to interact with other languages that can only interact with C (usually the lowest common denominator on any platform) and your code consists of little more than the interface, not making it worth to lay a C interface over C++ code,
- you hack in an Open Source project (many of which, for various reasons, stick to C),
- you don't know C++.
In all other cases you should pick C++.


At the same time, I have to say that @Toll's answers (for one obvious example) have things just about backwards in most respects. Reasonably written C++ will generally be at least as fast as C, and often at least a little faster. Readability is generally much better, if only because you don't get buried in an avalanche of all the code for even the most trivial algorithms and data structures, all the error handling, etc.


As it happens, C and C++ are fairly frequently used together on the same projects, maintained by the same people. This allows something that's otherwise quite rare: a study that directly, objectively compares the maintainability of code written in the two languages by people who are equally competent overall (i.e., the exact same people). At least in the linked study, one conclusion was clear and unambiguous: "We found that using C++ instead of C results in improved software quality and reduced maintenance effort..."


(Side-note: Check out Linus Torvalds' rant on why he prefers C to C++. I don't necessarily agree with his points, but it gives you insight into why people might choose C over C++. Rather, people who agree with him might choose C for these reasons.)


Why would anybody use C over C++? [closed]: https://stackoverflow.com/questions/497786/why-would-anybody-use-c-over-c
Joel's answer is good for reasons you might have to use C, though there are a few others:
- You must meet industry guidelines, which are easier to prove and test for in C.
- You have tools to work with C, but not C++ (think not just about the compiler, but all the support tools, coverage, analysis, etc)
- Your target developers are C gurus
- You're writing drivers, kernels, or other low level code
- You know the C++ compiler isn't good at optimizing the kind of code you need to write
- Your app not only doesn't lend itself to being object-oriented, but would be harder to write in that form

In some cases, though, you might want to use C rather than C++:
- You want the performance of assembler without the trouble of coding in assembler (C++ is, in theory, capable of 'perfect' performance, but the compilers aren't as good at seeing optimizations a good C programmer will see)
- The software you're writing is trivial, or nearly so - whip out the tiny C compiler, write a few lines of code, compile and you're all set - no need to open a huge editor with helpers, no need to write practically empty and useless classes, deal with namespaces, etc. You can do nearly the same thing with a C++ compiler and simply use the C subset, but the C++ compiler is slower, even for tiny programs.
- You need extreme performance or small code size, and know the C++ compiler will actually make it harder to accomplish due to the size and performance of the libraries
- You contend that you could just use the C subset and compile with a C++ compiler, but you'll find that if you do that you'll get slightly different results depending on the compiler.

Regardless, if you're doing that, you're using C. Is your question really "Why don't C programmers use C++ compilers?" If it is, then you either don't understand the language differences, or you don't understand compiler theory.


- Because they already know C
- Because they're building an embedded app for a platform that only has a C compiler
- Because they're maintaining legacy software written in C
- You're writing something on the level of an operating system, a relational database engine, or a retail 3D video game engine.
q-n-a  stackex  programming  engineering  pls  best-practices  impetus  checklists  c(pp)  systems  assembly  compilers  hardware  embedded  oss  links  study  evidence-based  devtools  performance  rant  expert-experience  types  blowhards  linux  git  vcs  debate  rhetoric  worse-is-better/the-right-thing  cracker-prog 
9 weeks ago by nhaliday
A compendium of innovation methods | Nesta
From experimentation to standards of evidence, this new compendium brings together 13 innovation methods in a practical, colourful guide, packed with examples.
Methods  innovation  evidence-based 
march 2019 by weitzenegger
A cross-language perspective on speech information rate
Figure 2.

English (IR_EN = 1.08) shows a higher Information Rate than Vietnamese (IR_VI = 1). On the contrary, Japanese exhibits the lowest IR_L value of the sample. Moreover, one can observe that several languages may reach very close IR_L with different encoding strategies: Spanish is characterized by a fast rate of low-density syllables, while Mandarin exhibits a 34% slower syllabic rate with syllables 'denser' by a factor of 49%. Finally, their Information Rates differ only by 4%.
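The arithmetic behind the Spanish/Mandarin comparison is information rate = syllabic rate × information density per syllable. A sketch with stand-in numbers (the rate and density values below are assumptions, not the paper's data; only the 34%/49% ratios come from the quoted text):

```python
# Information rate = syllables per second x information per syllable.
# spanish_rate/spanish_density are illustrative stand-ins; the 0.34 and
# 1.49 factors are the ratios quoted in the excerpt above.
spanish_rate, spanish_density = 7.8, 0.63

mandarin_rate = spanish_rate * (1 - 0.34)   # 34% slower syllabic rate
mandarin_density = spanish_density * 1.49   # syllables denser by 49%

ir_spanish = spanish_rate * spanish_density
ir_mandarin = mandarin_rate * mandarin_density

# The two information rates land within a few percent of each other.
print(abs(ir_mandarin / ir_spanish - 1))    # ~0.017
```

The point of the exercise: (1 − 0.34) × 1.49 ≈ 0.98, so a slower, denser language can carry information at nearly the same rate as a faster, sparser one.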

Is spoken English more efficient than other languages?: https://linguistics.stackexchange.com/questions/2550/is-spoken-english-more-efficient-than-other-languages
As a translator, I can assure you that English is no more efficient than other languages.
[some comments on a different answer:]
Russian, when spoken, is somewhat less efficient than English, and that is for sure. No one who has ever worked as an interpreter can deny it. You can convey somewhat more information in English than in Russian within an hour. The English language is not constrained by the rigid case and gender systems of the Russian language, which somewhat reduce the information density of the Russian language. The rules of the Russian language force the speaker to incorporate sometimes unnecessary details in his speech, which can be problematic for interpreters – user74809 Nov 12 '18 at 12:48
But in writing, though, I do think that Russian is somewhat superior. However, when it comes to common daily speech, I do not think that anyone can claim that English is less efficient than Russian. As a matter of fact, I also find Russian to be somewhat more mentally taxing than English when interpreting. I mean, anyone who has lived in the world of Russian and then moved to the world of English is certain to notice that English is somewhat more efficient in everyday life. It is not a night-and-day difference, but it is certainly noticeable. – user74809 Nov 12 '18 at 13:01
By the way, I am not knocking Russian. I love Russian, it is my mother tongue and the only language, in which I sound like a native speaker. I mean, I still have a pretty thick Russian accent. I am not losing it anytime soon, if ever. But like I said, living in both worlds, the Moscow world and the Washington D.C. world, I do notice that English is objectively more efficient, even if I am myself not as efficient in it as most other people. – user74809 Nov 12 '18 at 13:40

Do most languages need more space than English?: https://english.stackexchange.com/questions/2998/do-most-languages-need-more-space-than-english
Speaking as a translator, I can share a few rules of thumb that are popular in our profession:
- Hebrew texts are usually shorter than their English equivalents by approximately 1/3. To a large extent, that can be attributed to cheating, what with no vowels and all.
- Spanish, Portuguese and French (I guess we can just settle on Romance) texts are longer than their English counterparts by about 1/5 to 1/4.
- Scandinavian languages are pretty much on par with English. Swedish is a tiny bit more compact.
- Whether or not Russian (and by extension, Ukrainian and Belorussian) is more compact than English is subject to heated debate, and if you ask five people, you'll be presented with six different opinions. However, everybody seems to agree that the difference is just a couple percent, be it this way or the other.


A point of reference from the website I maintain. The files where we store the translations have the following sizes:

English: 200k
Portuguese: 208k
Spanish: 209k
German: 219k
And the translations are out of date. That is, there are strings in the English file that aren't yet in the other files.

For Chinese, the situation is a bit different because the character encoding comes into play. Chinese text will have shorter strings, because most words are one or two characters, but each character takes 3–4 bytes (for UTF-8 encoding), so each word is 3–12 bytes long on average. So visually the text takes less space but in terms of the information exchanged it uses more space. This Language Log post suggests that if you account for the encoding and remove redundancy in the data using compression you find that English is slightly more efficient than Chinese.
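The byte-count claim is directly checkable: common (BMP) Chinese characters encode to 3 bytes each in UTF-8, while ASCII letters cost 1 byte each. A quick sketch:

```python
# Each common (BMP) Chinese character is 3 bytes in UTF-8; characters
# outside the BMP take 4 bytes, hence the "3-4 bytes" range above.
word = "中文"                            # "Chinese (language)", two characters
print(len(word))                         # 2 characters
print(len(word.encode("utf-8")))         # 6 bytes

# An ASCII word of comparable meaning costs 1 byte per character:
print(len("Chinese".encode("utf-8")))    # 7 bytes
```

So a visually shorter Chinese string can still occupy as many or more bytes than its English counterpart, which is why the Language Log comparison works on compressed sizes rather than character counts.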

Is English more efficient than Chinese after all?: https://languagelog.ldc.upenn.edu/nll/?p=93
[Executive summary: Who knows?]

This follows up on a series of earlier posts about the comparative efficiency — in terms of text size — of different languages ("One world, how many bytes?", 8/5/2005; "Comparing communication efficiency across languages", 4/4/2008; "Mailbag: comparative communication efficiency", 4/5/2008). Hinrich Schütze wrote:
pdf  study  language  foreign-lang  linguistics  pro-rata  bits  communication  efficiency  density  anglo  japan  asia  china  mediterranean  data  multi  comparison  writing  meta:reading  measure  compression  empirical  evidence-based  experiment  analysis  chart  trivia  cocktail 
february 2019 by nhaliday
Know someone who wants to be -based & -driven in helping us fundraise for evidence-based programs?
evidence-based  data-driven  from twitter_favs
january 2019 by Varna
Want to work on the "science of " -based programs in w/ rockstars like…
globaldev  evidence-based  scaling  from twitter_favs
june 2018 by Varna
Randomizing Religion: The Impact of Protestant Evangelism on Economic Outcomes
To test the causal impact of religiosity, we conducted a randomized evaluation of an evangelical Protestant Christian values and theology education program that consisted of 15 weekly half-hour sessions. We analyze outcomes for 6,276 ultra-poor Filipino households six months after the program ended. We find _significant increases in religiosity and income_, no significant changes in total labor supply, assets, consumption, food security, or _life satisfaction, and a significant decrease in perceived relative economic status_. Exploratory analysis suggests the program may have improved hygienic practices and increased household discord, and that _the income treatment effect may operate through increasing grit_.


Social Cohesion, Religious Beliefs, and the Effect of Protestantism on Suicide: https://www.mitpressjournals.org/doi/abs/10.1162/REST_a_00708
In an economic theory of suicide, we model social cohesion of the religious community and religious beliefs about afterlife as two mechanisms by which Protestantism increases suicide propensity. We build a unique micro-regional dataset of 452 Prussian counties in 1816-21 and 1869-71, when religiousness was still pervasive. Exploiting the concentric dispersion of Protestantism around Wittenberg, our instrumental-variable model finds that Protestantism had a substantial positive effect on suicide. Results are corroborated in first-difference models. Tests relating to the two mechanisms based on historical church-attendance data and modern suicide data suggest that the sociological channel plays the more important role.

this is also mentioned in the survey of reformation effects (under "dark" effects)
study  field-study  sociology  wonkish  intervention  religion  theos  branches  evidence-based  christianity  protestant-catholic  asia  developing-world  economics  compensation  money  labor  human-capital  emotion  s-factor  discipline  multi  social-structure  death  individualism-collectivism  n-factor  cohesion  causation  endogenous-exogenous  history  early-modern  europe  germanic  geography  within-group  urban-rural  marginal-rev  econotariat  commentary  class  personality  social-psych 
february 2018 by nhaliday
Cold open water plunge may provide instant pain relief - BBC News
see nordic traditions, see russian traditions, and see german traditional "alt" medicine prescribed by Krankenkassen (wechselbaeder, wadenbaeder, cold showers, sauna, retreats, water fasting) // cold showers - http://casereports.bmj.com/content/2018/bcr-2017-222236 - Postoperative neuropathic pain exacerbated by movement is poorly understood and difficult to treat but a relatively common complication of surgical procedures such as endoscopic thoracic sympathectomy. Here, we describe a case of unexpected, immediate, complete and sustained remission of postoperative intercostal neuralgia after the patient engaged in an open-water swim in markedly cold conditions.
neuroscience  neurology  cold  shower  exposure  mental  health  immune  system  evidence-based  medicine  lymphatic 
february 2018 by asterisk2a
Making government services more efficient: Introducing the 'evidence tool kit' - AEI
These small, behind-the-scenes steps toward better policy execution can make a significant difference.
social-services  data  evidence-based  studies  welfare  benefits 
january 2018 by capcrime
RT : So some overgrown man is so threatened by and -based reason that he literally just erase…
fetus  diversity  evidence-based  from twitter_favs
december 2017 by kohlmannj
So some overgrown man is so threatened by and -based reason that he literally just erase…
diversity  evidence-based  fetus  from twitter_favs
december 2017 by amerberg
