
How I Choose What To Read — David Perell
READING HEURISTICS
1. TRUST RECOMMENDATIONS — BUT NOT TOO MUCH
2. TAME THE THRILLERS
3. BLEND A BIZARRE BOWL
4. TRUST THE LINDY EFFECT
5. FAVOR BIOGRAPHIES OVER SELF-HELP
unaffiliated  advice  reflection  checklists  metabuch  learning  studying  info-foraging  skeleton  books  heuristic  contrarianism  ubiquity  time  track-record  thinking  blowhards  bret-victor  worrydream  list  top-n  recommendations  arbitrage  trust  aphorism 
5 days ago by nhaliday
Software Testing Anti-patterns | Hacker News
I haven't read this yet, but both the article and the commentary/discussion look interesting at a glance

hmm: https://news.ycombinator.com/item?id=16896390
In small companies where there is no time to "waste" on tests, my view is that 80% of the problems can be caught with 20% of the work by writing integration tests that cover large areas of the application. Writing unit tests would be ideal, but time-consuming. For a web project, that would involve testing all pages for HTTP 200 (< 1 hour bash script that will catch most major bugs), automatically testing most interfaces to see if filling data and clicking "save" works. Of course, for very important/dangerous/complex algorithms in the code, unit tests are useful, but generally, that represents a very low fraction of a web application's code.
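
[ed.: the "< 1 hour" smoke test is easy to picture. A minimal sketch in Python rather than bash; the base URL and page list are hypothetical placeholders.]

```python
# Smoke test: every page should return HTTP 200.
import sys
import requests

BASE = "https://example.com"  # hypothetical app under test
PAGES = ["/", "/login", "/users", "/settings", "/reports"]

failures = []
for path in PAGES:
    resp = requests.get(BASE + path, timeout=10)
    if resp.status_code != 200:
        failures.append((path, resp.status_code))

for path, code in failures:
    print(f"FAIL {path}: HTTP {code}")
sys.exit(1 if failures else 0)
```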
hn  commentary  techtariat  discussion  programming  engineering  methodology  best-practices  checklists  thinking  correctness  api  interface-compatibility  jargon  list  metabuch  objektbuch  workflow  documentation  debugging  span-cover  checking  metrics  abstraction  within-without  characterization  error  move-fast-(and-break-things)  minimum-viable  efficiency  multi  poast  pareto  coarse-fine 
5 weeks ago by nhaliday
What do executives do, anyway? - apenwarr
To paraphrase the book, the job of an executive is: to define and enforce culture and values for their whole organization, and to ratify good decisions.

That's all.

Not to decide. Not to break ties. Not to set strategy. Not to be the expert on every, or any topic. Just to sit in the room while the right people make good decisions in alignment with their values. And if they do, to endorse it. And if they don't, to send them back to try again.

There's even an algorithm for this.
techtariat  business  sv  tech  entrepreneurialism  management  startups  books  review  summary  culture  info-dynamics  strategy  hi-order-bits  big-picture  thinking  checklists  top-n  responsibility  organizing 
7 weeks ago by nhaliday
Learning to learn | jiasi
It might sound a bit stupid, but I just realized that a better reading strategy could help me learn faster, almost three times as fast as before.

To enter a research field, we sometimes have to read tens of research papers. We could alternatively read summaries like textbooks and survey papers, which are generally more comprehensive and more friendly for non-experts. But some fields don’t have good summaries out there, for reasons like the fields being too new, too narrow, or too broad.

...

Part 1. Taking good notes and keeping them organized.

Where we store information greatly affects how we access it. If we can always easily find some information — from Google or our own notes — then we can pick it up quickly, even after forgetting it. This observation can make us smarter.

Let’s do the same when reading papers. Now I keep searchable notes as follows:
- For every topic, create a document that contains the notes for all papers on this topic.[1]
- For each paper, take these notes: summaries, quotes, and sufficient bibliographic information for future lookup.[2, pages 95-99]
- When reading a new paper, if it cites a paper that I have already read, review the notes for the cited paper. Update the notes as needed.
This way, we won’t lose what we have read and learned.

Part 2. Skipping technical sections 93% of the time.

Only 7% of readers of a paper will read its technical sections.[1] Thus, if we want to read like the average reader, it might make sense to skip the technical sections of roughly 93% of the papers that we read. For example, consider reading each paper like this:
- Read only the big-picture sections — abstract, introduction, and conclusion;
- Scan the technical sections — figures, tables, and the first and last paragraphs of each section[2, pages 76-77] — to check for surprises;
- Take notes;
- Done!
In theory, the roughly 7% of papers that we still read carefully would be exactly those that we really have to know well.
techtariat  scholar  academia  meta:research  notetaking  studying  learning  grad-school  phd  reflection  meta:reading  prioritizing  quality  writing  technical-writing  growth  checklists  metabuch  advice 
11 weeks ago by nhaliday
python - Does pandas iterrows have performance issues? - Stack Overflow
Generally, iterrows should only be used in very very specific cases. This is the general order of precedence for performance of various operations:

1) vectorization
2) using a custom cython routine
3) apply
   a) reductions that can be performed in cython
   b) iteration in python space
4) itertuples
5) iterrows
6) updating an empty frame (e.g. using loc one-row-at-a-time)
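
[ed.: a rough illustration of that ordering on a toy frame (a sketch, not a benchmark; on ~100k rows the vectorized version is typically orders of magnitude faster than iterrows):]

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"a": np.random.rand(100_000),
                   "b": np.random.rand(100_000)})

# 1) vectorization: whole-column operation, runs in compiled code
s1 = df["a"] + df["b"]

# 3b) apply with a Python function: row-at-a-time, much slower
s3 = df.apply(lambda row: row["a"] + row["b"], axis=1)

# 4) itertuples: plain Python loop over lightweight namedtuples
s4 = [row.a + row.b for row in df.itertuples(index=False)]

# 5) iterrows: constructs a Series for every row, slowest of the loops
s5 = [row["a"] + row["b"] for _, row in df.iterrows()]
```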
q-n-a  stackex  programming  python  libraries  gotchas  data-science  sci-comp  performance  checklists  objektbuch  best-practices  DSL  frameworks 
may 2019 by nhaliday
c++ - Debugging template instantiations - Stack Overflow
Yes, there is a template metaprogramming debugger: Templight.

https://github.com/mikael-s-persson/templight
--
Seems to be dead now, though :( [ed.: Partially true. They've merged pull requests recently, though.]
--
Metashell is still in active development though: github.com/metashell/metashell
q-n-a  stackex  nitty-gritty  pls  types  c(pp)  debugging  devtools  tools  programming  howto  advice  checklists  multi  repo  wire-guided  static-dynamic  compilers  performance  measurement  time  latency-throughput 
may 2019 by nhaliday
Philip Guo - Research Design Patterns
List of ways to generate research directions. Some are pretty specific to applied CS.
techtariat  nibble  academia  meta:research  scholar  cs  systems  list  top-n  checklists  ideas  creative  frontier  memes(ew)  info-dynamics  innovation  novelty  the-trenches  tactics 
may 2019 by nhaliday
When to use C over C++, and C++ over C? - Software Engineering Stack Exchange
You pick C when
- you need portable assembler (which is what C is, really) for whatever reason,
- your platform doesn't provide C++ (a C compiler is much easier to implement),
- you need to interact with other languages that can only interact with C (usually the lowest common denominator on any platform) and your code consists of little more than the interface, making it not worthwhile to lay a C interface over C++ code,
- you hack in an Open Source project (many of which, for various reasons, stick to C),
- you don't know C++.
In all other cases you should pick C++.

--

At the same time, I have to say that @Toll's answers (for one obvious example) have things just about backwards in most respects. Reasonably written C++ will generally be at least as fast as C, and often at least a little faster. Readability is generally much better, if only because you don't get buried in an avalanche of all the code for even the most trivial algorithms and data structures, all the error handling, etc.

...

As it happens, C and C++ are fairly frequently used together on the same projects, maintained by the same people. This allows something that's otherwise quite rare: a study that directly, objectively compares the maintainability of code written in the two languages by people who are equally competent overall (i.e., the exact same people). At least in the linked study, one conclusion was clear and unambiguous: "We found that using C++ instead of C results in improved software quality and reduced maintenance effort..."

--

(Side-note: Check out Linus Torvalds' rant on why he prefers C to C++. I don't necessarily agree with his points, but it gives you insight into why people might choose C over C++. Rather, people who agree with him might choose C for these reasons.)

http://harmful.cat-v.org/software/c++/linus

Why would anybody use C over C++? [closed]: https://stackoverflow.com/questions/497786/why-would-anybody-use-c-over-c
Joel's answer gives good reasons why you might have to use C, though there are a few others:
- You must meet industry guidelines, which are easier to prove and test for in C.
- You have tools to work with C, but not C++ (think not just about the compiler, but all the support tools, coverage, analysis, etc)
- Your target developers are C gurus
- You're writing drivers, kernels, or other low level code
- You know the C++ compiler isn't good at optimizing the kind of code you need to write
- Your app not only doesn't lend itself to be object oriented, but would be harder to write in that form

In some cases, though, you might want to use C rather than C++:
- You want the performance of assembler without the trouble of coding in assembler (C++ is, in theory, capable of 'perfect' performance, but the compilers aren't as good at seeing optimizations a good C programmer will see)
- The software you're writing is trivial, or nearly so - whip out the tiny C compiler, write a few lines of code, compile and you're all set - no need to open a huge editor with helpers, no need to write practically empty and useless classes, deal with namespaces, etc. You can do nearly the same thing with a C++ compiler and simply use the C subset, but the C++ compiler is slower, even for tiny programs.
- You need extreme performance or small code size, and know the C++ compiler will actually make it harder to accomplish due to the size and performance of the libraries
- You contend that you could just use the C subset and compile with a C++ compiler, but you'll find that if you do that you'll get slightly different results depending on the compiler.

Regardless, if you're doing that, you're using C. Is your question really "Why don't C programmers use C++ compilers?" If it is, then you either don't understand the language differences, or you don't understand compiler theory.

--

- Because they already know C
- Because they're building an embedded app for a platform that only has a C compiler
- Because they're maintaining legacy software written in C
- You're writing something on the level of an operating system, a relational database engine, or a retail 3D video game engine.
q-n-a  stackex  programming  engineering  pls  best-practices  impetus  checklists  c(pp)  systems  assembly  compilers  hardware  embedded  oss  links  study  evidence-based  devtools  performance  rant  expert-experience  types  blowhards  linux  git  vcs  debate  rhetoric  worse-is-better/the-right-thing  cracker-prog  multi  metal-to-virtual  interface-compatibility 
may 2019 by nhaliday
its-not-software - steveyegge2
You don't work in the software industry.

...

So what's the software industry, and how do we differ from it?

Well, the software industry is what you learn about in school, and it's what you probably did at your previous company. The software industry produces software that runs on customers' machines — that is, software intended to run on a machine over which you have no control.

So it includes pretty much everything that Microsoft does: Windows and every application you download for it, including your browser.

It also includes everything that runs in the browser, including Flash applications, Java applets, and plug-ins like Adobe's Acrobat Reader. Their deployment model is a little different from the "classic" deployment models, but it's still software that you package up and release to some unknown client box.

...

Servware

Our industry is so different from the software industry, and it's so important to draw a clear distinction, that it needs a new name. I'll call it Servware for now, lacking anything better. Hardware, firmware, software, servware. It fits well enough.

Servware is stuff that lives on your own servers. I call it "stuff" advisedly, since it's more than just software; it includes configuration, monitoring systems, data, documentation, and everything else you've got there, all acting in concert to produce some observable user experience on the other side of a network connection.
techtariat  sv  tech  rhetoric  essay  software  saas  devops  engineering  programming  contrarianism  list  top-n  best-practices  applicability-prereqs  desktop  flux-stasis  homo-hetero  trends  games  thinking  checklists  dbs  models  communication  tutorial  wiki  integration-extension  frameworks  api  whole-partial-many  metrics  retrofit  c(pp)  pls  code-dive  planning  working-stiff  composition-decomposition  libraries  conceptual-vocab  amazon  system-design  cracker-prog  tech-infrastructure  blowhards  client-server 
may 2019 by nhaliday
unix - How can I profile C++ code running on Linux? - Stack Overflow
If your goal is to use a profiler, use one of the suggested ones.

However, if you're in a hurry and you can manually interrupt your program under the debugger while it's being subjectively slow, there's a simple way to find performance problems.

Just halt it several times, and each time look at the call stack. If there is some code that is wasting some percentage of the time, 20% or 50% or whatever, that is the probability that you will catch it in the act on each sample. So that is roughly the percentage of samples on which you will see it. There is no educated guesswork required. If you do have a guess as to what the problem is, this will prove or disprove it.

You may have multiple performance problems of different sizes. If you clean out any one of them, the remaining ones will take a larger percentage, and be easier to spot, on subsequent passes. This magnification effect, when compounded over multiple problems, can lead to truly massive speedup factors.

Caveat: Programmers tend to be skeptical of this technique unless they've used it themselves. They will say that profilers give you this information, but that is only true if they sample the entire call stack, and then let you examine a random set of samples. (The summaries are where the insight is lost.) Call graphs don't give you the same information, because they don't summarize at the instruction level, and they give confusing summaries in the presence of recursion.
They will also say it only works on toy programs, when actually it works on any program, and it seems to work better on bigger programs, because they tend to have more problems to find. They will say it sometimes finds things that aren't problems, but that is only true if you see something once. If you see a problem on more than one sample, it is real.
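
[ed.: the same stack-sampling idea, sketched in pure Python: a background thread repeatedly grabs the main thread's stack and tallies which frames keep showing up. A toy stand-in for "halt it in the debugger and read the stack"; everything here is hypothetical demo code.]

```python
import collections, sys, threading, time, traceback

samples = collections.Counter()
done = threading.Event()

def sampler(main_id, interval=0.01):
    while not done.is_set():
        frame = sys._current_frames()[main_id]
        stack = traceback.extract_stack(frame)[-3:]      # innermost frames
        samples[tuple(f"{f.name}:{f.lineno}" for f in stack)] += 1
        time.sleep(interval)

def slow():                        # deliberately wasteful hot spot
    return sum(i * i for i in range(20_000))

def work():
    return [slow() for _ in range(500)]

t = threading.Thread(target=sampler, args=(threading.get_ident(),))
t.start()
work()
done.set()
t.join()

for stack, n in samples.most_common(3):  # the hot spot dominates the tally
    print(n, " <- ".join(reversed(stack)))
```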

http://poormansprofiler.org/

gprof, Valgrind and gperftools - an evaluation of some tools for application level CPU profiling on Linux: http://gernotklingler.com/blog/gprof-valgrind-gperftools-evaluation-tools-application-level-cpu-profiling-linux/
gprof is the dinosaur among the evaluated profilers - its roots go back into the 1980s. It seems it was widely used and a good solution during the past decades. But its limited support for multi-threaded applications, the inability to profile shared libraries, and the need for recompilation with compatible compilers and special flags that produce a considerable runtime overhead make it unsuitable for today's real-world projects.

Valgrind delivers the most accurate results and is well suited for multi-threaded applications. It’s very easy to use and there is KCachegrind for visualization/analysis of the profiling data, but the slow execution of the application under test disqualifies it for larger, longer running applications.

The gperftools CPU profiler has very little runtime overhead, provides some nice features like selectively profiling certain areas of interest, and has no problem with multi-threaded applications. KCachegrind can be used to analyze the profiling data. Like all sampling-based profilers, it suffers from statistical inaccuracy and therefore the results are not as accurate as with Valgrind, but practically that's usually not a big problem (you can always increase the sampling frequency if you need more accurate results). I'm using this profiler on a large code-base and from my personal experience I can definitely recommend using it.
q-n-a  stackex  programming  engineering  performance  devtools  tools  advice  checklists  hacker  nitty-gritty  tricks  lol  multi  unix  linux  techtariat  analysis  comparison  recommendations  software  measurement  oly-programming  concurrency  debugging  metabuch 
may 2019 by nhaliday
Teach debugging
A friend of mine and I couldn't understand why some people were having so much trouble; the material seemed like common sense. The Feynman Method was the only tool we needed.

1. Write down the problem
2. Think real hard
3. Write down the solution

The Feynman Method failed us on the last project: the design of a divider, a real-world-scale project an order of magnitude more complex than anything we'd been asked to tackle before. On the day he assigned the project, the professor exhorted us to begin early. Over the next few weeks, we heard rumors that some of our classmates worked day and night without making progress.

...

And then, just after midnight, a number of our newfound buddies from dinner reported successes. Half of those who started from scratch had working designs. Others were despondent, because their design was still broken in some subtle, non-obvious way. As I talked with one of those students, I began poring over his design. And after a few minutes, I realized that the Feynman method wasn't the only way forward: it should be possible to systematically apply a mechanical technique repeatedly to find the source of our problems. Beneath all the abstractions, our projects consisted purely of NAND gates (woe to those who dug around our toolbox enough to uncover dynamic logic), which outputs a 0 only when both inputs are 1. If the correct output is 0, both inputs should be 1. The input that isn't is in error, an error that is, itself, the output of a NAND gate where at least one input is 0 when it should be 1. We applied this method recursively, finding the source of all the problems in both our designs in under half an hour.
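
[ed.: that mechanical technique is easy to make concrete. A toy sketch in Python, with a hypothetical three-gate circuit: if a gate's output is wrong but all its inputs are right, the fault is that gate; otherwise recurse into a wrong input.]

```python
def nand(a, b):
    return 0 if (a and b) else 1

# gate name -> its two inputs (gate names or primary inputs x, y, z)
circuit = {
    "g1": ("x", "y"),
    "g2": ("y", "z"),
    "g3": ("g1", "g2"),
}

def find_fault(gate, observed, expected):
    """Walk back from a wrong output to the gate that is actually broken."""
    for inp in circuit.get(gate, ()):          # primary inputs have no entry
        if observed[inp] != expected[inp]:
            return find_fault(inp, observed, expected)
    return gate        # every input is correct, so the fault is right here

inputs = {"x": 0, "y": 1, "z": 0}
expected = {**inputs, "g1": 1, "g2": 1, "g3": 0}   # what the design should do
observed = {**inputs, "g1": 1, "g2": 0, "g3": 1}   # g2 is stuck at 0

print(find_fault("g3", observed, expected))        # -> g2
```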

How To Debug Any Program: https://www.blinddata.com/blog/how-to-debug-any-program-9
May 8th 2019 by Saketh Are

Start by Questioning Everything

...

When a program is behaving unexpectedly, our attention tends to be drawn first to the most complex portions of the code. However, mistakes can come in all forms. I've personally been guilty of rushing to debug sophisticated portions of my code when the real bug was that I forgot to read in the input file. In the following section, we'll discuss how to reliably focus our attention on the portions of the program that need correction.

Then Question as Little as Possible

Suppose that we have a program and some input on which its behavior doesn’t match our expectations. The goal of debugging is to narrow our focus to as small a section of the program as possible. Once our area of interest is small enough, the value of the incorrect output that is being produced will typically tell us exactly what the bug is.

In order to catch the point at which our program diverges from expected behavior, we must inspect the intermediate state of the program. Suppose that we select some point during execution of the program and print out all values in memory. We can inspect the results manually and decide whether they match our expectations. If they don't, we know for a fact that we can focus on the first half of the program. It either contains a bug, or our expectations of what it should produce were misguided. If the intermediate state does match our expectations, we can focus on the second half of the program. It either contains a bug, or our understanding of what input it expects was incorrect.
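
[ed.: a minimal sketch of that halving strategy, assuming the program can be modeled as a pipeline of stages and that looks_right encodes our expectations about intermediate state (and that the input looks right while the final output doesn't); both are hypothetical stand-ins.]

```python
def run_prefix(stages, x, k):
    """Run the first k stages on input x."""
    for stage in stages[:k]:
        x = stage(x)
    return x

def first_bad_stage(stages, x0, looks_right):
    # invariant: state after lo stages looks right, state after hi doesn't
    lo, hi = 0, len(stages)
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if looks_right(run_prefix(stages, x0, mid)):
            lo = mid        # divergence happens somewhere in stages[mid:]
        else:
            hi = mid        # divergence happens somewhere in stages[:mid]
    return lo               # stages[lo] is the first stage that breaks things

stages = [lambda x: x + 1, lambda x: x * 2, lambda x: x - 100, lambda x: x + 3]
print(first_bad_stage(stages, 5, looks_right=lambda x: x >= 0))    # -> 2
```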

Question Things Efficiently

For practical purposes, inspecting intermediate state usually doesn't involve a complete memory dump. We'll typically print a small number of variables and check whether they have the properties we expect of them. Verifying the behavior of a section of code involves:

1. Before it runs, inspecting all values in memory that may influence its behavior.
2. Reasoning about the expected behavior of the code.
3. After it runs, inspecting all values in memory that may be modified by the code.

Reasoning about expected behavior is typically the easiest step to perform even in the case of highly complex programs. Practically speaking, it's time-consuming and mentally strenuous to write debug output into your program and to read and decipher the resulting values. It is therefore advantageous to structure your code into functions and sections that pass a relatively small amount of information between themselves, minimizing the number of values you need to inspect.

...

Finding the Right Question to Ask

We’ve assumed so far that we have available a test case on which our program behaves unexpectedly. Sometimes, getting to that point can be half the battle. There are a few different approaches to finding a test case on which our program fails. It is reasonable to attempt them in the following order:

1. Verify correctness on the sample inputs.
2. Test additional small cases generated by hand.
3. Adversarially construct corner cases by hand.
4. Re-read the problem to verify understanding of input constraints.
5. Design large cases by hand and write a program to construct them.
6. Write a generator to construct large random cases and a brute force oracle to verify outputs.
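
[ed.: step 6 is the classic stress-test loop from competitive programming. A minimal sketch; the two solutions (a slow oracle and a fast solution under test, here for maximum subarray sum) are hypothetical stand-ins.]

```python
import random

def brute_force(xs):
    """Slow but obviously correct oracle."""
    return max(sum(xs[i:j]) for i in range(len(xs))
                            for j in range(i + 1, len(xs) + 1))

def fast(xs):
    """Solution under test: Kadane's algorithm."""
    best = cur = xs[0]
    for x in xs[1:]:
        cur = max(x, cur + x)
        best = max(best, cur)
    return best

def gen_case(rng):
    n = rng.randint(1, 8)            # small cases make bugs easy to localize
    return [rng.randint(-10, 10) for _ in range(n)]

rng = random.Random(0)
for _ in range(10_000):
    xs = gen_case(rng)
    if fast(xs) != brute_force(xs):
        print("counterexample:", xs, fast(xs), brute_force(xs))
        break
else:
    print("no mismatch in 10,000 random cases")
```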
techtariat  dan-luu  engineering  programming  debugging  IEEE  reflection  stories  education  higher-ed  checklists  iteration-recursion  divide-and-conquer  thinking  ground-up  nitty-gritty  giants  feynman  error  input-output  structure  composition-decomposition  abstraction  systematic-ad-hoc  reduction  teaching  state  correctness  multi  oly  oly-programming  metabuch  neurons  problem-solving  wire-guided  marginal  strategy  tactics  methodology  simplification-normalization 
may 2019 by nhaliday
Becoming a Man - Quillette
written by William Buckner

“In the puberty rites, the novices are made aware of the sacred value of food and assume the adult condition; that is, they no longer depend on their mothers and on the labor of others for nourishment. Initiation, then, is equivalent to a revelation of the sacred, of death, sexuality, and the struggle for food. Only after having acquired these dimensions of human existence does one become truly a man.” – Mircea Eliade, Rites and Symbols of Initiation: The Mysteries of Birth and Rebirth, 1958

“To be a man in most of the societies we have looked at, one must impregnate women, protect dependents from danger, and provision kith and kin.” – David D. Gilmore, Manhood in the Making, 1990

“Keep your head clear and know how to suffer like a man.” – Ernest Hemingway, The Old Man and the Sea, 1952

There are commonalities of human behavior that extend beyond any geographic or cultural boundary. Every known society has a sexual division of labor – many facets of which are ubiquitous the world over. Some activities are universally considered to be primarily, or exclusively, the responsibility of men, such as hunting large mammals, metalworking, and warfare. Other activities, such as caregiving, cooking, and preparing vegetable foods, are nearly always considered primarily the responsibility of women.

...

Across vastly different societies, with very dissimilar political systems, it is often similar sets of skills that are considered desirable for their (predominately male) leaders. A man can gain status through displays of key talents; through his ability to persuade; by developing and maintaining important social relationships; and by solving difficult problems. In his classic paper on the political systems of ‘egalitarian’ small-scale societies, anthropologist Christopher Boehm writes, “a good leader seems to be generous, brave in combat, wise in making subsistence or military decisions, apt at resolving intragroup conflicts, a good speaker, fair, impartial, tactful, reliable, and morally upright.” In his study on the Mardu hunter-gatherers of Australia, anthropologist Robert Tonkinson wrote that the highest status was given to the “cooks,” which is the title given to “the older men who prepare the many different ceremonial feasts, act as advisors and directors of most rituals (and perform the most important “big” dances), and are guardians of the caches of sacred objects.”

Anthropologist Paul Roscoe writes that some of the important skills of ‘Big Men’ in New Guinea horticulturist societies are, “courage and proficiency in war or hunting; talented oratory; ability in mediation and organization; a gift for singing, dancing, wood carving, and/or graphic artistry; the ability to transact pigs and wealth; ritual expertise; and so on.” In the volume Cooperation and Collective Action (2012), Roscoe notes further that the traits that distinguish a ‘Big Man’ are “his skills in…conflict resolution; his charisma, diplomacy, ability to plan, industriousness, and intelligence” and “his abilities in political manipulation.” In their paper on ‘The Big Man Mechanism,’ anthropologist Joseph Henrich and his colleagues describe the common pathways to status found across cultures, noting that, “In small-scale societies, the domains associated with prestige include hunting, oratory, shamanic knowledge and combat.”

...

In his book How Can I Get Through To You? (2002), author Terrence Real describes visiting a remote village of Maasai pastoralists in Tanzania. Real asked the village elders (all male) what makes a good warrior and a good man. After a vibrant discussion, one of the oldest males stood up and told Real;

I refuse to tell you what makes a good morani [warrior]. But I will tell you what makes a great morani. When the moment calls for fierceness, a good morani is very ferocious. And when the moment calls for kindness, a good morani is utterly tender. Now, what makes a great morani is knowing which moment is which! (Real, 64)

This quote is also favorably cited by feminist author bell hooks in her book The Will to Change (2004). While hooks and Real offer perspectives quite different from my approach here, the words of the Maasai elder illustrate an ideal conception of masculinity that may appeal to many people of diverse ideologies and cultural backgrounds. A great warrior, a great man, is discerning – not needlessly hostile nor chronically deferential, he instead recognizes the responsibilities of both defending, and caring for, his friends and family.

...

As anthropologist David D. Gilmore notes in Manhood in the Making, exhortations such as "be a man" are common across societies throughout the world. Such remarks represent the recognition that being a man came with a set of duties and responsibilities. If men failed to stay cool under pressure in the midst of hunting or warfare, and thus failed to provide for, or protect, their families and allies, this would have been devastating to their societies.

Throughout our evolutionary history, the cultures that had a sexual division of labor, and socialized males to help provide for and protect the group, would have had a better chance at survival, and would have outcompeted those societies that failed to instill such values.

Some would argue, quite reasonably, that in contemporary, industrialized, democratic societies, values associated with hunting and warfare are outmoded. Gilmore writes that, “So long as there are battles to be fought, wars to be won, heights to be scaled, hard work to be done, some of us will have to “act like men.”” Yet the challenges of modern societies for most people are often very different from those that occurred throughout much of our history.

Still, some common components of the traditional, idealized masculine identity I describe here may continue to be useful in the modern era, such as providing essential resources for the next generation of children, solving social conflicts, cultivating useful, practical skills, and obtaining socially valuable knowledge. Obviously, these traits are not, and need not be, restricted to men. But when it comes to teaching the next generation of young males what socially responsible masculinity looks like, it might be worth keeping these historical contributions in mind. Not as a standard that one should necessarily feel unduly pressured by, but as a set of productive goals and aspirations that can aid in personal development and social enrichment.

The Behavioral Ecology of Male Violence: http://quillette.com/2018/02/24/behavioral-ecology-male-violence/

“Aggressive competition for access to mates is much more beneficial for human males than for females…” ~Georgiev et al. [1]

...

To understand why this pattern is so consistent across a wide variety of culturally and geographically diverse societies, we need to start by looking at sex differences in reproductive biology.

Biologically, individuals that produce small, relatively mobile gametes (sex cells), such as sperm or pollen, are defined as male, while individuals that produce larger, less mobile gametes, such as eggs or ovules, are defined as female. Consequently, males tend to have more variance in reproductive success than females, and a greater potential reproductive output. The Emperor of Morocco, Moulay Ismael the Bloodthirsty (1672–1727), was estimated to have fathered 1171 children from 500 women over the course of 32 years,[6] while the maximum recorded number of offspring for a woman is 69, attributed to an unnamed 18th century Russian woman married to a man named Feodor Vassilyev.

[data]

Across a wide variety of taxa, the sex that produces smaller, mobile gametes tends to invest less in parental care than the sex that produces larger, less mobile gametes. For over 90 percent of mammalian species, male investment in their offspring ends at conception, and they provide no parental care thereafter.[7] A male mammal can often increase his reproductive success by seeking to maximize mating opportunities with females, and engaging in violent competition with rival males to do so. From a fitness perspective, it may be wasteful for a male to provide parental care, as it limits his reproductive output by reducing the time and energy he spends competing for mates.
news  org:mag  org:popup  letters  scitariat  gender  gender-diff  fashun  status  peace-violence  war  alien-character  social-structure  anthropology  sapiens  meaningness  planning  long-term  parenting  big-peeps  old-anglo  quotes  stereotypes  labor  farmers-and-foragers  properties  food  ritual  s-factor  courage  martial  vitality  virtu  aristos  uncertainty  outcome-risk  conquest-empire  leadership  impro  iq  machiavelli  dark-arts  henrich  religion  theos  europe  gallic  statesmen  politics  government  law  honor  civil-liberty  sociality  temperance  patience  responsibility  reputation  britain  optimate  checklists  advice  stylized-facts  prudence  EEA  evopsych  management  track-record  competition  coalitions  personality  links  multi  nature  model-organism  sex  deep-materialism  eden  moments  male-variability  fertility  developmental  investing  ecology  EGT  humanity  energy-resources  cooperate-defect  flexibility 
april 2018 by nhaliday
The idea of empire in the "Aeneid" on JSTOR
http://latindiscussion.com/forum/latin/to-rule-mankind-and-make-the-world-obey.11016/
Let's see...Aeneid, Book VI, ll. 851-853:

tu regere imperio populos, Romane, memento
(hae tibi erunt artes), pacique imponere morem,
parcere subiectis et debellare superbos.'

Which Dryden translated as:
To rule mankind, and make the world obey,
Disposing peace and war by thy own majestic way;
To tame the proud, the fetter'd slave to free:
These are imperial arts, and worthy thee."

If you wanted a literal translation,
"You, Roman, remember to rule people by command
(these were arts to you), and impose the custom to peace,
to spare the subjected and to vanquish the proud."

I don't want to derail your thread but pacique imponere morem -- "to impose the custom to peace"
Does it mean "be the toughest kid on the block," as in Pax Romana?

...

That 17th century one is a loose translation indeed. Myself I'd put it as

"Remember to rule over (all) the (world's) races by means of your sovereignty, oh Roman, (for indeed) you (alone) shall have the means (to do so), and to inculcate the habit of peace, and to have mercy on the enslaved and to destroy the arrogant."

http://classics.mit.edu/Virgil/aeneid.6.vi.html
And thou, great hero, greatest of thy name,
Ordain'd in war to save the sinking state,
And, by delays, to put a stop to fate!
Let others better mold the running mass
Of metals, and inform the breathing brass,
And soften into flesh a marble face;
Plead better at the bar; describe the skies,
And when the stars descend, and when they rise.
But, Rome, 't is thine alone, with awful sway,
To rule mankind, and make the world obey,
Disposing peace and war by thy own majestic way;
To tame the proud, the fetter'd slave to free:
These are imperial arts, and worthy thee."
study  article  letters  essay  pdf  piracy  history  iron-age  mediterranean  the-classics  big-peeps  literature  aphorism  quotes  classic  alien-character  sulla  poetry  conquest-empire  civilization  martial  vitality  peace-violence  order-disorder  domestication  courage  multi  poast  universalism-particularism  world  leviathan  foreign-lang  nascent-state  canon  org:junk  org:edu  tradeoffs  checklists  power  strategy  tactics  paradox  analytical-holistic  hari-seldon  aristos  wisdom  janus  parallax 
january 2018 by nhaliday
The Gelman View – spottedtoad
I have read Andrew Gelman's blog for about five years, and gradually, I've decided that among his many blog posts and hundreds of academic articles, he is advancing a philosophy not just of statistics but of quantitative social science in general. I am not a statistician myself, but here is how I would articulate the Gelman View:

A. Purposes

1. The purpose of social statistics is to describe and understand variation in the world. The world is a complicated place, and we shouldn’t expect things to be simple.
2. The purpose of scientific publication is to allow for communication, dialogue, and critique, not to “certify” a specific finding as absolute truth.
3. The incentive structure of science needs to reward attempts to independently investigate, reproduce, and refute existing claims and observed patterns, not just to advance new hypotheses or support a particular research agenda.

B. Approach

1. Because the world is complicated, the most valuable statistical models for the world will generally be complicated. The result of statistical investigations will only rarely be to give a stamp of truth on a specific effect or causal claim, but will generally show variation in effects and outcomes.
2. Whenever possible, the data, analytic approach, and methods should be made as transparent and replicable as possible, and should be fair game for anyone to examine, critique, or amend.
3. Social scientists should look to build upon a broad shared body of knowledge, not to “own” a particular intervention, theoretic framework, or technique. Such ownership creates incentive problems when the intervention, framework, or technique fail and the scientist is left trying to support a flawed structure.

C. Components

1. Measurement. How and what we measure is the first question, well before we decide on what the effects are or what is making that measurement change.
2. Sampling. Who we talk to or collect information from always matters, because we should always expect effects to depend on context.
3. Inference. While models should usually be complex, our inferential framework should be simple enough for anyone to follow along. And no p values.

He might disagree with all of this, or how it reflects his understanding of his own work. But I think it is a valuable guide to empirical work.
ratty  unaffiliated  summary  gelman  scitariat  philosophy  lens  stats  hypothesis-testing  science  meta:science  social-science  institutions  truth  is-ought  best-practices  data-science  info-dynamics  alt-inst  academia  empirical  evidence-based  checklists  strategy  epistemic 
november 2017 by nhaliday
Places, not Programs – spottedtoad
1. There has to be a place for people to go.
2. It has to be safe.
3. There preferably needs to be bathrooms and water available there.
Schools fulfill this list, which is one reason they are still among our few remaining sources of shared meaning and in-person community. As Chris Arnade has often remarked, McDonald's fast-food restaurants fulfill this list, and are therefore undervalued sources of community in low-income communities. (The young black guys in my Philadelphia AmeriCorps program would not-entirely-jokingly allude to McDonald's as the central hub of the weekend social/dating scene, where only one's most immaculate clothing - a brand-new shirt, purchased just for the occasion - would suffice.) Howard Schultz, for all his occasional bouts of madness, understood from the beginning that Starbucks would succeed by becoming a "third space" between work and home, which the coffee chain for all its faults has indubitably become for many people. Ivan Illich argued that the streets themselves in poor countries once acted as the same kind of collective commons, but no longer do.
ratty  unaffiliated  institutions  community  alt-inst  metabuch  rhetoric  contrarianism  policy  wonkish  realness  intervention  education  embodied  order-disorder  checklists  cost-disease 
november 2017 by nhaliday
functions - What are the use cases for different scoping constructs? - Mathematica Stack Exchange
As you mentioned there are many things to consider and a detailed discussion is possible. But here are some rules of thumb that I apply the majority of the time:

Module[{x}, ...] is the safest and may be needed if either

There are existing definitions for x that you want to avoid breaking during the evaluation of the Module, or
There is existing code that relies on x being undefined (for example code like Integrate[..., x]).
Module is also the only choice for creating and returning a new symbol. In particular, Module is sometimes needed in advanced Dynamic programming for this reason.

If you are confident there aren't important existing definitions for x or any code relying on it being undefined, then Block[{x}, ...] is often faster. (Note that, in a project entirely coded by you, being confident of these conditions is a reasonable "encapsulation" standard that you may wish to enforce anyway, and so Block is often a sound choice in these situations.)

With[{x = ...}, expr] is the only scoping construct that injects the value of x inside Hold[...]. This is useful and important. With can be either faster or slower than Block depending on expr and the particular evaluation path that is taken. With is less flexible, however, since you can't change the definition of x inside expr.
q-n-a  stackex  programming  CAS  trivia  howto  best-practices  checklists  pls  atoms 
november 2017 by nhaliday
design patterns - What is MVC, really? - Software Engineering Stack Exchange
The model manages fundamental behaviors and data of the application. It can respond to requests for information, respond to instructions to change the state of its information, and even to notify observers in event-driven systems when information changes. This could be a database, or any number of data structures or storage systems. In short, it is the data and data-management of the application.

The view effectively provides the user interface element of the application. It'll render data from the model into a form that is suitable for the user interface.

The controller receives user input and makes calls to model objects and the view to perform appropriate actions.

...

Though this answer has 21 upvotes, I find the sentence "This could be a database, or any number of data structures or storage systems. (tl;dr : it's the data and data-management of the application)" horrible. The model is the pure business/domain logic. And this can and should be so much more than data management of an application. I also differentiate between domain logic and application logic. A controller should not ever contain business/domain logic or talk to a database directly.
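
[ed.: a minimal sketch of the separation in Python, names hypothetical. Following the second commenter, the model owns the data and the domain rules, the view only renders, and the controller only routes user input to the other two.]

```python
class TodoModel:
    """Owns the data and the domain rules; knows nothing about display."""
    def __init__(self):
        self._items = []
        self._observers = []

    def subscribe(self, callback):
        self._observers.append(callback)

    def add(self, text):
        if not text.strip():
            raise ValueError("empty todo")    # domain rule lives in the model
        self._items.append(text)
        for notify in self._observers:        # event-driven: notify observers
            notify(self._items)

class TodoView:
    """Renders model data; holds no state and makes no decisions."""
    def render(self, items):
        for i, item in enumerate(items, 1):
            print(f"{i}. {item}")

class TodoController:
    """Receives user input and makes calls into the model."""
    def __init__(self, model, view):
        self.model = model
        model.subscribe(view.render)

    def handle_input(self, line):
        self.model.add(line)

model, view = TodoModel(), TodoView()
controller = TodoController(model, view)
controller.handle_input("write the report")
```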
q-n-a  stackex  explanation  concept  conceptual-vocab  structure  composition-decomposition  programming  engineering  best-practices  pragmatic  jargon  thinking  metabuch  working-stiff  tech  🖥  checklists  code-organizing  abstraction 
october 2017 by nhaliday
Living with Ignorance in a World of Experts
Another kind of track record that we might care about is not about the expert's performance, qua expert, but about her record of epistemic integrity. This will be important for helping provide reasonably well supported answers to (Q3) and (Q4) in particular. Anderson (2011) offers some related ideas in her discussion of “criteria for judging honesty” and “criteria for judging epistemic responsibility.” Things we might be interested in include the following:
• evidence of previous expert-related dishonesty (e.g. plagiarism, faking data)
• evidence of a record of misleading statements (e.g. cherry-picking data, quotations out of context)
• evidence of a record of misrepresenting views of expert opponents
• evidence of evasion of peer-review or refusal to allow other experts to assess work
• evidence of refusal to disclose data, methodology, or detailed results
• evidence of refusal to disclose results contrary to the expert’s own views
• evidence of “dialogic irrationality”: repeating claims after they have been publicly refuted, without responding to the refutations
• evidence of a record of “over-claiming” of expertise: claiming expertise beyond the expert’s domain of expertise
• evidence of a record of “lending” one’s expertise to support other individuals or institutions that themselves lack epistemic integrity in some of the above ways
• evidence of being an “opinion for hire”—offering expert testimony for pay, perhaps particularly if that testimony conflicts with other things the expert has said
pdf  essay  study  philosophy  rationality  epistemic  info-dynamics  westminster  track-record  checklists  list  tetlock  expert  info-foraging  sleuthin  metabuch  meta:rhetoric  integrity  honor  crooked  phalanges  truth  expert-experience  reason  decision-making 
september 2017 by nhaliday
Patrick McKenzie on Twitter: "It occurs to me that my hobby in writing letters about the Fair Credit Reporting Act is suddenly topical! So some quick opinionated advice:"
identity theft and credit monitoring guide (inspired by Equifax)

https://www.upguard.com/breaches/credit-crunch-national-credit-federation
https://twitter.com/WAWilsonIV/status/937086175969386496
I really think the only solution to this is Congress or the courts acting to create serious civil liability for data breaches:
Another way this would be good: companies having to count your personal information as a serious potential cost as well as a potential asset will make it rational for them to invade your privacy less.
techtariat  twitter  social  discussion  current-events  drama  personal-finance  debt  money  safety  planning  objektbuch  howto  checklists  human-bean  duty  multi  backup  law  policy  incentives  privacy  business  regulation  security  unaffiliated 
september 2017 by nhaliday
Medicine as a pseudoscience | West Hunter
The idea that venesection was a good thing, or at least not so bad, on the grounds that one in a few hundred people have hemochromatosis (in Northern Europe) reminds me of the people who don’t wear a seatbelt, since it would keep them from being thrown out of their convertible into a waiting haystack, complete with nubile farmer’s daughter. Daughters. It could happen. But it’s not the way to bet.

Back in the good old days, Charles II, age 53, had a fit one Sunday evening, while fondling two of his mistresses.

Monday they bled him (cupping and scarifying) of eight ounces of blood. Followed by an antimony emetic, vitriol in peony water, purgative pills, and a clyster. Followed by another clyster after two hours. Then syrup of blackthorn, more antimony, and rock salt. Next, more laxatives, white hellebore root up the nostrils. Powdered cowslip flowers. More purgatives. Then Spanish Fly. They shaved his head and stuck blistering plasters all over it, plastered the soles of his feet with tar and pigeon-dung, then said good-night.

...

Friday. The king was worse. He tells them not to let poor Nelly starve. They try the Oriental Bezoar Stone, and more bleeding. Dies at noon.

Most people didn’t suffer this kind of problem with doctors, since they never saw one. Charles had six. Now Bach and Handel saw the same eye surgeon, John Taylor – who blinded both of them. Not everyone can put that on his resume!

You may wonder how medicine continued to exist, if it had a negative effect, on the whole. There’s always the placebo effect – at least there would be, if it existed. Any real placebo effect is very small: I’d guess exactly zero. But there is regression to the mean. You see the doctor when you’re feeling worse than average – and afterwards, if he doesn’t kill you outright, you’re likely to feel better. Which would have happened whether you’d seen him or not, but they didn’t often do RCTs back in the day – I think James Lind was the first (1747).
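
[ed.: the regression-to-the-mean mechanism is easy to verify by simulation, a sketch under toy assumptions: health is a stable baseline plus daily noise, people "see the doctor" on unusually bad days, and the treatment does exactly nothing.]

```python
import random

random.seed(0)
improved = visits = 0
for _ in range(100_000):
    baseline = random.gauss(0, 1)              # stable individual health
    today = baseline + random.gauss(0, 1)      # health on the day of the visit
    if today < -1.5:                           # feeling much worse than usual
        visits += 1
        later = baseline + random.gauss(0, 1)  # next draw; no treatment effect
        if later > today:
            improved += 1

print(f"{improved / visits:.0%} of patients 'improved' after a useless treatment")
# prints roughly 80%: pure regression to the mean
```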

Back in the late 19th century, Christian Scientists did better than others when sick, because they didn’t believe in medicine. For reasons I think mistaken, because Mary Baker Eddy rejected the reality of the entire material world, but hey, it worked. Parenthetically, what triggered all that New Age nonsense in 19th century New England? Hash?

This did not change until fairly recently. Sometime in the early 20th century, medicine – clinical medicine, what doctors do – hit break-even. Now we can't do without it. I wonder if there are, or will be, other examples of such a pile of crap turning (mostly) into a real science.

good tweet: https://twitter.com/bowmanthebard/status/897146294191390720
The brilliant GP I've had for 35+ years has retired. How can I find another one who meets my requirements?

1 is overweight
2 drinks more than officially recommended amounts
3 has an amused, tolerant atitude to human failings
4 is well aware that we're all going to die anyway, & there are better or worse ways to die
5 has a healthy skeptical attitude to mainstream medical science
6 is wholly dismissive of “alternative” medicine
7 believes in evolution
8 thinks most diseases get better without intervention, & knows the dangers of false positives
9 understands the base rate fallacy

EconPapers: Was Civil War Surgery Effective?: http://econpapers.repec.org/paper/htrhcecon/444.htm
contra Greg Cochran:
To shed light on the subject, I analyze a data set created by Dr. Edmund Andrews, a Civil war surgeon with the 1st Illinois Light Artillery. Dr. Andrews’s data can be rendered into an observational data set on surgical intervention and recovery, with controls for wound location and severity. The data also admits instruments for the surgical decision. My analysis suggests that Civil War surgery was effective, and increased the probability of survival of the typical wounded soldier, with average treatment effect of 0.25-0.28.

Medical Prehistory: https://westhunt.wordpress.com/2016/03/14/medical-prehistory/
What ancient medical treatments worked?

https://westhunt.wordpress.com/2016/03/14/medical-prehistory/#comment-76878
In some very, very limited conditions, bleeding?
--
Bad for you 99% of the time.

https://westhunt.wordpress.com/2016/03/14/medical-prehistory/#comment-76947
Colchicine – used to treat gout – discovered by the Ancient Greeks.

https://westhunt.wordpress.com/2016/03/14/medical-prehistory/#comment-76973
Dracunculiasis (Guinea worm)
Wrap the emerging end of the worm around a stick and slowly pull it out.
(3,500 years later, this remains the standard treatment.)
https://en.wikipedia.org/wiki/Ebers_Papyrus

https://westhunt.wordpress.com/2016/03/14/medical-prehistory/#comment-76971
Some of the progress is from formal medicine, most is from civil engineering, better nutrition ( ag science and physical chemistry), less crowded housing.

Nurses vs doctors: https://westhunt.wordpress.com/2014/10/01/nurses-vs-doctors/
Medicine, the things that doctors do, was an ineffective pseudoscience until fairly recently. Until 1800 or so, they were wrong about almost everything. Bleeding, cupping, purging, the four humors – useless. In the 1800s, some began to realize that they were wrong, and became medical nihilists that improved outcomes by doing less. Some patients themselves came to this realization, as when Civil War casualties hid from the surgeons and had better outcomes. Sometime in the early 20th century, MDs reached break-even, and became an increasingly positive influence on human health. As Lewis Thomas said, medicine is the youngest science.

Nursing, on the other hand, has always been useful. Just making sure that a patient is warm and nourished when too sick to take care of himself has helped many survive. In fact, some of the truly crushing epidemics have been greatly exacerbated when there were too few healthy people to take care of the sick.

Nursing must be old, but it can’t have existed forever. Whenever it came into existence, it must have changed the selective forces acting on the human immune system. Before nursing, being sufficiently incapacitated would have been uniformly fatal – afterwards, immune responses that involved a period of incapacitation (with eventual recovery) could have been selectively favored.

when MDs broke even: https://westhunt.wordpress.com/2014/10/01/nurses-vs-doctors/#comment-58981
I’d guess the 1930s. Lewis Thomas thought that he was living through big changes. They had a working serum therapy for lobar pneumonia (antibody-based). They had many new vaccines (diphtheria in 1923, whooping cough in 1926, BCG and tetanus in 1927, yellow fever in 1935, typhus in 1937). Vitamins had been mostly worked out. Insulin was discovered in 1929. Blood transfusions. The sulfa drugs, the first broad-spectrum antibiotics, showed up in 1935.

DALYs per doctor: https://westhunt.wordpress.com/2018/01/22/dalys-per-doctor/
The disability-adjusted life year (DALY) is a measure of overall disease burden – the number of years lost. I’m wondering just how much harm premodern medicine did, per doctor. How many healthy years of life did a typical doctor destroy (net) in past times?

...

It looks as if the average doctor (in Western medicine) killed a bunch of people over his career ( when contrasted with doing nothing). In the Charles Manson class.

Eventually the market saw through this illusion. Only took a couple of thousand years.

https://westhunt.wordpress.com/2018/01/22/dalys-per-doctor/#comment-100741
That a very large part of healthcare spending is done for non-health reasons. He has a chapter on this in his new book, also check out his paper “Showing That You Care: The Evolution of Health Altruism” http://mason.gmu.edu/~rhanson/showcare.pdf
--
I ran into too much stupidity to finish the article. Hanson’s a loon. For example, when he talks about the paradox of blacks being sentenced more for drug offenses than whites although they use drugs at a similar rate. No paradox: guys go to the big house for dealing, not for using. Where does he live – Mars?

I had the same reaction when Hanson parroted some dipshit anthropologist arguing that the stupid things people do while drunk are due to social expectations, not really the alcohol.
Horseshit.

I don’t think that being totally unable to understand everybody around you necessarily leads to deep insights.

https://westhunt.wordpress.com/2018/01/22/dalys-per-doctor/#comment-100744
What I’ve wondered is if there was anything that doctors did that actually was helpful and if perhaps that little bit of success helped them fool people into thinking the rest of it helped.
--
Setting bones. Extracting arrows: spoon of Diocles. Colchicine for gout. Extracting the Guinea worm. Sometimes they got away with removing the stone. There must be others.
--
Quinine is relatively recent: post-1500. Obstetrical forceps also. Caesarean deliveries were almost always fatal to the mother until fairly recently.

Opium has been around for a long while : it works.

https://westhunt.wordpress.com/2018/01/22/dalys-per-doctor/#comment-100839
If pre-modern medicine was indeed worse than useless – how do you explain no one noticing that patients who get expensive treatments are worse off than those who didn’t?
--
were worse off. People are kinda dumb – you’ve noticed?
--
My impression is that while people may be “kinda dumb”, ancient customs typically aren’t.
Even if we assume that all people who lived prior to the 19th century were too dumb to make the rational observation, wouldn’t you expect this ancient practice to be subject to selective pressure?
--
Your impression is wrong. Do you think that there was some slick reason for Carthaginians incinerating their first-born?

Theodoric of York, bloodletting: https://www.youtube.com/watch?v=yvff3TViXmY

details on blood-letting and hemochromatosis: https://westhunt.wordpress.com/2018/01/22/dalys-per-doctor/#comment-100746

Starting Over: https://westhunt.wordpress.com/2018/01/23/starting-over/
Looking back on it, human health would have … [more]
west-hunter  scitariat  discussion  ideas  medicine  meta:medicine  science  realness  cost-benefit  the-trenches  info-dynamics  europe  the-great-west-whale  history  iron-age  the-classics  mediterranean  medieval  early-modern  mostly-modern  🌞  harvard  aphorism  rant  healthcare  regression-to-mean  illusion  public-health  multi  usa  northeast  pre-ww2  checklists  twitter  social  albion  ability-competence  study  cliometrics  war  trivia  evidence-based  data  intervention  effect-size  revolution  speculation  sapiens  drugs  antiquity  lived-experience  list  survey  questions  housing  population  density  nutrition  wiki  embodied  immune  evolution  poast  chart  markets  civil-liberty  randy-ayndy  market-failure  impact  scale  pro-rata  estimate  street-fighting  fermi  marginal  truth  recruiting  alt-inst  academia  social-science  space  physics  interdisciplinary  ratty  lesswrong  autism  👽  subculture  hanson  people  track-record  crime  criminal-justice  criminology  race  ethanol  error  video  lol  comedy  tradition  institutions  iq  intelligence  MENA  impetus  legacy 
august 2017 by nhaliday
Town square test - Wikipedia
In his book The Case for Democracy, published in 2004, Sharansky explains the term: "If a person cannot walk into the middle of the town square and express his or her views without fear of arrest, imprisonment, or physical harm, then that person is living in a fear society, not a free society. We cannot rest until every person living in a 'fear society' has finally won their freedom."[1]

Heckler's veto: https://en.wikipedia.org/wiki/Heckler%27s_veto
gedanken  checklists  politics  polisci  ideology  government  authoritarianism  antidemos  civil-liberty  civic  exit-voice  anarcho-tyranny  managerial-state  wiki  reference  russia  communism  democracy  quiz  power  multi  polarization  track-record  orwellian 
august 2017 by nhaliday
Kelly criterion - Wikipedia
In probability theory and intertemporal portfolio choice, the Kelly criterion, Kelly strategy, Kelly formula, or Kelly bet, is a formula used to determine the optimal size of a series of bets. In most gambling scenarios, and some investing scenarios under some simplifying assumptions, the Kelly strategy will do better than any essentially different strategy in the long run (that is, over a span of time in which the observed fraction of bets that are successful equals the probability that any given bet will be successful). It was described by J. L. Kelly, Jr, a researcher at Bell Labs, in 1956.[1] The practical use of the formula has been demonstrated.[2][3][4]

The Kelly Criterion is to bet a predetermined fraction of assets and can be counterintuitive. In one study,[5][6] each participant was given $25 and asked to bet on a coin that would land heads 60% of the time. Participants had 30 minutes to play, so could place about 300 bets, and the prizes were capped at $250. Behavior was far from optimal. "Remarkably, 28% of the participants went bust, and the average payout was just $91. Only 21% of the participants reached the maximum. 18 of the 61 participants bet everything on one toss, while two-thirds gambled on tails at some stage in the experiment." Using the Kelly criterion and based on the odds in the experiment, the right approach would be to bet 20% of the pot on each throw (see first example in Statement below). If losing, the size of the bet gets cut; if winning, the stake increases.
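
[ed.: for the even-money coin in that study the Kelly fraction is f* = p - q/b = 0.6 - 0.4/1 = 0.2, the 20% mentioned above. A quick sketch that computes it and replays the experiment; the cap handling is a crude model.]

```python
import random

def kelly_fraction(p, b=1.0):
    """Optimal bet fraction at b-to-1 net odds: f* = p - (1 - p) / b."""
    return p - (1 - p) / b

f = kelly_fraction(0.60)          # 0.2 for the 60% even-money coin
print(f"Kelly fraction: {f:.0%}")

random.seed(1)
bankroll = 25.0
for _ in range(300):              # roughly 300 bets in the 30-minute session
    stake = f * bankroll          # stake shrinks after losses, grows after wins
    bankroll += stake if random.random() < 0.60 else -stake
    bankroll = min(bankroll, 250.0)   # the experiment capped payouts at $250

print(f"final bankroll: ${bankroll:.2f}")
```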
nibble  betting  investing  ORFE  acm  checklists  levers  probability  algorithms  wiki  reference  atoms  extrema  parsimony  tidbits  decision-theory  decision-making  street-fighting  mental-math  calculation 
august 2017 by nhaliday
Book review: "Working Effectively with Legacy Code" by Michael C. Feathers - Eli Bendersky's website
The basic premise of the book is simple, and can be summarized as follows:

To improve some piece of code, we must be able to refactor it.
To be able to refactor code, we must have tests that prove our refactoring didn't break anything.
To have reasonable tests, the code has to be testable; that is, it should be in a form amenable to test harnessing. This most often means breaking implicit dependencies.
... and the author spends about 400 pages on how to achieve that. This book is dense, and it took me a long time to plow through it. I started reading linearly, but very soon discovered this approach doesn't work. So I began hopping forward and backward between the main text and the "dependency-breaking techniques" chapter, which holds isolated recipes for dealing with specific kinds of dependencies. There's quite a bit of repetition in the book, which makes it even more tedious to read.

The techniques described by the author are as terrible as the code they're up against: horrible abuses of the preprocessor in C/C++, abuses of inheritance in C++ and Java, and so on. Particularly the latter is quite sobering. If you love OOP, beware: this book may leave you disenchanted, if not full of hate.

To reiterate the conclusion I already presented earlier - get this book if you have to work with old balls of mud; it will be effort well spent. Otherwise, if you're working on one of those new-age continuously integrated codebases with a 2/1 test to code ratio, feel free to skip it.
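to make "breaking implicit dependencies" concrete, a minimal sketch — my illustration, not an excerpt from the book, all names hypothetical — of the classic move: inject the collaborator instead of hard-wiring it, so the class fits in a test harness and a fake can stand in:

```python
# A minimal dependency-breaking sketch, not from the book; names are hypothetical.

class SmtpMailer:
    """The concrete collaborator the production code really uses."""
    def send(self, to: str, body: str) -> None:
        ...  # talks to a real SMTP server

class ReportSender:
    # Before: self.mailer = SmtpMailer() hidden inside __init__ -- an implicit
    # dependency that drags a live SMTP server into every test.
    # After: the dependency is passed in, creating a seam for tests.
    def __init__(self, mailer=None):
        self.mailer = mailer or SmtpMailer()

    def send_report(self, to: str, rows: list) -> None:
        self.mailer.send(to, "\n".join(str(r) for r in rows))

class FakeMailer:
    """Test double that records calls instead of sending mail."""
    def __init__(self):
        self.sent = []
    def send(self, to, body):
        self.sent.append((to, body))

def test_send_report():
    fake = FakeMailer()
    ReportSender(mailer=fake).send_report("a@example.com", [1, 2])
    assert fake.sent == [("a@example.com", "1\n2")]

test_send_report()
```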
techtariat  books  review  summary  critique  engineering  programming  intricacy  code-dive  best-practices  checklists  checking  working-stiff  retrofit  oop  code-organizing  legacy  correctness  coupling-cohesion  composition-decomposition  tricks  metabuch  nitty-gritty  move-fast-(and-break-things)  methodology 
july 2017 by nhaliday
Logic | West Hunter
All the time I hear some public figure saying that if we ban or allow X, then logically we have to ban or allow Y, even though there are obvious practical reasons for X and obvious practical reasons against Y.

No, we don’t.

http://www.amnation.com/vfr/archives/005864.html
http://www.amnation.com/vfr/archives/002053.html

compare: https://pinboard.in/u:nhaliday/b:190b299cf04a

Small Change Good, Big Change Bad?: https://www.overcomingbias.com/2018/02/small-change-good-big-change-bad.html
And on reflection it occurs to me that this is actually THE standard debate about change: some see small changes and either like them or aren’t bothered enough to advocate what it would take to reverse them, while others imagine such trends continuing long enough to result in very large and disturbing changes, and then suggest stronger responses.

For example, on increased immigration some point to the many concrete benefits immigrants now provide. Others imagine that large cumulative immigration eventually results in big changes in culture and political equilibria. On fertility, some wonder if civilization can survive in the long run with declining population, while others point out that population should rise for many decades, and few endorse the policies needed to greatly increase fertility. On genetic modification of humans, some ask why not let doctors correct obvious defects, while others imagine parents eventually editing kid genes mainly to max kid career potential. On oil some say that we should start preparing for the fact that we will eventually run out, while others say that we keep finding new reserves to replace the ones we use.

...

If we consider any parameter, such as typical degree of mind wandering, we are unlikely to see the current value as exactly optimal. So if we give people the benefit of the doubt to make local changes in their interest, we may accept that this may result in a recent net total change we don’t like. We may figure this is the price we pay to get other things we value more, and we know that it can be very expensive to limit choices severely.

But even though we don’t see the current value as optimal, we also usually see the optimal value as not terribly far from the current value. So if we can imagine current changes as part of a long term trend that eventually produces very large changes, we can become more alarmed and willing to restrict current changes. The key question is: when is that a reasonable response?

First, big concerns about big long term changes only make sense if one actually cares a lot about the long run. Given the usual high rates of return on investment, it is cheap to buy influence on the long term, compared to influence on the short term. Yet few actually devote much of their income to long term investments. This raises doubts about the sincerity of expressed long term concerns.
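back-of-the-envelope on "cheap to buy influence on the long term" (my numbers, purely illustrative — not Hanson's): at a sustained real return r, a dollar invested now compounds to (1 + r)^T dollars of spending power T years out:

```python
# Illustrative compounding, my numbers -- not from the post.
for r in (0.02, 0.05, 0.07):                      # candidate real rates of return
    print(f"r = {r:.0%}: $1 today -> ${(1 + r) ** 40:.2f} in 40 years")
# r = 2%: $1 today -> $2.21 in 40 years
# r = 5%: $1 today -> $7.04 in 40 years
# r = 7%: $1 today -> $14.97 in 40 years
```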

Second, in our simplest models of the world good local choices also produce good long term choices. So if we presume good local choices, bad long term outcomes require non-simple elements, such as coordination, commitment, or myopia problems. Of course many such problems do exist. Even so, someone who claims to see a long term problem should be expected to identify specifically which such complexities they see at play. It shouldn’t be sufficient to just point to the possibility of such problems.

...

Fourth, many more processes and factors limit big changes, compared to small changes. For example, in software small changes are often trivial, while larger changes are nearly impossible, at least without starting again from scratch. Similarly, modest changes in mind wandering can be accomplished with minor attitude and habit changes, while extreme changes may require big brain restructuring, which is much harder because brains are complex and opaque. Recent changes in market structure may reduce the number of firms in each industry, but that doesn’t make it remotely plausible that one firm will eventually take over the entire economy. Projections of small changes into large changes need to consider the possibility of many such factors limiting large changes.

Fifth, while it can be reasonably safe to identify short term changes empirically, the longer term a forecast the more one needs to rely on theory, and the more different areas of expertise one must consider when constructing a relevant model of the situation. Beware a mere empirical projection into the long run, or a theory-based projection that relies on theories in only one area.

We should very much be open to the possibility of big bad long term changes, even in areas where we are okay with short term changes, or at least reluctant to sufficiently resist them. But we should also try to hold those who argue for the existence of such problems to relatively high standards. Their analysis should be about future times that we actually care about, and can at least roughly foresee. It should be based on our best theories of relevant subjects, and it should consider the possibility of factors that limit larger changes.

And instead of suggesting big ways to counter short term changes that might lead to long term problems, it is often better to identify markers to warn of larger problems. Then instead of acting in big ways now, we can make sure to track these warning markers, and ready ourselves to act more strongly if they appear.

Growth Is Change. So Is Death.: https://www.overcomingbias.com/2018/03/growth-is-change-so-is-death.html
I see the same pattern when people consider long term futures. People can be quite philosophical about the extinction of humanity, as long as this is due to natural causes. Every species dies; why should humans be different? And few get bothered by humans making modest small-scale short-term modifications to their own lives or environment. We are mostly okay with people using umbrellas when it rains, moving to new towns to take new jobs, digging a flood ditch after our yard floods, and so on. And the net social effect of many small changes is technological progress, economic growth, new fashions, and new social attitudes, all of which we tend to endorse in the short run.

Even regarding big human-caused changes, most don’t worry if changes happen far enough in the future. Few actually care much about the future past the lives of people they’ll meet in their own life. But for changes that happen within someone’s time horizon of caring, the bigger that changes get, and the longer they are expected to last, the more that people worry. And when we get to huge changes, such as taking apart the sun, a population of trillions, lifetimes of millennia, massive genetic modification of humans, robots replacing people, a complete loss of privacy, or revolutions in social attitudes, few are blasé, and most are quite wary.

This differing attitude regarding small local changes versus large global changes makes sense for parameters that tend to revert back to a mean. Extreme values then do justify extra caution, while changes within the usual range don’t merit much notice, and can be safely left to local choice. But many parameters of our world do not mostly revert back to a mean. They drift long distances over long times, in hard to predict ways that can be reasonably modeled as a basic trend plus a random walk.
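a toy contrast (mine, not Hanson's) between the two kinds of parameters: a mean-reverting one stays near its long-run value at any horizon, while trend-plus-random-walk drifts arbitrarily far given enough time:

```python
# Toy illustration, not from the post: mean-reverting vs. trend + random walk.
import random

def mean_reverting(steps=10_000, kappa=0.1, sigma=1.0, seed=0):
    """AR(1)-style parameter: each step is pulled back toward 0."""
    rng, x = random.Random(seed), 0.0
    for _ in range(steps):
        x += -kappa * x + rng.gauss(0, sigma)
    return x

def trend_plus_walk(steps=10_000, drift=0.05, sigma=1.0, seed=0):
    """No restoring force: small shocks accumulate into large drift."""
    rng, x = random.Random(seed), 0.0
    for _ in range(steps):
        x += drift + rng.gauss(0, sigma)
    return x

print(mean_reverting())  # stays near 0 no matter the horizon
print(trend_plus_walk()) # ~ drift * steps on average: here, around 500
```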

This different attitude can also make sense for parameters that have two or more very different causes of change, one which creates frequent small changes, and another which creates rare huge changes. (Or perhaps a continuum between such extremes.) If larger sudden changes tend to cause more problems, it can make sense to be more wary of them. However, for most parameters most change results from many small changes, and even then many are quite wary of this accumulating into big change.

People with a sharp time horizon of caring should be more wary of long-drifting parameters the larger the changes that would happen within that horizon. This perspective predicts that the people who are most wary of big future changes are those with the longest time horizons, and who more expect lumpy change processes. This prediction doesn’t seem to fit well with my experience, however.

Those who most worry about big long term changes usually seem okay with small short term changes. Even when they accept that most change is small and that it accumulates into big change. This seems incoherent to me. It seems like many other near versus far incoherences, like expecting things to be simpler when you are far away from them, and more complex when you are closer. You should either become more wary of short term changes, knowing that this is how big longer term change happens, or you should be more okay with big long term change, seeing that as the legitimate result of the small short term changes you accept.

https://www.overcomingbias.com/2018/03/growth-is-change-so-is-death.html#comment-3794966996
The point here is that gradual shifts in in-group beliefs are both natural and no big deal. Humans are built to readily do this, and to forget that they do. But ultimately it is not a worry or concern.

But radical shifts that are big, whether near or far, portend strife and conflict. Either between groups or within them. If the shift is big enough, our intuition tells us our in-group will be in a fight. Alarms go off.
west-hunter  scitariat  discussion  rant  thinking  rationality  metabuch  critique  systematic-ad-hoc  analytical-holistic  metameta  ideology  philosophy  info-dynamics  aphorism  darwinian  prudence  pragmatic  insight  tradition  s:*  2016  multi  gnon  right-wing  formal-values  values  slippery-slope  axioms  alt-inst  heuristic  anglosphere  optimate  flux-stasis  flexibility  paleocon  polisci  universalism-particularism  ratty  hanson  list  examples  migration  fertility  intervention  demographics  population  biotech  enhancement  energy-resources  biophysical-econ  nature  military  inequality  age-generation  time  ideas  debate  meta:rhetoric  local-global  long-short-run  gnosis-logos  gavisti  stochastic-processes  eden-heaven  politics  equilibrium  hive-mind  genetics  defense  competition  arms  peace-violence  walter-scheidel  speed  marginal  optimization  search  time-preference  patience  futurism  meta:prediction  accuracy  institutions  tetlock  theory-practice  wire-guided  priors-posteriors  distribution  moments  biases  epistemic  nea 
may 2017 by nhaliday
In the first place | West Hunter
We hear a lot about innovative educational approaches, and since these silly people have been at this for a long time now, we hear just as often about the innovative approaches that some idiot started up a few years ago and that are now crashing in flames. We’re in steady-state.

I’m wondering if it isn’t time to try something archaic.  In particular, mnemonic techniques, such as the method of loci.  As far as I know, nobody has actually tried integrating the more sophisticated mnemonic techniques into a curriculum.  Sure, we all know useful acronyms, like the one for resistor color codes, but I’ve not heard of anyone teaching kids how to build a memory palace.

https://westhunt.wordpress.com/2013/12/28/in-the-first-place/#comment-20106
I have never used formal mnemonic techniques, but life has recently tested me on how well I remember material from my college days. Turns out that I can still do the sorts of math and physics problems that I could then, in subjects like classical mechanics, real analysis, combinatorics, complex variables, quantum mechanics, statistical mechanics, etc. I usually have to crack the book though. Some of that material I have used from time to time, or even fairly often (especially linear algebra), most not. I’m sure I’m slower than I was then, at least on the stuff I haven’t used.

https://westhunt.wordpress.com/2013/12/28/in-the-first-place/#comment-20109
Long-term memory capacity must be finite, but I know of no evidence that anyone has ever run out of it. As for the idea that you don’t really need a lot of facts in your head to come up with new ideas: pretty much the opposite of the truth, in a lot of fields.

https://en.wikipedia.org/wiki/Method_of_loci

Mental Imagery > Ancient Imagery Mnemonics: https://plato.stanford.edu/entries/mental-imagery/ancient-imagery-mnemonics.html
In the Middle Ages and the Renaissance, very elaborate versions of the method evolved, using specially learned imaginary spaces (Memory Theaters or Palaces), and complex systems of predetermined symbolic images, often imbued with occult or spiritual significances. However, modern experimental research has shown that even a simple and easily learned form of the method of loci can be highly effective (Ross & Lawrence, 1968; Maguire et al., 2003), as are several other imagery based mnemonic techniques (see section 4.2 of the main entry).

The advantages of organizing knowledge in terms of country and place: http://marginalrevolution.com/marginalrevolution/2018/02/advantages-organizing-knowledge-terms-country-place.html

https://www.quora.com/What-are-the-best-books-on-Memory-Palace

fascinating aside:
US vs Nazi army, Vietnam, the draft: https://westhunt.wordpress.com/2013/12/28/in-the-first-place/#comment-20136
You think I know more about this than a retired major general and former head of the War College? I do, of course, but that fact itself should worry you.

He’s not all wrong, but a lot of what he says is wrong. For example, the German Army was a conscript army, so conscription itself can’t explain why the Krauts were about 25% more effective than the average American unit. Nor is it true that the draft in WWII was corrupt.

The US had a different mix of armed forces – more air forces and a much larger Navy than Germany. Those services have higher technical requirements and sucked up a lot of the smarter guys. That was just a product of the strategic situation.

The Germans had better officers, partly because of better training and doctrine, partly the fruit of a different attitude towards the army. The US, much of the time, thought of the Army as a career for losers, but Germans did not.

The Germans had an enormous amount of relevant combat experience, much more than anyone in the US. Spend a year or two on the Eastern Front and you learn.

And the Germans had better infantry weapons.

The US tooth-to-tail ratio was, I think, worse than that of the Germans: some of that was a natural consequence of being an expeditionary force, but some was just a mistake. You want supply sergeants to be literate, but it is probably true that we put too many of the smarter guys into non-combat positions. That changed some when we ran into manpower shortages in late 1944 and combed out the support positions.

This guy is back-projecting Vietnam problems into WWII – he’s mostly wrong.

more (more of a focus on US Marines than Army): https://www.quora.com/Were-US-Marines-tougher-than-elite-German-troops-in-WW2/answer/Joseph-Scott-13
west-hunter  scitariat  speculation  ideas  proposal  education  learning  retention  neurons  the-classics  nitty-gritty  visuo  spatial  psych-architecture  multi  poast  history  mostly-modern  world-war  war  military  strategy  usa  europe  germanic  cold-war  visual-understanding  cartoons  narrative  wordlessness  comparison  asia  developing-world  knowledge  metabuch  econotariat  marginal-rev  discussion  world  thinking  government  local-global  humility  wire-guided  policy  iron-age  mediterranean  wiki  reference  checklists  exocortex  early-modern  org:edu  philosophy  enlightenment-renaissance-restoration-reformation  qra  q-n-a  books  recommendations  list  links  ability-competence  leadership  elite  higher-ed  math  physics  linear-algebra  cost-benefit  prioritizing  defense  martial  war-nerd 
may 2017 by nhaliday
Book Review: How Asia Works by Joe Studwell | Don't Worry About the Vase
1. Thou shalt enact real land reform.
2. Thou shalt protect infant industries and enforce upon them export discipline.
3. Thou shalt repress and direct thine financial system.

export discipline = see what price foreigners will buy product for

Garett Jones agrees: https://twitter.com/GarettJones/status/902579701968928771
https://archive.is/AlKxq
Park Chung Hee's brutal combination in SK of hardening the budget constraint while dangling incentives in front of top exporters was key IMO
By dangling the incentives before *exporters*, Park gave them an incentive to please customers who couldn't be bribed or shamed into buying.
and keeping the militant unions in check :-)
ratty  unaffiliated  books  summary  review  critique  history  economics  policy  world  developing-world  asia  government  stylized-facts  growth-econ  japan  korea  china  sinosphere  property-rights  labor  agriculture  rent-seeking  class  communism  checklists  political-econ  broad-econ  gray-econ  finance  org:davos  capitalism  trade  nationalism-globalism  🎩  markets  paying-rent  supply-demand  great-powers  multi  twitter  social  commentary  econotariat  garett-jones  pseudoE  wealth-of-nations 
february 2017 by nhaliday