nhaliday + hmm   581

Japanese sound symbolism - Wikipedia
Japanese has a large inventory of sound symbolic or mimetic words, known in linguistics as ideophones.[1][2] Sound symbolic words are found in written as well as spoken Japanese.[3] Known popularly as onomatopoeia, these words are not just imitative of sounds but cover a much wider range of meanings;[1] indeed, many sound-symbolic words in Japanese are for things that don't make any noise originally, most clearly demonstrated by shiinto (しいんと), meaning "silently".
language  foreign-lang  trivia  wiki  reference  audio  hmm  alien-character  culture  list  objektbuch  japan  asia  writing 
5 weeks ago by nhaliday
[Tutorial] A way to Practice Competitive Programming : From Rating 1000 to 2400+ - Codeforces
this guy really didn't take that long to reach red..., as of today he's done 20 contests in 2y to my 44 contests in 7y (w/ a long break)...>_>

tho he has 3 times as many submissions as me. maybe he does a lot of virtual rounds?

some snippets from the PDF guide linked:
1400-1900:
To reach rating 1900, the following skills are needed:
- You know and can use major algorithms like these: brute force, DP, DFS, BFS, Dijkstra, binary indexed tree, nCr/nPr, mod inverse, bitmasks, binary search
- You can code fast (for example, 5 minutes for R1100 problems, 10 minutes for R1400 problems)
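Of the algorithms listed, the binary indexed tree (Fenwick tree) is perhaps the most compact to have memorized; a minimal point-update/prefix-sum version might look like this (an illustrative sketch, not code from the guide):

```python
class FenwickTree:
    """Binary indexed tree: point updates and prefix sums, both O(log n)."""

    def __init__(self, n):
        self.n = n
        self.tree = [0] * (n + 1)  # 1-indexed internally

    def update(self, i, delta):
        """Add delta to element i (1-indexed)."""
        while i <= self.n:
            self.tree[i] += delta
            i += i & (-i)  # jump to the next node covering index i

    def prefix_sum(self, i):
        """Return the sum of elements 1..i."""
        s = 0
        while i > 0:
            s += self.tree[i]
            i -= i & (-i)  # drop the lowest set bit
        return s

ft = FenwickTree(10)
ft.update(3, 5)
ft.update(7, 2)
print(ft.prefix_sum(5), ft.prefix_sum(10))  # 5 7
```

Range sums follow as prefix_sum(r) - prefix_sum(l - 1).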

If you are not good at fast coding and fast debugging, you should solve AtCoder problems. Statistically, many Japanese competitors are relatively good at fast coding but not so good at solving difficult problems. I think that's because of AtCoder.

I recommend solving problems C and D of AtCoder Beginner Contests. On average, if you can solve problem C of an AtCoder Beginner Contest within 10 minutes and problem D within 20 minutes, you are Div1 in FastCodingForces :)

...

Interestingly, typical problems are concentrated in Div2-only rounds. If you are not good at Div2-only rounds, it is likely that you are not good at using typical algorithms, especially the 10 algorithms listed above.

If you can handle typical problems but are not good at solving problems above R1500 on Codeforces, you should begin TopCoder. This type of practice is effective for people who are good at Div.2-only rounds but not at Div.1+Div.2 combined or Div.1+Div.2 separated rounds.

Sometimes, especially in Div1+Div2 rounds, some problems need mathematical concepts or thinking. Since TopCoder has a lot of problems that use them (and that are also light on implementation!), you should solve TopCoder problems.

I recommend solving the Div1Easy problems of the most recent 100 SRMs. But some problems are really difficult (e.g. even red-ranked coders could not solve them), so before attempting one, you should check what percentage of people solved it. You can use https://competitiveprogramming.info/ for that information.

1900-2200:
To reach rating 2200, the following skills are needed:
- You know and can use the 10 algorithms stated on p. 11, plus segment trees (including lazy propagation)
- You can solve problems very fast: for example, 5 min for R1100, 10 min for R1500, 15 min for R1800, 40 min for R2000
- You have decent skill at mathematical thinking and at analyzing problems
- You have the mental strength to keep thinking about a solution for more than an hour, and you don't give up even if you are below average in Div1 in the middle of a contest

This is only my way of practicing, but I did many virtual contests when my rating was 2000. Here, "virtual contest" does not mean "Virtual Participation" on Codeforces. It means choosing 4 or 5 problems whose difficulty is near your rating (for example, if you are rated 2000, choose R2000 problems on Codeforces) and solving them within 2 hours. You can use https://vjudge.net/, a website where you can build virtual contests from problems on many online judges (e.g. AtCoder, Codeforces, Hackerrank, Codechef, POJ, ...).

If you could not solve a problem within the virtual contest and could not find the solution afterwards, you should read the editorial. Google it (e.g. to find the editorial of Codeforces Round #556 (Div. 1), search "Codeforces Round #556 editorial"). There is one more important thing for gaining rating on Codeforces: to solve problems fast, you should equip a coding library (template code). For example, I think that equipping segment tree, lazy segment tree, modint, FFT, and geometry libraries is very effective.
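As an illustration of the sort of library code meant here, a minimal modint in Python (a sketch using a common contest convention of a prime modulus, not the author's actual template):

```python
MOD = 998244353  # a prime modulus commonly used in contests

class ModInt:
    """Arithmetic modulo MOD, with division via Fermat's little theorem."""

    def __init__(self, v):
        self.v = v % MOD

    def __add__(self, other):
        return ModInt(self.v + other.v)

    def __sub__(self, other):
        return ModInt(self.v - other.v)

    def __mul__(self, other):
        return ModInt(self.v * other.v)

    def inv(self):
        # Since MOD is prime, a^(MOD-2) = a^(-1) (mod MOD)
        return ModInt(pow(self.v, MOD - 2, MOD))

    def __truediv__(self, other):
        return self * other.inv()

a, b = ModInt(10), ModInt(4)
print((a / b * b).v)  # 10: division round-trips through the inverse
```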

2200 to 2400:
Ratings 2200 and 2400 are actually very different ...

To reach rating 2400, the following skills are needed:
- The skills stated in the previous section (rating 2200)
- The ability to solve difficult problems, i.e. those solved by fewer than 100 people in Div1 contests

...

First, there are a lot of educational problems on AtCoder. I recommend solving problems E and F (especially the 700-900 point problems) of AtCoder Regular Contests, especially ARC058-ARC090. Old AtCoder Regular Contests are balanced between "considering" and "typical" problems, but sadly, AtCoder Grand Contest and recent AtCoder Regular Contest problems are, I think, too biased toward "considering", so I don't recommend them if your goal is gaining rating on Codeforces. (Though if you want a rating above 2600, you should solve problems from AtCoder Grand Contest.)

For me, actually, after solving AtCoder Regular Contests, my average performance in CF virtual contests increased from 2100 to 2300 (I could not reach 2400 because it was still early).

If you cannot solve a problem, I recommend giving up and reading the editorial on the following schedule:

Point value:        600     700     800     900     1000+
CF rating:          R2000   R2200   R2400   R2600   R2800
Time to editorial:  40 min  50 min  60 min  70 min  80 min

If you solve AtCoder educational problems, your competitive programming skills will increase. But there is one more problem: without practical skills, your rating won't increase. So you should do 50+ virtual participations (especially Div.1) on Codeforces. In virtual participation you can learn how to compete as a purple/orange-ranked coder (e.g. strategy) and how to apply, in Codeforces contests, the skills you learned on AtCoder. I strongly recommend reading the editorials of all problems, except the too-difficult ones (e.g. fewer than 30 solvers in contest), after the virtual contest. I also recommend writing reflections about strategy, lessons, and improvements in a notebook after reading the editorials.

In addition, about once a week, I recommend making time to think about a much more difficult problem (e.g. R2800 on Codeforces) for a couple of hours. If you cannot reach the solution after a couple of hours of thinking, I recommend reading the editorial, because you can learn a lot from it. Solving high-level problems may give you the chance to gain over 100 rating in a single contest, and it can also help you solve easier problems faster.
oly  oly-programming  problem-solving  learning  practice  accretion  strategy  hmm  pdf  guide  reflection  advice  wire-guided  marginal  stylized-facts  speed  time  cost-benefit  tools  multi  sleuthin  review  comparison  puzzles  contest  aggregator  recommendations  objektbuch  time-use  growth  studying  🖥  👳  yoga 
august 2019 by nhaliday
The 'science' of training in competitive programming - Codeforces
"Hard problems" is subjective. A good rule of thumb for learning problem solving (at least according to me) is that your problem selection is good if you fail to solve roughly 50% of problems you attempt. Anything in [20%,80%] should still be fine, although many people have problems staying motivated if they fail too often. Read solutions for problems you fail to solve.

(There is some actual math behind this. Hopefully one day I'll have the time to write it down.)
- misof in a comment
--
I don't believe in claims like "either you solve it in 30 minutes to a few hours, or you never solve it at all". There are some magic-at-first-glance algorithms like polynomial hashing, interval trees, or FFT (which is magic even at tenth glance :P), but there are not many of them, and the vast majority of algorithms can be invented on your own, DP for example. In high school I used to solve many problems from the IMO and PMO, and when I didn't solve a problem I would try it again later; I have solved some problems on the third or so attempt. Though, if we restrict ourselves to beginners, I think this still holds true, but it would be better to read solutions after some time, because there are so many other things we can learn; better not to get stuck on one particular problem when there are hundreds of other important concepts to be learnt.
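Of the "magic at first glance" algorithms mentioned, polynomial hashing is the quickest to sketch: precompute prefix hashes once, then compare any two substrings in O(1). The base and modulus below are illustrative choices, not from the comment:

```python
B, M = 131, (1 << 61) - 1  # base and modulus (illustrative choices)

def prefix_hashes(s):
    """pref[i] = polynomial hash of s[:i]; pw[i] = B^i mod M."""
    pref = [0] * (len(s) + 1)
    pw = [1] * (len(s) + 1)
    for i, ch in enumerate(s):
        pref[i + 1] = (pref[i] * B + ord(ch)) % M
        pw[i + 1] = pw[i] * B % M
    return pref, pw

def substring_hash(pref, pw, l, r):
    """Hash of s[l:r] in O(1) after O(n) precomputation."""
    return (pref[r] - pref[l] * pw[r - l]) % M

s = "abcxabc"
pref, pw = prefix_hashes(s)
# s[0:3] and s[4:7] are both "abc", so their hashes agree
print(substring_hash(pref, pw, 0, 3) == substring_hash(pref, pw, 4, 7))  # True
```

Equal substrings always hash equal; distinct substrings can collide, but with a modulus this large collisions are rare in practice.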
oly  oly-programming  problem-solving  learning  practice  accretion  strategy  marginal  wire-guided  stylized-facts  hmm  advice  tactics  time  time-use  cost-benefit  growth  studying  🖥  👳 
august 2019 by nhaliday
The returns to speaking a second language
Does speaking a foreign language have an impact on earnings? The authors use a variety of empirical strategies to address this issue for a representative sample of U.S. college graduates. OLS regressions with a complete set of controls to minimize concerns about omitted variable biases, propensity score methods, and panel data techniques all lead to similar conclusions. The hourly earnings of those who speak a foreign language are more than 2 percent higher than the earnings of those who do not. The authors obtain higher and more imprecise point estimates using state high school graduation and college entry and graduation requirements as instrumental variables.

...

We find that college graduates who speak a second language earn, on average, wages that are 2 percent higher than those who don’t. We include a complete set of controls for general ability using information on grades and college admission tests and reduce the concern that selection drives the results controlling for the academic major chosen by the student. We obtain similar results with simple regression methods if we use nonparametric methods based on the propensity score and if we exploit the temporal variation in the knowledge of a second language. The estimates, thus, are not driven by observable differences in the composition of the pools of bilinguals and monolinguals, by the linear functional form that we impose in OLS regressions, or by constant unobserved heterogeneity. To reduce the concern that omitted variables bias our estimates, we make use of several instrumental variables (IVs). Using high school and college graduation requirements as instruments, we estimate more substantial returns to learning a second language, on the order of 14 to 30 percent. These results have high standard errors, but they suggest that OLS estimates may actually be biased downward.

...

In separate (unreported) regressions, we explore the labor market returns to speaking specific languages. We estimate OLS regressions following the previous specifications but allow the coefficient to vary by language spoken. In our sample, German is the language that obtains the highest rewards in the labor market. The returns to speaking German are 3.8 percent, while they are 2.3 percent for French and 1.5 percent for Spanish. In fact, only the returns to speaking German remain statistically significant in this regression. The results indicate that those who speak languages known by a smaller number of people obtain higher rewards in the labor market.

The Relative Importance of the European Languages: https://ideas.repec.org/p/kud/kuiedp/0623.html
study  economics  labor  cost-benefit  hmm  language  foreign-lang  usa  empirical  evidence-based  education  human-capital  compensation  correlation  endogenous-exogenous  natural-experiment  policy  wonkish  🎩  french  germanic  latin-america  multi  spanish  china  asia  japan 
july 2019 by nhaliday
The Existential Risk of Math Errors - Gwern.net
How big is this upper bound? Mathematicians have often made errors in proofs. But it's rarer for ideas to be accepted for a long time and then rejected. We can divide errors into 2 basic cases corresponding to type I and type II errors:

1. Mistakes where the theorem is still true, but the proof was incorrect (type I)
2. Mistakes where the theorem was false, and the proof was also necessarily incorrect (type II)

Before someone comes up with a final answer, a mathematician may have many levels of intuition in formulating & working on the problem, but we’ll consider the final end-product where the mathematician feels satisfied that he has solved it. Case 1 is perhaps the most common case, with innumerable examples; this is sometimes due to mistakes in the proof that anyone would accept is a mistake, but many of these cases are due to changing standards of proof. For example, when David Hilbert discovered errors in Euclid’s proofs which no one noticed before, the theorems were still true, and the gaps more due to Hilbert being a modern mathematician thinking in terms of formal systems (which of course Euclid did not think in). (David Hilbert himself turns out to be a useful example of the other kind of error: his famous list of 23 problems was accompanied by definite opinions on the outcome of each problem and sometimes timings, several of which were wrong or questionable.) Similarly, early calculus used ‘infinitesimals’ which were sometimes treated as being 0 and sometimes treated as an indefinitely small non-zero number; this was incoherent and strictly speaking, practically all of the calculus results were wrong because they relied on an incoherent concept - but of course the results were some of the greatest mathematical work ever conducted and when later mathematicians put calculus on a more rigorous footing, they immediately re-derived those results (sometimes with important qualifications), and doubtless as modern math evolves other fields have sometimes needed to go back and clean up the foundations and will in the future.

...

Isaac Newton, incidentally, gave two proofs of the same solution to a problem in probability, one via enumeration and the other more abstract; the enumeration was correct, but the other proof was totally wrong, and this was not noticed for a long time, leading Stigler to remark:

...

TYPE I > TYPE II?
“Lefschetz was a purely intuitive mathematician. It was said of him that he had never given a completely correct proof, but had never made a wrong guess either.”
- Gian-Carlo Rota

Case 2 is disturbing, since it is a case in which we wind up with false beliefs and also false beliefs about our beliefs (we no longer know that we don’t know). Case 2 could lead to extinction.

...

Except, errors do not seem to be evenly & randomly distributed between case 1 and case 2. There seem to be far more case 1s than case 2s, as already mentioned in the early calculus example: far more than 50% of the early calculus results were correct when checked more rigorously. Richard Hamming attributes to Ralph Boas a comment, from his time editing Mathematical Reviews, that “of the new results in the papers reviewed most are true but the corresponding proofs are perhaps half the time plain wrong”.

...

Gian-Carlo Rota gives us an example with Hilbert:

...

Olga labored for three years; it turned out that all mistakes could be corrected without any major changes in the statement of the theorems. There was one exception, a paper Hilbert wrote in his old age, which could not be fixed; it was a purported proof of the continuum hypothesis, you will find it in a volume of the Mathematische Annalen of the early thirties.

...

Leslie Lamport advocates for machine-checked proofs and a more rigorous style of proofs similar to natural deduction, noting that a mathematician acquaintance guesses at a broad error rate of 1/3 and that he routinely found mistakes in his own proofs and, worse, believed false conjectures.

[more on these "structured proofs":
https://academia.stackexchange.com/questions/52435/does-anyone-actually-publish-structured-proofs
https://mathoverflow.net/questions/35727/community-experiences-writing-lamports-structured-proofs
]
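For a taste of what a machine-checked proof looks like in practice, here is a tiny example in Lean 4 (illustrative only; not from the essay):

```lean
-- Every step is verified by the kernel; a wrong rewrite fails to compile.
theorem add_comm' (m n : Nat) : m + n = n + m := by
  induction n with
  | zero => simp
  | succ k ih => rw [Nat.add_succ, ih, Nat.succ_add]
```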

We can probably add software to that list: early software engineering work found that, dismayingly, bug rates seem to be simply a function of lines of code, and one would expect diseconomies of scale. So one would expect that in going from the ~4,000 lines of code of the Microsoft DOS operating system kernel to the ~50,000,000 lines of code in Windows Server 2003 (with full systems of applications and libraries being even larger: the comprehensive Debian repository in 2007 contained ~323,551,126 lines of code) that the number of active bugs at any time would be… fairly large. Mathematical software is hopefully better, but practitioners still run into issues (eg Durán et al 2014, Fonseca et al 2017) and I don’t know of any research pinning down how buggy key mathematical systems like Mathematica are or how much published mathematics may be erroneous due to bugs. This general problem led to predictions of doom and spurred much research into automated proof-checking, static analysis, and functional languages.

[related:
https://mathoverflow.net/questions/11517/computer-algebra-errors
I don't know any interesting bugs in symbolic algebra packages but I know a true, enlightening and entertaining story about something that looked like a bug but wasn't.

Define sinc(x) = (sin x)/x.

Someone found the following result in an algebra package:

∫_0^∞ sinc(x) dx = π/2

They then found the following results:

...

So of course when they got:

∫_0^∞ sinc(x) sinc(x/3) sinc(x/5) ⋯ sinc(x/15) dx = (467807924713440738696537864469/935615849440640907310521750000) π

hmm:
Which means that nobody knows Fourier analysis nowadays. Very sad and discouraging story... – fedja Jan 29 '10 at 18:47
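The phenomenon quoted above is the famous Borwein integral: ∫_0^∞ ∏ sinc(x/(2k+1)) dx equals exactly π/2 as long as 1/3 + 1/5 + ⋯ + 1/(2n+1) stays below 1, and the first reciprocal to push the sum past 1, breaking the pattern, is 1/15. A quick sanity check of that criterion (checking only the sums, not the integrals themselves):

```python
from fractions import Fraction

# The product integral stays at exactly pi/2 while the sum of the
# reciprocals of the inner denominators is below 1 (Borwein & Borwein).
total = Fraction(0)
for d in range(3, 16, 2):
    total += Fraction(1, d)
    print(d, float(total), total < 1)
# The sum through 1/13 is still below 1; adding 1/15 exceeds 1,
# which is exactly where the integrals stop being pi/2.
```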

--

Because the most popular systems are all commercial, they tend to guard their bug database rather closely -- making them public would seriously cut their sales. For example, for the open source project Sage (which is quite young), you can get a list of all the known bugs from this page. 1582 known issues on Feb.16th 2010 (which includes feature requests, problems with documentation, etc).

That is an order of magnitude less than the commercial systems. And it's not because it is better, it is because it is younger and smaller. It might be better, but until SAGE does a lot of analysis (about 40% of CAS bugs are there) and a fancy user interface (another 40%), it is too hard to compare.

I once ran a graduate course whose core topic was studying the fundamental disconnect between the algebraic nature of CAS and the analytic nature of what it is mostly used for. There are issues of logic -- CASes work more or less in an intensional logic, while most of analysis is stated in a purely extensional fashion. There is no well-defined 'denotational semantics' for expressions-as-functions, which strongly contributes to the deeper bugs in CASes.]

...

Should such widely-believed conjectures as P≠NP or the Riemann hypothesis turn out to be false, then because they are assumed by so many existing proofs, a far larger math holocaust would ensue - and our previous estimates of error rates will turn out to have been substantial underestimates. But it may be a cloud with a silver lining, if it doesn’t come at a time of danger.

https://mathoverflow.net/questions/338607/why-doesnt-mathematics-collapse-down-even-though-humans-quite-often-make-mista

more on formal methods in programming:
https://www.quantamagazine.org/formal-verification-creates-hacker-proof-code-20160920/
https://intelligence.org/2014/03/02/bob-constable/

https://softwareengineering.stackexchange.com/questions/375342/what-are-the-barriers-that-prevent-widespread-adoption-of-formal-methods
Update: measured effort
In the October 2018 issue of Communications of the ACM there is an interesting article about Formally verified software in the real world with some estimates of the effort.

Interestingly (based on OS development for military equipment), it seems that producing formally proved software requires 3.3 times more effort than with traditional engineering techniques. So it's really costly.

On the other hand, it requires 2.3 times less effort to get high security software this way than with traditionally engineered software if you add the effort to make such software certified at a high security level (EAL 7). So if you have high reliability or security requirements there is definitely a business case for going formal.

WHY DON'T PEOPLE USE FORMAL METHODS?: https://www.hillelwayne.com/post/why-dont-people-use-formal-methods/
You can see examples of how all of these look at Let’s Prove Leftpad. HOL4 and Isabelle are good examples of “independent theorem” specs, SPARK and Dafny have “embedded assertion” specs, and Coq and Agda have “dependent type” specs.

If you squint a bit it looks like these three forms of code spec map to the three main domains of automated correctness checking: tests, contracts, and types. This is not a coincidence. Correctness is a spectrum, and formal verification is one extreme of that spectrum. As we reduce the rigour (and effort) of our verification we get simpler and narrower checks, whether that means limiting the explored state space, using weaker types, or pushing verification to the runtime. Any means of total specification then becomes a means of partial specification, and vice versa: many consider Cleanroom a formal verification technique, which primarily works by pushing code review far beyond what’s humanly possible.

...

The question, then: “is 90/95/99% correct significantly cheaper than 100% correct?” The answer is very yes. We all are comfortable saying that a codebase we’ve well-tested and well-typed is mostly correct modulo a few fixes in prod, and we’re even writing more than four lines of code a day. In fact, the vast… [more]
ratty  gwern  analysis  essay  realness  truth  correctness  reason  philosophy  math  proofs  formal-methods  cs  programming  engineering  worse-is-better/the-right-thing  intuition  giants  old-anglo  error  street-fighting  heuristic  zooming  risk  threat-modeling  software  lens  logic  inference  physics  differential  geometry  estimate  distribution  robust  speculation  nonlinearity  cost-benefit  convexity-curvature  measure  scale  trivia  cocktail  history  early-modern  europe  math.CA  rigor  news  org:mag  org:sci  miri-cfar  pdf  thesis  comparison  examples  org:junk  q-n-a  stackex  pragmatic  tradeoffs  cracker-prog  techtariat  invariance  DSL  chart  ecosystem  grokkability  heavyweights  CAS  static-dynamic  lower-bounds  complexity  tcs  open-problems  big-surf  ideas  certificates-recognition  proof-systems  PCP  mediterranean  SDP  meta:prediction  epistemic  questions  guessing  distributed  overflow  nibble  soft-question  track-record  big-list  hmm  frontier  state-of-art  move-fast-(and-break-things)  grokkability-clarity  technical-writing  trust 
july 2019 by nhaliday
Why is Google Translate so bad for Latin? A longish answer. : latin
hmm:
> All it does is correlate sequences of up to five consecutive words in texts that have been manually translated into two or more languages.
That sort of system ought to be perfect for a dead language, though. Dump all the Cicero, Livy, Lucretius, Vergil, and Oxford Latin Course into a database and we're good.

We're not exactly inundated with brand new Latin to translate.
--
> Dump all the Cicero, Livy, Lucretius, Vergil, and Oxford Latin Course into a database and we're good.
What makes you think that the Google folks haven't done so and used that to create the language models they use?
> That sort of system ought to be perfect for a dead language, though.
Perhaps. But it will be bad at translating novel English sentences to Latin.
foreign-lang  reddit  social  discussion  language  the-classics  literature  dataset  measurement  roots  traces  syntax  anglo  nlp  stackex  links  q-n-a  linguistics  lexical  deep-learning  sequential  hmm  project  arrows  generalization  state-of-art  apollonian-dionysian  machine-learning  google 
june 2019 by nhaliday
An Efficiency Comparison of Document Preparation Systems Used in Academic Research and Development
The choice of an efficient document preparation system is an important decision for any academic researcher. To assist the research community, we report a software usability study in which 40 researchers across different disciplines prepared scholarly texts with either Microsoft Word or LaTeX. The probe texts included simple continuous text, text with tables and subheadings, and complex text with several mathematical equations. We show that LaTeX users were slower than Word users, wrote less text in the same amount of time, and produced more typesetting, orthographical, grammatical, and formatting errors. On most measures, expert LaTeX users performed even worse than novice Word users. LaTeX users, however, more often report enjoying using their respective software. We conclude that even experienced LaTeX users may suffer a loss in productivity when LaTeX is used, relative to other document preparation systems. Individuals, institutions, and journals should carefully consider the ramifications of this finding when choosing document preparation strategies, or requiring them of authors.

...

However, our study suggests that LaTeX should be used as a document preparation system only in cases in which a document is heavily loaded with mathematical equations. For all other types of documents, our results suggest that LaTeX reduces the user’s productivity and results in more orthographical, grammatical, and formatting errors, more typos, and less written text than Microsoft Word over the same duration of time. LaTeX users may argue that the overall quality of the text that is created with LaTeX is better than the text that is created with Microsoft Word. Although this argument may be true, the differences between text produced in more recent editions of Microsoft Word and text produced in LaTeX may be less obvious than it was in the past. Moreover, we believe that the appearance of text matters less than the scientific content and impact to the field. In particular, LaTeX is also used frequently for text that does not contain a significant amount of mathematical symbols and formula. We believe that the use of LaTeX under these circumstances is highly problematic and that researchers should reflect on the criteria that drive their preferences to use LaTeX over Microsoft Word for text that does not require significant mathematical representations.

...

A second decision criterion that factors into the choice to use a particular software system is reflection about what drives certain preferences. A striking result of our study is that LaTeX users are highly satisfied with their system despite reduced usability and productivity. From a psychological perspective, this finding may be related to motivational factors, i.e., the driving forces that compel or reinforce individuals to act in a certain way to achieve a desired goal. A vital motivational factor is the tendency to reduce cognitive dissonance. According to the theory of cognitive dissonance, each individual has a motivational drive to seek consonance between their beliefs and their actual actions. If a belief set does not concur with the individual’s actual behavior, then it is usually easier to change the belief rather than the behavior [6]. The results from many psychological studies in which people have been asked to choose between one of two items (e.g., products, objects, gifts, etc.) and then asked to rate the desirability, value, attractiveness, or usefulness of their choice, report that participants often reduce unpleasant feelings of cognitive dissonance by rationalizing the chosen alternative as more desirable than the unchosen alternative [6, 7]. This bias is usually unconscious and becomes stronger as the effort to reject the chosen alternative increases, which is similar in nature to the case of learning and using LaTeX.

...

Given these numbers it remains an open question to determine the amount of taxpayer money that is spent worldwide for researchers to use LaTeX over a more efficient document preparation system, which would free up their time to advance their respective field. Some publishers may save a significant amount of money by requesting or allowing LaTeX submissions because a well-formed LaTeX document complying with a well-designed class file (template) is much easier to bring into their publication workflow. However, this is at the expense of the researchers’ labor time and effort. We therefore suggest that leading scientific journals should consider accepting submissions in LaTeX only if this is justified by the level of mathematics presented in the paper. In all other cases, we think that scholarly journals should request authors to submit their documents in Word or PDF format. We believe that this would be a good policy for two reasons. First, we think that the appearance of the text is secondary to the scientific merit of an article and its impact to the field. And, second, preventing researchers from producing documents in LaTeX would save time and money to maximize the benefit of research and development for both the research team and the public.

[ed.: I sense some salt.

And basically no description of how "# errors" was calculated.]

https://news.ycombinator.com/item?id=8797002
I question the validity of their methodology.
At no point in the paper is exactly what is meant by a "formatting error" or a "typesetting error" defined. From what I gather, the participants in the study were required to reproduce the formatting and layout of the sample text. In theory, a LaTeX file should strictly be a semantic representation of the content of the document; while TeX may have been a raw typesetting language, this is most definitely not the intended use case of LaTeX and is overall a very poor test of its relative advantages and capabilities.
The separation of the semantic definition of the content from the rendering of the document is, in my opinion, the most important feature of LaTeX. Like CSS, this allows the actual formatting to be abstracted away, allowing plain (marked-up) content to be written without worrying about typesetting.
Word has some similar capabilities with styles, and can be used in a similar manner, though few Word users actually use the software properly. This may sound like a relatively insignificant point, but in practice, almost every Word document I have seen has some form of inconsistent formatting. If Word disallowed local formatting changes (including things such as relative spacing of nested bullet points), forcing all formatting changes to be done in document-global styles, it would be a far better typesetting system. Also, the users would be very unhappy.
Yes, LaTeX can undeniably be a pain in the arse, especially when it comes to trying to get figures in the right place; however, the combination of a simple, semantic plain-text representation with a flexible, professional typesetting and rendering engine is an undeniable advantage, and one completely unaddressed by this study.
--
It seems that the test was heavily biased in favor of WYSIWYG.
Of course that approach makes it very simple to reproduce something, as has been tested here. Even simpler would be to scan the document and run OCR. The massive problem with both approaches (WYSIWYG and scanning) is that you can't generalize any of it. You're doomed repeating it forever.
(I'll also note the other significant issue with this study: when the ratings provided by participants came out opposite of their test results, they attributed it to irrational bias.)

https://www.nature.com/articles/d41586-019-01796-1
Over the past few years however, the line between the tools has blurred. In 2017, Microsoft made it possible to use LaTeX’s equation-writing syntax directly in Word, and last year it scrapped Word’s own equation editor. Other text editors also support elements of LaTeX, allowing newcomers to use as much or as little of the language as they like.

https://news.ycombinator.com/item?id=20191348
study  hmm  academia  writing  publishing  yak-shaving  technical-writing  software  tools  comparison  latex  scholar  regularizer  idk  microsoft  evidence-based  science  desktop  time  efficiency  multi  hn  commentary  critique  news  org:sci  flux-stasis  duplication  metrics  biases 
june 2019 by nhaliday
The End of the Editor Wars » Linux Magazine
Moreover, even if you assume a broad margin of error, the polls aren't even close. With all the various text editors available today, Vi and Vim continue to be the choice of over a third of users, while Emacs is well back in the pack, no longer a competitor for the most popular text editor.

https://www.quora.com/Are-there-more-Emacs-or-Vim-users
I believe Vim is actually more popular, but it's hard to find any real data on it. The best source I've seen is the annual StackOverflow developer survey where 15.2% of developers used Vim compared to a mere 3.2% for Emacs.

Oddly enough, the report noted that "Data scientists and machine learning developers are about 3 times more likely to use Emacs than any other type of developer," which is not necessarily what I would have expected.

[ed. NB: Vim still dominates overall.]

https://pinboard.in/u:nhaliday/b:6adc1b1ef4dc

Time To End The vi/Emacs Debate: https://cacm.acm.org/blogs/blog-cacm/226034-time-to-end-the-vi-emacs-debate/fulltext

Vim, Emacs and their forever war. Does it even matter any more?: https://blog.sourcerer.io/vim-emacs-and-their-forever-war-does-it-even-matter-any-more-697b1322d510
Like an episode of “Silicon Valley”, a discussion of Emacs vs. Vim used to have a polarizing effect that would guarantee a stimulating conversation, regardless of an engineer’s actual alignment. But nowadays, diehard Emacs and Vim users are getting much harder to find. Maybe I’m in the wrong orbit, but looking around today, I see that engineers are equally or even more likely to choose any one of a number of great (for any given definition of ‘great’) modern editors or IDEs such as Sublime Text, Visual Studio Code, Atom, IntelliJ (… or one of its siblings), Brackets, Visual Studio or Xcode, to name a few. It’s not surprising really — many top engineers weren’t even born when these editors were at version 1.0, and GUIs (for better or worse) hadn’t been invented.

...

… both forums have high traffic and up-to-the-minute comment and discussion threads. Some of the available statistics paint a reasonably healthy picture — Stackoverflow’s 2016 developer survey ranks Vim 4th out of 24 with 26.1% of respondents in the development environments category claiming to use it. Emacs came 15th with 5.2%. In combination, over 30% is, actually, quite impressive considering they’ve been around for several decades.

What’s odd, however, is that if you ask someone — say a random developer — to express a preference, the likelihood is that they will favor one or the other even if they have used neither in anger. Maybe the meme has spread so widely that all responses are now predominantly ritualistic, and represent something more fundamental than peoples’ mere preference for an editor? There’s a rather obvious political hypothesis waiting to be made — that Emacs is the leftist, socialist, centralized state, while Vim represents the right and the free market, specialization and capitalism red in tooth and claw.

How is Emacs/Vim used in companies like Google, Facebook, or Quora? Are there any libraries or tools they share in public?: https://www.quora.com/How-is-Emacs-Vim-used-in-companies-like-Google-Facebook-or-Quora-Are-there-any-libraries-or-tools-they-share-in-public
In Google there's a fair amount of vim and emacs. I would say at least every other engineer uses one or another.

Among Software Engineers, emacs seems to be more popular, about 2:1. Among Site Reliability Engineers, vim is more popular, about 9:1.
--
People use both at Facebook, with (in my opinion) slightly better tooling for Emacs than Vim. We share a master.emacs and master.vimrc file, which contains the bare essentials (like syntactic highlighting for the Hack language). We also share a Ctags file that's updated nightly with a cron script.

Beyond the essentials, there's a group for Emacs users at Facebook that provides tips, tricks, and major-modes created by people at Facebook. That's where Adam Hupp first developed his excellent mural-mode (ahupp/mural), which does for Ctags what ido did for file finding and buffer switching.
--
For emacs, it was very informal at Google. There wasn't a huge community of Emacs users at Google, so there wasn't much more than a wiki and a couple language styles matching Google's style guides.

https://trends.google.com/trends/explore?date=all&geo=US&q=%2Fm%2F07zh7,%2Fm%2F01yp0m

https://www.quora.com/Why-is-interest-in-Emacs-dropping
And it is still that. It’s just that emacs is no longer unique, and neither is Lisp.

Dynamically typed scripting languages with garbage collection are a dime a dozen now. Anybody in their right mind developing an extensible text editor today would just use python, ruby, lua, or JavaScript as the extension language and get all the power of Lisp combined with vibrant user communities and millions of lines of ready-made libraries that Stallman and Steele could only dream of in the 70s.

In fact, in many ways emacs and elisp have fallen behind: 40 years after Lambda, the Ultimate Imperative, elisp is still dynamically scoped, and it still doesn’t support multithreading — when I try to use dired to list the files on a slow NFS mount, the entire editor hangs just as thoroughly as it might have in the 1980s. And when I say “doesn’t support multithreading,” I don’t mean there is some other clever trick for continuing to do work while waiting on a system call, like asynchronous callbacks or something. There’s start-process which forks a whole new process, and that’s about it. It’s a concurrency model straight out of 1980s UNIX land.

But being essentially just a decent text editor has robbed emacs of much of its competitive advantage. In a world where every developer tool is scriptable with languages and libraries an order of magnitude more powerful than cranky old elisp, the reason to use emacs is not that it lets a programmer hit a button and evaluate the current expression interactively (which must have been absolutely amazing at one point in the past).

https://www.reddit.com/r/emacs/comments/bh5kk7/why_do_many_new_users_still_prefer_vim_over_emacs/

more general comparison, not just popularity:
Differences between Emacs and Vim: https://stackoverflow.com/questions/1430164/differences-between-Emacs-and-vim

https://www.reddit.com/r/emacs/comments/9hen7z/what_are_the_benefits_of_emacs_over_vim/

https://unix.stackexchange.com/questions/986/what-are-the-pros-and-cons-of-vim-and-emacs

https://www.quora.com/Why-is-Vim-the-programmers-favorite-editor
- Adrien Lucas Ecoffet,

Because it is hard to use. Really.

However, the second part of this sentence applies to just about every good editor out there: if you really learn Sublime Text, you will become super productive. If you really learn Emacs, you will become super productive. If you really learn Visual Studio… you get the idea.

Here’s the thing though, you never actually need to really learn your text editor… Unless you use vim.

...

For many people new to programming, this is the first time they have been a power user of… well, anything! And because they’ve been told how great Vim is, many of them will keep at it and actually become productive, not because Vim is particularly more productive than any other editor, but because it didn’t provide them with a way to not be productive.

They then go on to tell their friends how great Vim is, and their friends go on to become power users and tell their friends in turn, and so forth. All these people believe they became productive because they changed their text editor. Little do they realize that they became productive because their text editor changed them[1].

This is in no way a criticism of Vim. I myself was a beneficiary of such a phenomenon when I learned to type using the Dvorak layout: at that time, I believed that Dvorak would help you type faster. Now I realize the evidence is mixed and that Dvorak might not be much better than Qwerty. However, learning Dvorak forced me to develop good typing habits because I could no longer rely on looking at my keyboard (since I was still using a Qwerty physical keyboard), and this has made me a much more productive typist.

Technical Interview Performance by Editor/OS/Language: https://triplebyte.com/blog/technical-interview-performance-by-editor-os-language
[ed.: I'm guessing this is confounded to all hell.]

The #1 most common editor we see used in interviews is Sublime Text, with Vim close behind.

Emacs represents a fairly small market share today at just about a quarter the userbase of Vim in our interviews. This nicely matches the 4:1 ratio of Google Search Trends for the two editors.

...

Vim takes the prize here, but PyCharm and Emacs are close behind. We’ve found that users of these editors tend to pass our interview at an above-average rate.

On the other end of the spectrum is Eclipse: it appears that someone using either Vim or Emacs is more than twice as likely to pass our technical interview as an Eclipse user.

...

In this case, we find that the average Ruby, Swift, and C# users tend to be stronger, with Python and Javascript in the middle of the pack.

...

Here’s what happens after we select engineers to work with and send them to onsites:

[Python does best.]

There are no wild outliers here, but let’s look at the C++ segment. While C++ programmers have the most challenging time passing Triplebyte’s technical interview on average, the ones we choose to work with tend to have a relatively easier time getting offers at each onsite.

The Rise of Microsoft Visual Studio Code: https://triplebyte.com/blog/editor-report-the-rise-of-visual-studio-code
This chart shows the rates at which each editor's users pass our interview compared to the mean pass rate for all candidates. First, notice the preeminence of Emacs and Vim! Engineers who use these editors pass our interview at significantly higher rates than other engineers. And the effect size is not small. Emacs users pass our interview at a rate 50… [more]
news  linux  oss  tech  editors  devtools  tools  comparison  ranking  flux-stasis  trends  ubiquity  unix  increase-decrease  multi  q-n-a  qra  data  poll  stackex  sv  facebook  google  integration-extension  org:med  politics  stereotypes  coalitions  decentralized  left-wing  right-wing  chart  scale  time-series  distribution  top-n  list  discussion  ide  parsimony  intricacy  cost-benefit  tradeoffs  confounding  analysis  crosstab  pls  python  c(pp)  jvm  microsoft  golang  hmm  correlation  debate  critique  quora  contrarianism  ecosystem  DSL 
june 2019 by nhaliday
Interview with Donald Knuth | Interview with Donald Knuth | InformIT
Andrew Binstock and Donald Knuth converse on the success of open source, the problem with multicore architecture, the disappointing lack of interest in literate programming, the menace of reusable code, and that urban legend about winning a programming contest with a single compilation.

Reusable vs. re-editable code: https://hal.archives-ouvertes.fr/hal-01966146/document
- Konrad Hinsen

https://www.johndcook.com/blog/2008/05/03/reusable-code-vs-re-editable-code/
I think whether code should be editable or in “an untouchable black box” depends on the number of developers involved, as well as their talent and motivation. Knuth is a highly motivated genius working in isolation. Most software is developed by large teams of programmers with varying degrees of motivation and talent. I think the further you move away from Knuth along these three axes the more important black boxes become.
nibble  interview  giants  expert-experience  programming  cs  software  contrarianism  carmack  oss  prediction  trends  linux  concurrency  desktop  comparison  checking  debugging  stories  engineering  hmm  idk  algorithms  books  debate  flux-stasis  duplication  parsimony  best-practices  writing  documentation  latex  intricacy  structure  hardware  caching  workflow  editors  composition-decomposition  coupling-cohesion  exposition  technical-writing  thinking  cracker-prog  code-organizing  grokkability  multi  techtariat  commentary  pdf  reflection  essay  examples  python  data-science  libraries  grokkability-clarity 
june 2019 by nhaliday
What every computer scientist should know about floating-point arithmetic
Floating-point arithmetic is considered as esoteric subject by many people. This is rather surprising, because floating-point is ubiquitous in computer systems: Almost every language has a floating-point datatype; computers from PCs to supercomputers have floating-point accelerators; most compilers will be called upon to compile floating-point algorithms from time to time; and virtually every operating system must respond to floating-point exceptions such as overflow. This paper presents a tutorial on the aspects of floating-point that have a direct impact on designers of computer systems. It begins with background on floating-point representation and rounding error, continues with a discussion of the IEEE floating point standard, and concludes with examples of how computer system builders can better support floating point.

https://stackoverflow.com/questions/2729637/does-epsilon-really-guarantees-anything-in-floating-point-computations
"you must use an epsilon when dealing with floats" is a knee-jerk reaction of programmers with a superficial understanding of floating-point computations, for comparisons in general (not only to zero).

This is usually unhelpful because it doesn't tell you how to minimize the propagation of rounding errors, it doesn't tell you how to avoid cancellation or absorption problems, and even when your problem is indeed related to the comparison of two floats, it doesn't tell you what value of epsilon is right for what you are doing.
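To make this concrete, here is a minimal Python illustration (my own sketch, not from the linked answer) of why one fixed absolute epsilon cannot be right at every scale:

```python
import math

# Decimal fractions are not exactly representable in binary floating point:
a = 0.1 + 0.2
print(a == 0.3)          # False: a is actually 0.30000000000000004

# A fixed absolute epsilon is meaningless across magnitudes.
# Near 1e20, adjacent doubles are ~16384 apart, so |x - y| < 1e-9
# can only hold when x == y exactly:
eps = 1e-9
x = 1e20
y = x + 10000.0          # rounds to a neighboring double
print(abs(x - y) < eps)  # False, though x and y agree to ~16 significant digits

# A *relative* comparison behaves sensibly at any scale:
print(math.isclose(a, 0.3))  # True (default rel_tol=1e-9)
print(math.isclose(x, y))    # True
```

The point matches the quote: the right question is not "what epsilon?" but "relative to what, and why?"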

...

Regarding the propagation of rounding errors, there exists specialized analyzers that can help you estimate it, because it is a tedious thing to do by hand.

https://www.di.ens.fr/~cousot/projects/DAEDALUS/synthetic_summary/CEA/Fluctuat/index.html

This was part of HW1 of CS24:
https://en.wikipedia.org/wiki/Kahan_summation_algorithm
In particular, simply summing n numbers in sequence has a worst-case error that grows proportional to n, and a root mean square error that grows as √n for random inputs (the roundoff errors form a random walk).[2] With compensated summation, the worst-case error bound is independent of n, so a large number of values can be summed with an error that only depends on the floating-point precision.[2]
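The compensated-summation trick is compact enough to sketch in Python (for illustration only; Python's math.fsum already provides a correctly rounded sum):

```python
import math

def kahan_sum(xs):
    """Compensated (Kahan) summation: worst-case error independent of len(xs)."""
    total = 0.0
    c = 0.0                     # running compensation for lost low-order bits
    for x in xs:
        y = x - c               # re-inject the error captured last iteration
        t = total + y
        c = (t - total) - y     # algebraically zero; numerically, the roundoff
        total = t
    return total

def naive_sum(xs):
    s = 0.0
    for x in xs:
        s += x                  # plain left-to-right accumulation, O(eps*n) error
    return s

xs = [0.1] * 1_000_000
exact = math.fsum(xs)                # correctly rounded reference
print(abs(naive_sum(xs) - exact))    # ~1e-6: error grew with n
print(abs(kahan_sum(xs) - exact))    # a few ulps at most
```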

cf:
https://en.wikipedia.org/wiki/Pairwise_summation
In numerical analysis, pairwise summation, also called cascade summation, is a technique to sum a sequence of finite-precision floating-point numbers that substantially reduces the accumulated round-off error compared to naively accumulating the sum in sequence.[1] Although there are other techniques such as Kahan summation that typically have even smaller round-off errors, pairwise summation is nearly as good (differing only by a logarithmic factor) while having much lower computational cost—it can be implemented so as to have nearly the same cost (and exactly the same number of arithmetic operations) as naive summation.

In particular, pairwise summation of a sequence of n numbers x_n works by recursively breaking the sequence into two halves, summing each half, and adding the two sums: a divide and conquer algorithm. Its worst-case roundoff errors grow asymptotically as at most O(ε log n), where ε is the machine precision (assuming a fixed condition number, as discussed below).[1] In comparison, the naive technique of accumulating the sum in sequence (adding each x_i one at a time for i = 1, ..., n) has roundoff errors that grow at worst as O(εn).[1] Kahan summation has a worst-case error of roughly O(ε), independent of n, but requires several times more arithmetic operations.[1] If the roundoff errors are random, and in particular have random signs, then they form a random walk and the error growth is reduced to an average of O(ε √(log n)) for pairwise summation.[2]
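The recursive halving is a few lines in Python (a hedged sketch of the idea; the block-size cutoff of 128 is an arbitrary choice of mine, not from the article):

```python
import math

def pairwise_sum(xs, block=128):
    """Cascade/pairwise summation: O(eps log n) worst-case error
    vs O(eps n) for left-to-right accumulation."""
    if len(xs) <= block:            # naive summation for small runs (cheap)
        s = 0.0
        for x in xs:
            s += x
        return s
    mid = len(xs) // 2              # divide and conquer: sum halves, combine
    return pairwise_sum(xs[:mid], block) + pairwise_sum(xs[mid:], block)

xs = [0.1] * 1_000_000
exact = math.fsum(xs)
naive = 0.0
for x in xs:
    naive += x
print(abs(naive - exact))             # grows roughly linearly in n
print(abs(pairwise_sum(xs) - exact))  # far smaller, ~log n growth
```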

A very similar recursive structure of summation is found in many fast Fourier transform (FFT) algorithms, and is responsible for the same slow roundoff accumulation of those FFTs.[2][3]

https://eng.libretexts.org/Bookshelves/Electrical_Engineering/Book%3A_Fast_Fourier_Transforms_(Burrus)/10%3A_Implementing_FFTs_in_Practice/10.8%3A_Numerical_Accuracy_in_FFTs
However, these encouraging error-growth rates only apply if the trigonometric “twiddle” factors in the FFT algorithm are computed very accurately. Many FFT implementations, including FFTW and common manufacturer-optimized libraries, therefore use precomputed tables of twiddle factors calculated by means of standard library functions (which compute trigonometric constants to roughly machine precision). The other common method to compute twiddle factors is to use a trigonometric recurrence formula—this saves memory (and cache), but almost all recurrences have errors that grow as O(√n), O(n), or even O(n²), which lead to corresponding errors in the FFT.

...

There are, in fact, trigonometric recurrences with the same logarithmic error growth as the FFT, but these seem more difficult to implement efficiently; they require that a table of Θ(log n) values be stored and updated as the recurrence progresses. Instead, in order to gain at least some of the benefits of a trigonometric recurrence (reduced memory pressure at the expense of more arithmetic), FFTW includes several ways to compute a much smaller twiddle table, from which the desired entries can be computed accurately on the fly using a bounded number (usually <3) of complex multiplications. For example, instead of a twiddle table with n entries ω_n^k, FFTW can use two tables with Θ(√n) entries each, so that ω_n^k is computed by multiplying an entry in one table (indexed with the low-order bits of k) by an entry in the other table (indexed with the high-order bits of k).

[ed.: Nicholas Higham's "Accuracy and Stability of Numerical Algorithms" seems like a good reference for this kind of analysis.]
nibble  pdf  papers  programming  systems  numerics  nitty-gritty  intricacy  approximation  accuracy  types  sci-comp  multi  q-n-a  stackex  hmm  oly-programming  accretion  formal-methods  yak-shaving  wiki  reference  algorithms  yoga  ground-up  divide-and-conquer  fourier  books  tidbits  chart  caltech  nostalgia 
may 2019 by nhaliday
One week of bugs
If I had to guess, I'd say I probably work around hundreds of bugs in an average week, and thousands in a bad week. It's not unusual for me to run into a hundred new bugs in a single week. But I often get skepticism when I mention that I run into multiple new (to me) bugs per day, and that this is inevitable if we don't change how we write tests. Well, here's a log of one week of bugs, limited to bugs that were new to me that week. After a brief description of the bugs, I'll talk about what we can do to improve the situation. The obvious answer is to spend more effort on testing, but everyone already knows we should do that and no one does it. That doesn't mean it's hopeless, though.

...

Here's where I'm supposed to write an appeal to take testing more seriously and put real effort into it. But we all know that's not going to work. It would take 90k LOC of tests to get Julia to be as well tested as a poorly tested prototype (falsely assuming linear complexity in size). That's two person-years of work, not even including time to debug and fix bugs (which probably brings it closer to four or five years). Who's going to do that? No one. Writing tests is like writing documentation. Everyone already knows you should do it. Telling people they should do it adds zero information1.

Given that people aren't going to put any effort into testing, what's the best way to do it?

Property-based testing. Generative testing. Random testing. Concolic Testing (which was done long before the term was coined). Static analysis. Fuzzing. Statistical bug finding. There are lots of options. Some of them are actually the same thing because the terminology we use is inconsistent and buggy. I'm going to arbitrarily pick one to talk about, but they're all worth looking into.

...

There are a lot of great resources out there, but if you're just getting started, I found this description of types of fuzzers to be one of the most helpful (and simplest) things I've read.

John Regehr has a udacity course on software testing. I haven't worked through it yet (Pablo Torres just pointed to it), but given the quality of Dr. Regehr's writing, I expect the course to be good.

For more on my perspective on testing, there's this.

Everything's broken and nobody's upset: https://www.hanselman.com/blog/EverythingsBrokenAndNobodysUpset.aspx
https://news.ycombinator.com/item?id=4531549

https://hypothesis.works/articles/the-purpose-of-hypothesis/
From the perspective of a user, the purpose of Hypothesis is to make it easier for you to write better tests.

From my perspective as the primary author, that is of course also a purpose of Hypothesis. I write a lot of code, it needs testing, and the idea of trying to do that without Hypothesis has become nearly unthinkable.

But, on a large scale, the true purpose of Hypothesis is to drag the world kicking and screaming into a new and terrifying age of high quality software.

Software is everywhere. We have built a civilization on it, and it’s only getting more prevalent as more services move online and embedded and “internet of things” devices become cheaper and more common.

Software is also terrible. It’s buggy, it’s insecure, and it’s rarely well thought out.

This combination is clearly a recipe for disaster.

The state of software testing is even worse. It’s uncontroversial at this point that you should be testing your code, but it’s a rare codebase whose authors could honestly claim that they feel its testing is sufficient.

Much of the problem here is that it’s too hard to write good tests. Tests take up a vast quantity of development time, but they mostly just laboriously encode exactly the same assumptions and fallacies that the authors had when they wrote the code, so they miss exactly the same bugs that you missed when you wrote the code.
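The core idea is small enough to sketch. This toy tester (my own illustration, not the Hypothesis API; real Hypothesis adds strategies, shrinking, and a failure database) generates random inputs and checks an invariant instead of re-encoding the author's assumptions:

```python
import random

def check_property(prop, gen, trials=200, seed=0):
    """Minimal random property tester: returns a counterexample, or None."""
    rng = random.Random(seed)
    for _ in range(trials):
        x = gen(rng)
        if not prop(x):
            return x
    return None

def gen_list(rng):
    """Random small-integer lists as test inputs."""
    return [rng.randrange(10) for _ in range(rng.randrange(10))]

def broken_sort(xs):
    return list(set(xs))       # bug: silently drops duplicates

# Property: "sorting" should agree with a known-correct sort.
ce = check_property(lambda xs: broken_sort(xs) == sorted(xs), gen_list)
print(ce)                      # a counterexample, i.e. a list with a duplicate

# A true invariant survives the same battery of random inputs:
print(check_property(lambda xs: list(reversed(list(reversed(xs)))) == xs,
                     gen_list))  # None
```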

Preventing the Collapse of Civilization [video]: https://news.ycombinator.com/item?id=19945452
- Jonathan Blow

NB: DevGAMM is a game industry conference

- loss of technological knowledge (Antikythera mechanism, aqueducts, etc.)
- hardware driving most gains, not software
- software's actually less robust, often poorly designed and overengineered these days
- *list of bugs he's encountered recently*:
https://youtu.be/pW-SOdj4Kkk?t=1387
- knowledge of trivia becomes more than general, deep knowledge
- does at least acknowledge value of DRY, reusing code, abstraction saving dev time
techtariat  dan-luu  tech  software  error  list  debugging  linux  github  robust  checking  oss  troll  lol  aphorism  webapp  email  google  facebook  games  julia  pls  compilers  communication  mooc  browser  rust  programming  engineering  random  jargon  formal-methods  expert-experience  prof  c(pp)  course  correctness  hn  commentary  video  presentation  carmack  pragmatic  contrarianism  pessimism  sv  unix  rhetoric  critique  worrydream  hardware  performance  trends  multiplicative  roots  impact  comparison  history  iron-age  the-classics  mediterranean  conquest-empire  gibbon  technology  the-world-is-just-atoms  flux-stasis  increase-decrease  graphics  hmm  idk  systems  os  abstraction  intricacy  worse-is-better/the-right-thing  build-packaging  microsoft  osx  apple  reflection  assembly  things  knowledge  detail-architecture  thick-thin  trivia  info-dynamics  caching  frameworks  generalization  systematic-ad-hoc  universalism-particularism  analytical-holistic  structure  tainter  libraries  tradeoffs  prepping  threat-modeling  network-structure  writing  risk  local-glob 
may 2019 by nhaliday
Is backing up a MySQL database in Git a good idea? - Software Engineering Stack Exchange
*no: list of alternatives*

https://stackoverflow.com/questions/115369/do-you-use-source-control-for-your-database-items
Top 2 answers contradict each other but both agree that you should at least version the schema and other scripts.

My impression is that the guy linked in the accepted answer is arguing for a minority practice.
q-n-a  stackex  programming  engineering  dbs  vcs  git  debate  critique  backup  best-practices  flux-stasis  nitty-gritty  gotchas  init  advice  code-organizing  multi  hmm  idk  contrarianism  rhetoric  links  system-design 
may 2019 by nhaliday
Measuring fitness heritability: Life history traits versus morphological traits in humans - Gavrus‐Ion - 2017 - American Journal of Physical Anthropology - Wiley Online Library
Traditional interpretation of Fisher's Fundamental Theorem of Natural Selection is that life history traits (LHT), which are closely related with fitness, show lower heritabilities, whereas morphological traits (MT) are less related with fitness and they are expected to show higher heritabilities.

...

LHT heritabilities ranged from 2.3 to 34% for the whole sample, with men showing higher heritabilities (4–45%) than women (0‐23.7%). Overall, MT presented higher heritability values than most of LHT, ranging from 0 to 40.5% in craniofacial indices, and from 13.8 to 32.4% in craniofacial angles. LHT showed considerable additive genetic variance values, similar to MT, but also high environmental variance values, and most of them presenting a higher evolutionary potential than MT.
study  biodet  behavioral-gen  population-genetics  hmm  contrarianism  levers  inference  variance-components  fertility  life-history  demographics  embodied  prediction  contradiction  empirical  sib-study 
may 2019 by nhaliday
haskell - Using -with-rtsopts ghc option as a pragma - Stack Overflow
When you specify that pragma at the top of the file, this is instead what happens (with ghc --make algo.hs):

ghc -c algo.hs -rtsopts -with-rtsopts=-K32M
ghc -o algo -package somepackage algo.o
The OPTIONS_GHC pragma tells the compiler about options to add when compiling that specific module into an object file. Because -rtsopts is a linker option (it tells GHC to link in a different set of command-line handling stuff), you can't specify it when compiling an object file. You must specify it when linking, and such options cannot be specified in a module header.
q-n-a  stackex  programming  haskell  functional  gotchas  hmm  oly  space-complexity  build-packaging 
may 2019 by nhaliday
c++ - Pointer to class data member "::*" - Stack Overflow
[ed.: First encountered in emil-e/rapidcheck (gen::set).]

Is this checked statically? That is, does the compiler allow me to pass an arbitrary value or does it check that every passed pointer to member pFooMember is created using &T::*fooMember? I think it's feasible to do that?
q-n-a  stackex  programming  pls  c(pp)  gotchas  weird  trivia  hmm  explanation  types  oop  static-dynamic  direct-indirect  atoms  lexical 
may 2019 by nhaliday
maintenance - Why do dynamic languages make it more difficult to maintain large codebases? - Software Engineering Stack Exchange
Now here is the key point I have been building up to: there is a strong correlation between a language being dynamically typed and a language also lacking all the other facilities that make lowering the cost of maintaining a large codebase easier, and that is the key reason why it is more difficult to maintain a large codebase in a dynamic language. And similarly there is a correlation between a language being statically typed and having facilities that make programming in the larger easier.
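A tiny illustration of the point using Python's optional static layer (my own example; gradual-typing checkers like mypy recover some of these facilities for a dynamic language):

```python
def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by percent (0-100)."""
    return price * (1.0 - percent / 100.0)

# A checker such as mypy flags this call before the program ever runs:
#   apply_discount("19.99", 10)   # error: str is not compatible with float
# In a fully dynamic codebase the mistake surfaces only at runtime,
# possibly in a distant module, which is exactly what makes large-scale
# maintenance expensive.
print(apply_discount(200.0, 25.0))  # → 150.0
```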
programming  worrydream  plt  hmm  comparison  pls  carmack  techtariat  types  engineering  productivity  pro-rata  input-output  correlation  best-practices  composition-decomposition  error  causation  confounding  devtools  jvm  scala  open-closed  cost-benefit  static-dynamic  design  system-design 
may 2019 by nhaliday
Delta debugging - Wikipedia
good overview with examples: https://www.csm.ornl.gov/~sheldon/bucket/Automated-Debugging.pdf

Not as useful for my usecases (mostly contest programming) as QuickCheck. Input is generally pretty structured and I don't have a long history of code in VCS. And when I do have the latter git-bisect is probably enough.
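For reference, the core ddmin loop fits in a few lines. This is a simplified sketch (one-chunk-removal only; Zeller's full algorithm also tests the chunks themselves and manages granularity differently):

```python
def ddmin(inp, test):
    """Simplified delta debugging: shrink inp while test(inp) stays True."""
    assert test(inp), "must start from a failing input"
    n = 2                                       # current granularity
    while len(inp) >= 2:
        chunk = max(1, len(inp) // n)
        reduced = False
        for i in range(0, len(inp), chunk):
            candidate = inp[:i] + inp[i + chunk:]   # drop one chunk
            if candidate and test(candidate):
                inp = candidate                 # failure persists: keep the cut
                n = max(n - 1, 2)
                reduced = True
                break
        if not reduced:
            if chunk == 1:                      # already at finest granularity
                break
            n = min(n * 2, len(inp))            # refine and retry
    return inp

# Toy "bug": the program crashes iff the input has both an "a" and a "b".
crashes = lambda s: "a" in s and "b" in s
print(ddmin("xxaxxbxx", crashes))               # → ab
```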

good book tho: http://www.whyprogramsfail.com/toc.php
WHY PROGRAMS FAIL: A Guide to Systematic Debugging
wiki  reference  programming  systems  debugging  c(pp)  python  tools  devtools  links  hmm  formal-methods  divide-and-conquer  vcs  git  search  yak-shaving  pdf  white-paper  multi  examples  stories  books  unit  caltech  recommendations  advanced  correctness 
may 2019 by nhaliday
Science - Wikipedia
In Northern Europe, the new technology of the printing press was widely used to publish many arguments, including some that disagreed widely with contemporary ideas of nature. René Descartes and Francis Bacon published philosophical arguments in favor of a new type of non-Aristotelian science. Descartes emphasized individual thought and argued that mathematics rather than geometry should be used in order to study nature. Bacon emphasized the importance of experiment over contemplation. Bacon further questioned the Aristotelian concepts of formal cause and final cause, and promoted the idea that science should study the laws of "simple" natures, such as heat, rather than assuming that there is any specific nature, or "formal cause," of each complex type of thing. This new modern science began to see itself as describing "laws of nature". This updated approach to studies in nature was seen as mechanistic. Bacon also argued that science should aim for the first time at practical inventions for the improvement of all human life.

Age of Enlightenment

...

During this time, the declared purpose and value of science became producing wealth and inventions that would improve human lives, in the materialistic sense of having more food, clothing, and other things. In Bacon's words, "the real and legitimate goal of sciences is the endowment of human life with new inventions and riches", and he discouraged scientists from pursuing intangible philosophical or spiritual ideas, which he believed contributed little to human happiness beyond "the fume of subtle, sublime, or pleasing speculation".[72]
article  wiki  reference  science  philosophy  letters  history  iron-age  mediterranean  the-classics  medieval  europe  the-great-west-whale  early-modern  ideology  telos-atelos  ends-means  new-religion  weird  enlightenment-renaissance-restoration-reformation  culture  the-devil  anglo  big-peeps  giants  religion  theos  tip-of-tongue  hmm  truth  dirty-hands  engineering  roots  values  formal-values  quotes  causation  forms-instances  technology  logos 
august 2018 by nhaliday
Commentary: Predictions and the brain: how musical sounds become rewarding
https://twitter.com/AOEUPL_PHE/status/1004807377076604928
https://archive.is/FgNHG
did i just learn something big?

Prerecorded music has ABSOLUTELY NO SURVIVAL reward. Zero. It does not help with procreation (well, unless you're the one making the music, then you get endless sex) and it does not help with individual survival. As such, one must seriously self-test (n=1) whether prerecorded music actually holds you back.
If you're reading this and you try no music for 2 weeks and fail, hit me up. I have some mind-blowing stuff to show you on how you can control others with music.
study  psychology  cog-psych  yvain  ssc  models  speculation  music  art  aesthetics  evolution  evopsych  accuracy  meta:prediction  neuro  neuro-nitgrit  neurons  error  roots  intricacy  hmm  wire-guided  machiavelli  dark-arts  predictive-processing  reinforcement  multi  science-anxiety 
june 2018 by nhaliday
What's Wrong With Growing Blobs of Brain Tissue? - The Atlantic
These increasingly complex organoids aren't conscious—but we might not know when they cross that line.

I don't know why you would even *want* to do this tbh... What's the application?
news  org:mag  popsci  hmm  :/  dignity  morality  ethics  formal-values  philosophy  biotech  neuro  dennett  within-without  weird  wtf  ed-yong  brain-scan  medicine  science 
april 2018 by nhaliday
My March 28 talk at MIT - Marginal REVOLUTION
What happens when a simulated system becomes more real than the system itself?  Will the internet become “more real” than the world of ideas it is mirroring? Do we academics live in a simulacra?  If the “alt right” exists mainly on the internet, does that make it more or less powerful?  Do all innovations improve system quality, and if so why is a lot of food worse than before and home design was better in 1910-1930?  How does the world of ideas fit into this picture?
econotariat  marginal-rev  links  quotes  presentation  hmm  simulation  realness  internet  academia  gnon  🐸  subculture  innovation  food  trends  architecture  history  mostly-modern  pre-ww2 
march 2018 by nhaliday
Antinomia Imediata – experiments in a reaction from the left
https://antinomiaimediata.wordpress.com/lrx/
So, what is the Left Reaction? First of all, it’s reaction: opposition to the modern rationalist establishment, the Cathedral. It opposes the universalist Jacobin program of global government, favoring a fractured geopolitics organized through long-evolved complex systems. It’s profoundly anti-socialist and anti-communist, favoring market economy and individualism. It abhors tribalism and seeks a realistic plan for dismantling it (primarily informed by HBD and HBE). It looks at modernity as a degenerative ratchet, whose only way out is intensification (hence clinging to crypto-marxist market-driven acceleration).

How can any of this still be in the *Left*? It defends equality of power, i.e. freedom. This radical understanding of liberty is deeply rooted in leftist tradition and has been consistently abhorred by the Right. LRx is not democratic, is not socialist, is not progressive and is not even liberal (in its current, American use). But it defends equality of power. Its utopia is individual sovereignty. Its method is paleo-agorism. The anti-hierarchy of hunter-gatherer nomads is its understanding of the only realistic objective of equality.

...

In more cosmic terms, it seeks only to fulfill the Revolution’s side in the left-right intelligence pump: mutation or creation of paths. Proudhon’s antinomy is essentially about this: the collective force of the socius, evinced in moral standards and social organization vs the creative force of the individuals, that constantly revolutionize and disrupt the social body. The interplay of these forces create reality (it’s a metaphysics indeed): the Absolute (socius) builds so that the (individualistic) Revolution can destroy so that the Absolute may adapt, and then repeat. The good old formula of ‘solve et coagula’.

Ultimately, if the Neoreaction promises eternal hell, the LRx sneers “but Satan is with us”.

https://antinomiaimediata.wordpress.com/2016/12/16/a-statement-of-principles/
Liberty is to be understood as the ability and right of all sentient beings to dispose of their persons and the fruits of their labor, and nothing else, as they see fit. This stems from their self-awareness and their ability to control and choose the content of their actions.

...

Equality is to be understood as the state of no imbalance of power, that is, of no subjection to another sentient being. This stems from their universal ability for empathy, and from their equal ability for reason.

...

It is important to notice that, contrary to usual statements of these two principles, my standpoint is that Liberty and Equality here are not merely compatible, meaning they could coexist in some possible universe, but rather they are two sides of the same coin, complementary and interdependent. There can be NO Liberty where there is no Equality, for the imbalance of power, the state of subjection, will render sentient beings unable to dispose of their persons and the fruits of their labor[1], and it will limit their ability to choose over their rightful jurisdiction. Likewise, there can be NO Equality without Liberty, for restraining sentient beings’ ability to choose and dispose of their persons and fruits of labor will render some more powerful than the rest, and establish a state of subjection.

https://antinomiaimediata.wordpress.com/2017/04/18/flatness/
equality is the founding principle of (and ultimately indistinguishable from) freedom. of course, it’s only in one specific sense of “equality” that this sentence is true.

to try and eliminate the bullshit, let’s turn to networks again:

any node’s degrees of freedom are the number of nodes it is connected to in a network. freedom is maximum when the network is symmetrically connected, i.e., when all nodes are connected to each other and thus there is no topographical hierarchy (middlemen) – in other words, flatness.

in this understanding, the maximization of freedom is the maximization of entropy production, that is, of intelligence. As Land puts it:

https://antinomiaimediata.wordpress.com/category/philosophy/mutualism/
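The degree-counting claim above can be made concrete with a toy computation; the star-vs-complete-graph comparison and all names below are my own illustration, not from the post:

```python
from itertools import combinations

def degrees(n, edges):
    """Return each node's degree ("degrees of freedom" in the post's sense)."""
    deg = {v: 0 for v in range(n)}
    for a, b in edges:
        deg[a] += 1
        deg[b] += 1
    return deg

n = 5
# star: everything routed through middleman node 0 (hierarchy)
star = [(0, i) for i in range(1, n)]
# complete graph: all nodes connected to each other (flatness)
flat = list(combinations(range(n), 2))

print(degrees(n, star))  # node 0 has degree 4, every other node degree 1
print(degrees(n, flat))  # every node has degree n-1 = 4: freedom maximal and equal
```

In the star graph, freedom is concentrated in the middleman; in the complete graph it is both maximal and identical for every node, which is the sense in which flatness equates freedom with equality.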
gnon  blog  stream  politics  polisci  ideology  philosophy  land  accelerationism  left-wing  right-wing  paradox  egalitarianism-hierarchy  civil-liberty  power  hmm  revolution  analytical-holistic  mutation  selection  individualism-collectivism  tribalism  us-them  modernity  multi  tradeoffs  network-structure  complex-systems  cybernetics  randy-ayndy  insight  contrarianism  metameta  metabuch  characterization  cooperate-defect  n-factor  altruism  list  coordination  graphs  visual-understanding  cartoons  intelligence  entropy-like  thermo  information-theory  order-disorder  decentralized  distribution  degrees-of-freedom  analogy  graph-theory  extrema  evolution  interdisciplinary  bio  differential  geometry  anglosphere  optimate  nascent-state  deep-materialism  new-religion  cool  mystic  the-classics  self-interest  interests  reason  volo-avolo  flux-stasis  invariance  government  markets  paying-rent  cost-benefit  peace-violence  frontier  exit-voice  nl-and-so-can-you  war  track-record  usa  history  mostly-modern  world-war  military  justice  protestant-cathol 
march 2018 by nhaliday
Prisoner's dilemma - Wikipedia
caveat to result below:
An extension of the IPD is an evolutionary stochastic IPD, in which the relative abundance of particular strategies is allowed to change, with more successful strategies relatively increasing. This process may be accomplished by having less successful players imitate the more successful strategies, or by eliminating less successful players from the game, while multiplying the more successful ones. It has been shown that unfair ZD strategies are not evolutionarily stable. The key intuition is that an evolutionarily stable strategy must not only be able to invade another population (which extortionary ZD strategies can do) but must also perform well against other players of the same type (which extortionary ZD players do poorly, because they reduce each other's surplus).[14]

Theory and simulations confirm that beyond a critical population size, ZD extortion loses out in evolutionary competition against more cooperative strategies, and as a result, the average payoff in the population increases when the population is bigger. In addition, there are some cases in which extortioners may even catalyze cooperation by helping to break out of a face-off between uniform defectors and win–stay, lose–switch agents.[8]
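The key intuition above — an evolutionarily stable strategy must also perform well against its own type — already shows up in a minimal memory-one simulation. The strategy names and the textbook payoffs T, R, P, S = 5, 3, 1, 0 are my own illustration, not from the article:

```python
# Standard PD payoffs keyed by (my move, their move): C = cooperate, D = defect
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def play(strat_a, strat_b, rounds=100):
    """Average per-round payoffs when two memory-one strategies meet."""
    last_a, last_b = "C", "C"  # both treated as having cooperated before round 1
    total_a = total_b = 0
    for _ in range(rounds):
        move_a, move_b = strat_a(last_b), strat_b(last_a)
        total_a += PAYOFF[(move_a, move_b)]
        total_b += PAYOFF[(move_b, move_a)]
        last_a, last_b = move_a, move_b
    return total_a / rounds, total_b / rounds

tit_for_tat = lambda their_last: their_last
always_defect = lambda their_last: "D"

print(play(tit_for_tat, tit_for_tat))      # (3.0, 3.0): cooperators thrive among their own type
print(play(always_defect, always_defect))  # (1.0, 1.0): defectors immiserate each other
```

A population of tit-for-tat players earns R = 3 per round from each other, while a population of defectors (or mutual extortioners, analogously) grinds down to P = 1, which is why exploitative strategies fail once they must face copies of themselves.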

https://alfanl.com/2018/04/12/defection/
Nature boils down to a few simple concepts.

Haters will point out that I oversimplify. The haters are wrong. I am good at saying a lot with few words. Nature indeed boils down to a few simple concepts.

In life, you can either cooperate or defect.

Used to be that defection was the dominant strategy, say in the time when the Roman empire started to crumble. Everybody complained about everybody and in the end nothing got done. Then came Jesus, who told people to be loving and cooperative, and boom: 1800 years later we get the industrial revolution.

Because of Jesus we now find ourselves in a situation where cooperation is the dominant strategy. A normie engages in a ton of cooperation: with the tax collector who wants more and more of his money, with schools who want more and more of his kid’s time, with media who wants him to repeat more and more party lines, with the Zeitgeist of the Collective Spirit of the People’s Progress Towards a New Utopia. Essentially, our normie is cooperating himself into a crumbling Western empire.

Turns out that if everyone blindly cooperates, parasites sprout up like weeds until defection once again becomes the standard.

The point of a post-Christian religion is to once again create conditions for the kind of cooperation that led to the industrial revolution. This necessitates throwing out undead Christianity: you do not blindly cooperate. You cooperate with people that cooperate with you, you defect on people that defect on you. Christianity mixed with Darwinism. God and Gnon meet.

This also means we re-establish spiritual hierarchy, which, like regular hierarchy, is a prerequisite for cooperation. It is this hierarchical cooperation that turns a household into a force to be reckoned with, that allows a group of men to unite as a front against their enemies, that allows a tribe to conquer the world. Remember: Scientology bullied the Cathedral’s tax department into submission.

With a functioning hierarchy, men still gossip, lie and scheme, but they will do so in whispers behind closed doors. In your face they cooperate and contribute to the group’s wellbeing because incentives are such that contributing to group wellbeing heightens status.

Without a functioning hierarchy, men gossip, lie and scheme, but they do so in your face, and they tell you that you are positively deluded for accusing them of gossiping, lying and scheming. Seeds will not sprout in such ground.

Spiritual dominance is established in the same way any sort of dominance is established: fought for, taken. But the fight is ritualistic. You can’t force spiritual dominance if no one listens; if you are silenced, the ritual is not allowed to happen.

If one of our priests is forbidden from establishing spiritual dominance, that is a sure sign an enemy priest is in better control and has a vested interest in preventing you from establishing spiritual dominance.

They defect on you, you defect on them. Let them suffer the consequences of enemy priesthood, among others characterized by the annoying tendency that very little is said with very many words.

https://contingentnotarbitrary.com/2018/04/14/rederiving-christianity/
To recap, we started with a secular definition of Logos and noted that its telos is existence. Given human nature, game theory and the power of cooperation, the highest expression of that telos is freely chosen universal love, tempered by constant vigilance against defection while maintaining compassion for the defectors and forgiving those who repent. In addition, we must know the telos in order to fulfill it.

In Christian terms, looks like we got over half of the Ten Commandments (know Logos for the First, don’t defect or tempt yourself to defect for the rest), the importance of free will, the indestructibility of evil (group cooperation vs individual defection), loving the sinner and hating the sin (with defection as the sin), forgiveness (with conditions), and love and compassion toward all, assuming only secular knowledge and that it’s good to exist.

Iterated Prisoner's Dilemma is an Ultimatum Game: http://infoproc.blogspot.com/2012/07/iterated-prisoners-dilemma-is-ultimatum.html
The history of IPD shows that bounded cognition prevented the dominant strategies from being discovered for over 60 years, despite significant attention from game theorists, computer scientists, economists, evolutionary biologists, etc. Press and Dyson have shown that IPD is effectively an ultimatum game, which is very different from the Tit for Tat stories told by generations of people who worked on IPD (Axelrod, Dawkins, etc., etc.).

...

For evolutionary biologists: Dyson clearly thinks this result has implications for multilevel (group vs individual selection):
... Cooperation loses and defection wins. The ZD strategies confirm this conclusion and make it sharper. ... The system evolved to give cooperative tribes an advantage over non-cooperative tribes, using punishment to give cooperation an evolutionary advantage within the tribe. This double selection of tribes and individuals goes way beyond the Prisoners' Dilemma model.

implications for fractionalized Europe vis-a-vis unified China?

and more broadly does this just imply we're doomed in the long run RE: cooperation, morality, the "good society", so on...? war and group-selection is the only way to get a non-crab bucket civilization?
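The Press–Dyson result is concrete enough to verify numerically. A minimal sketch, under the standard payoffs T, R, P, S = 5, 3, 1, 0: the memory-one strategy p = (11/13, 1/2, 7/26, 0) satisfies their zero-determinant extortion condition with extortion factor χ = 3, so in the stationary state it enforces s_X − P = 3(s_Y − P) against any memory-one opponent. The opponent vector below is an arbitrary choice of mine:

```python
import numpy as np

# States from X's point of view, ordered (X's last move, Y's last move): cc, cd, dc, dd
SX = np.array([3.0, 0.0, 5.0, 1.0])  # X's payoffs (R, S, T, P)
SY = np.array([3.0, 5.0, 0.0, 1.0])  # Y's payoffs

def stationary_payoffs(p, q):
    """Stationary per-round payoffs for memory-one strategies p (X) and q (Y).
    p[i] = P(X cooperates | state i); q is given from Y's point of view,
    so states cd and dc swap when reading Y's cooperation probability."""
    qx = np.array([q[0], q[2], q[1], q[3]])  # q re-indexed into X's state order
    M = np.zeros((4, 4))
    for i in range(4):
        pc, qc = p[i], qx[i]
        M[i] = [pc * qc, pc * (1 - qc), (1 - pc) * qc, (1 - pc) * (1 - qc)]
    # stationary distribution via power iteration (chain is ergodic for generic q)
    v = np.full(4, 0.25)
    for _ in range(10000):
        v = v @ M
    v /= v.sum()
    return v @ SX, v @ SY

extort3 = [11 / 13, 1 / 2, 7 / 26, 0.0]  # extortionate ZD strategy, chi = 3
opponent = [0.7, 0.2, 0.9, 0.4]          # arbitrary memory-one opponent

sx, sy = stationary_payoffs(extort3, opponent)
print(sx - 1, 3 * (sy - 1))  # equal: the extortioner enforces s_X - P = 3 (s_Y - P)
```

Whatever the opponent does, the extortioner claims three times the opponent's share of the surplus over P — the ultimatum-game structure hiding inside IPD.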

Iterated Prisoner’s Dilemma contains strategies that dominate any evolutionary opponent:
http://www.pnas.org/content/109/26/10409.full
http://www.pnas.org/content/109/26/10409.full.pdf
https://www.edge.org/conversation/william_h_press-freeman_dyson-on-iterated-prisoners-dilemma-contains-strategies-that

https://en.wikipedia.org/wiki/Ultimatum_game

analogy for ultimatum game: the state gives the demos a take-it-or-leave-it bargain, and...if the demos refuses...violence?

The nature of human altruism: http://sci-hub.tw/https://www.nature.com/articles/nature02043
- Ernst Fehr & Urs Fischbacher

Some of the most fundamental questions concerning our evolutionary origins, our social relations, and the organization of society are centred around issues of altruism and selfishness. Experimental evidence indicates that human altruism is a powerful force and is unique in the animal world. However, there is much individual heterogeneity and the interaction between altruists and selfish individuals is vital to human cooperation. Depending on the environment, a minority of altruists can force a majority of selfish individuals to cooperate or, conversely, a few egoists can induce a large number of altruists to defect. Current gene-based evolutionary theories cannot explain important patterns of human altruism, pointing towards the importance of both theories of cultural evolution as well as gene–culture co-evolution.

...

Why are humans so unusual among animals in this respect? We propose that quantitatively, and probably even qualitatively, unique patterns of human altruism provide the answer to this question. Human altruism goes far beyond that which has been observed in the animal world. Among animals, fitness-reducing acts that confer fitness benefits on other individuals are largely restricted to kin groups; despite several decades of research, evidence for reciprocal altruism in pair-wise repeated encounters4,5 remains scarce6–8. Likewise, there is little evidence so far that individual reputation building affects cooperation in animals, which contrasts strongly with what we find in humans. If we randomly pick two human strangers from a modern society and give them the chance to engage in repeated anonymous exchanges in a laboratory experiment, there is a high probability that reciprocally altruistic behaviour will emerge spontaneously9,10.

However, human altruism extends far beyond reciprocal altruism and reputation-based cooperation, taking the form of strong reciprocity11,12. Strong reciprocity is a combination of altruistic rewarding, which is a predisposition to reward others for cooperative, norm-abiding behaviours, and altruistic punishment, which is a propensity to impose sanctions on others for norm violations. Strong reciprocators bear the cost of rewarding or punishing even if they gain no individual economic benefit whatsoever from their acts. In contrast, reciprocal altruists, as they have been defined in the biological literature4,5, reward and punish only if this is in their long-term self-interest. Strong reciprocity thus constitutes a powerful incentive for cooperation even in non-repeated interactions and when reputation gains are absent, because strong reciprocators will reward those who cooperate and punish those who defect.

...

We will show that the interaction between selfish and strongly reciprocal … [more]
concept  conceptual-vocab  wiki  reference  article  models  GT-101  game-theory  anthropology  cultural-dynamics  trust  cooperate-defect  coordination  iteration-recursion  sequential  axelrod  discrete  smoothness  evolution  evopsych  EGT  economics  behavioral-econ  sociology  new-religion  deep-materialism  volo-avolo  characterization  hsu  scitariat  altruism  justice  group-selection  decision-making  tribalism  organizing  hari-seldon  theory-practice  applicability-prereqs  bio  finiteness  multi  history  science  social-science  decision-theory  commentary  study  summary  giants  the-trenches  zero-positive-sum  🔬  bounded-cognition  info-dynamics  org:edge  explanation  exposition  org:nat  eden  retention  long-short-run  darwinian  markov  equilibrium  linear-algebra  nitty-gritty  competition  war  explanans  n-factor  europe  the-great-west-whale  occident  china  asia  sinosphere  orient  decentralized  markets  market-failure  cohesion  metabuch  stylized-facts  interdisciplinary  physics  pdf  pessimism  time  insight  the-basilisk  noblesse-oblige  the-watchers  ideas  l 
march 2018 by nhaliday
China’s Ideological Spectrum
We find that public preferences are weakly constrained, and the configuration of preferences is multidimensional, but the latent traits of these dimensions are highly correlated. Those who prefer authoritarian rule are more likely to support nationalism, state intervention in the economy, and traditional social values; those who prefer democratic institutions and values are more likely to support market reforms but less likely to be nationalistic and less likely to support traditional social values. This latter set of preferences appears more in provinces with higher levels of development and among wealthier and better-educated respondents.

Enlightened One-Party Rule? Ideological Differences between Chinese Communist Party Members and the Mass Public: https://journals.sagepub.com/doi/abs/10.1177/1065912919850342
A popular view of nondemocratic regimes is that they draw followers mainly from those with an illiberal, authoritarian mind-set. We challenge this view by arguing that there exist a different class of autocracies that rule with a relatively enlightened base. Leveraging multiple nationally representative surveys from China over the past decade, we substantiate this claim by estimating and comparing the ideological preferences of Chinese Communist Party members and ordinary citizens. We find that party members on average hold substantially more modern and progressive views than the public on issues such as gender equality, political pluralism, and openness to international exchange. We also explore two mechanisms that may account for this party–public value gap—selection and socialization. We find that while education-based selection is the most dominant mechanism overall, socialization also plays a role, especially among older and less educated party members.

https://twitter.com/chenchenzh/status/1140929230072623104
https://archive.is/ktcOY
Does this control for wealth and education?
--
Perhaps about half the best-educated youth joined the party.
pdf  study  economics  polisci  sociology  politics  ideology  coalitions  china  asia  things  phalanges  dimensionality  degrees-of-freedom  markets  democracy  capitalism  communism  authoritarianism  government  leviathan  tradition  values  correlation  exploratory  nationalism-globalism  heterodox  sinosphere  multi  antidemos  class  class-warfare  enlightenment-renaissance-restoration-reformation  left-wing  egalitarianism-hierarchy  gender  contrarianism  hmm  regularizer  poll  roots  causation  endogenous-exogenous  selection  network-structure  education  twitter  social  commentary  critique  backup 
march 2018 by nhaliday
Adam Smith, David Hume, Liberalism, and Esotericism - Call for Papers - Elsevier
https://twitter.com/davidmanheim/status/963071765995032576
https://archive.is/njT4P
A very good economics journal--famously an outlet for rigorous, outside the box thinking--is publishing a special issue on hidden meanings in the work of two of the world's greatest thinkers.

Another sign the new Straussian age is upon us: Bayesians update accordingly!
big-peeps  old-anglo  economics  hmm  roots  politics  ideology  political-econ  philosophy  straussian  history  early-modern  britain  anglo  speculation  questions  events  multi  twitter  social  commentary  discussion  backup  econotariat  garett-jones  spearhead 
february 2018 by nhaliday
Information Processing: US Needs a National AI Strategy: A Sputnik Moment?
FT podcasts on US-China competition and AI: http://infoproc.blogspot.com/2018/05/ft-podcasts-on-us-china-competition-and.html

A new recommended career path for effective altruists: China specialist: https://80000hours.org/articles/china-careers/
Our rough guess is that it would be useful for there to be at least ten people in the community with good knowledge in this area within the next few years.

By “good knowledge” we mean they’ve spent at least 3 years studying these topics and/or living in China.

We chose ten because that would be enough for several people to cover each of the major areas listed (e.g. 4 within AI, 2 within biorisk, 2 within foreign relations, 1 in another area).

AI Policy and Governance Internship: https://www.fhi.ox.ac.uk/ai-policy-governance-internship/

https://www.fhi.ox.ac.uk/deciphering-chinas-ai-dream/
https://www.fhi.ox.ac.uk/wp-content/uploads/Deciphering_Chinas_AI-Dream.pdf
Deciphering China’s AI Dream
The context, components, capabilities, and consequences of
China’s strategy to lead the world in AI

Europe’s AI delusion: https://www.politico.eu/article/opinion-europes-ai-delusion/
Brussels is failing to grasp threats and opportunities of artificial intelligence.
By BRUNO MAÇÃES

When the computer program AlphaGo beat the Chinese professional Go player Ke Jie in a three-part match, it didn’t take long for Beijing to realize the implications.

If algorithms can already surpass the abilities of a master Go player, it can’t be long before they will be similarly supreme in the activity to which the classic board game has always been compared: war.

As I’ve written before, the great conflict of our time is about who can control the next wave of technological development: the widespread application of artificial intelligence in the economic and military spheres.

...

If China’s ambitions sound plausible, that’s because the country’s achievements in deep learning are so impressive already. After Microsoft announced that its speech recognition software surpassed human-level language recognition in October 2016, Andrew Ng, then head of research at Baidu, tweeted: “We had surpassed human-level Chinese recognition in 2015; happy to see Microsoft also get there for English less than a year later.”

...

One obvious advantage China enjoys is access to almost unlimited pools of data. The machine-learning technologies boosting the current wave of AI expansion are as good as the amount of data they can use. That could be the number of people driving cars, photos labeled on the internet or voice samples for translation apps. With 700 or 800 million Chinese internet users and fewer data protection rules, China is as rich in data as the Gulf States are in oil.

How can Europe and the United States compete? They will have to be commensurately better in developing algorithms and computer power. Sadly, Europe is falling behind in these areas as well.

...

Chinese commentators have embraced the idea of a coming singularity: the moment when AI surpasses human ability. At that point a number of interesting things happen. First, future AI development will be conducted by AI itself, creating exponential feedback loops. Second, humans will become useless for waging war. At that point, the human mind will be unable to keep pace with robotized warfare. With advanced image recognition, data analytics, prediction systems, military brain science and unmanned systems, devastating wars might be waged and won in a matter of minutes.

...

The argument in the new strategy is fully defensive. It first considers how AI raises new threats and then goes on to discuss the opportunities. The EU and Chinese strategies follow opposite logics. Already on its second page, the text frets about the legal and ethical problems raised by AI and discusses the “legitimate concerns” the technology generates.

The EU’s strategy is organized around three concerns: the need to boost Europe’s AI capacity, ethical issues and social challenges. Unfortunately, even the first dimension quickly turns out to be about “European values” and the need to place “the human” at the center of AI — forgetting that the first word in AI is not “human” but “artificial.”

https://twitter.com/mr_scientism/status/983057591298351104
https://archive.is/m3Njh
US military: "LOL, China thinks it's going to be a major player in AI, but we've got all the top AI researchers. You guys will help us develop weapons, right?"

US AI researchers: "No."

US military: "But... maybe just a computer vision app."

US AI researchers: "NO."

https://www.theverge.com/2018/4/4/17196818/ai-boycot-killer-robots-kaist-university-hanwha
https://www.nytimes.com/2018/04/04/technology/google-letter-ceo-pentagon-project.html
https://twitter.com/mr_scientism/status/981685030417326080
https://archive.is/3wbHm
AI-risk was a mistake.
hsu  scitariat  commentary  video  presentation  comparison  usa  china  asia  sinosphere  frontier  technology  science  ai  speedometer  innovation  google  barons  deepgoog  stories  white-paper  strategy  migration  iran  human-capital  corporation  creative  alien-character  military  human-ml  nationalism-globalism  security  investing  government  games  deterrence  defense  nuclear  arms  competition  risk  ai-control  musk  optimism  multi  news  org:mag  europe  EU  80000-hours  effective-altruism  proposal  article  realness  offense-defense  war  biotech  altruism  language  foreign-lang  philosophy  the-great-west-whale  enhancement  foreign-policy  geopolitics  anglo  jobs  career  planning  hmm  travel  charity  tech  intel  media  teaching  tutoring  russia  india  miri-cfar  pdf  automation  class  labor  polisci  society  trust  n-factor  corruption  leviathan  ethics  authoritarianism  individualism-collectivism  revolution  economics  inequality  civic  law  regulation  data  scale  pro-rata  capital  zero-positive-sum  cooperate-defect  distribution  time-series  tre 
february 2018 by nhaliday
The Space Trilogy - Wikipedia
Out of the Silent Planet:

Weston makes a long speech justifying his proposed invasion of Malacandra on "progressive" and evolutionary grounds, which Ransom attempts to translate into Malacandrian, thus laying bare the brutality and crudity of Weston's ambitions.

Oyarsa listens carefully to Weston's speech and acknowledges that the scientist is acting out of a sense of duty to his species, and not mere greed. This renders him more mercifully disposed towards the scientist, who accepts that he may die while giving Man the means to continue. However, on closer examination Oyarsa points out that Weston's loyalty is not to Man's mind – or he would equally value the intelligent alien minds already inhabiting Malacandra, instead of seeking to displace them in favour of humanity; nor to Man's body – since, as Weston is well aware of and at ease with, Man's physical form will alter over time, and indeed would have to in order to adapt to Weston's programme of space exploration and colonisation. It seems then that Weston is loyal only to "the seed" – Man's genome – which he seeks to propagate. When Oyarsa questions why this is an intelligible motivation for action, Weston's eloquence fails him and he can only articulate that if Oyarsa does not understand Man's basic loyalty to Man then he, Weston, cannot possibly instruct him.

...

Perelandra:

The rafts or floating islands are indeed Paradise, not only in the sense that they provide a pleasant and care-free life (until the arrival of Weston) but also in the sense that Ransom is for weeks and months naked in the presence of a beautiful naked woman without once lusting after her or being tempted to seduce her. This is because of the perfection in that world.

The plot thickens when Professor Weston arrives in a spaceship and lands in a part of the ocean quite close to the Fixed Land. He at first announces to Ransom that he is a reformed man, but appears to still be in search of power. Instead of the strictly materialist attitude he displayed when first meeting Ransom, he asserts he had become aware of the existence of spiritual beings and pledges allegiance to what he calls the "Life-Force." Ransom, however, disagrees with Weston's position that the spiritual is inherently good, and indeed Weston soon shows signs of demonic possession.

In this state, the possessed Weston finds the Queen and tries to tempt her into defying Maleldil's orders by spending a night on the Fixed Land. Ransom, perceiving this, believes that he must act as a counter-tempter. Well versed in the Bible and Christian theology, Ransom realises that if the pristine Queen, who has never heard of Evil, succumbs to the tempter's arguments, the Fall of Man will be re-enacted on Perelandra. He struggles through day after day of lengthy arguments illustrating various approaches to temptation, but the demonic Weston shows super-human brilliance in debate (though when "off-duty" he displays moronic, asinine behaviour and small-minded viciousness) and moreover appears never to need sleep.

With the demonic Weston on the verge of winning, the desperate Ransom hears in the night what he gradually realises is a Divine voice, commanding him to physically attack the Tempter. Ransom is reluctant, and debates with the divine (inner) voice for the entire duration of the night. A curious twist is introduced here; whereas the name "Ransom" is said to be derived from the title "Ranolf's Son", it can also refer to a reward given in exchange for a treasured life. Recalling this, and recalling that his God would (and has) sacrificed Himself in a similar situation, Ransom decides to confront the Tempter outright.

Ransom attacks his opponent bare-handed, using only physical force. Weston's body is unable to withstand this despite the Tempter's superior abilities of rhetoric, and so the Tempter flees. Ultimately Ransom chases him over the ocean, Weston fleeing and Ransom chasing on the backs of giant and friendly fish. During a fleeting truce, the "real" Weston appears to momentarily re-inhabit his body, and recount his experience of Hell, wherein the damned soul is not consigned to pain or fire, as supposed by popular eschatology, but is absorbed into the Devil, losing all independent existence.
fiction  scifi-fantasy  tip-of-tongue  literature  big-peeps  religion  christianity  theos  space  xenobio  analogy  myth  eden  deep-materialism  new-religion  sanctity-degradation  civil-liberty  exit-voice  speaking  truth  realness  embodied  fighting  old-anglo  group-selection  war  paying-rent  counter-revolution  morality  parable  competition  the-basilisk  gnosis-logos  individualism-collectivism  language  physics  science  evolution  conquest-empire  self-interest  hmm  intricacy  analytical-holistic  tradeoffs  paradox  heterodox  narrative  philosophy  expansionism  genetics  duty  us-them  interests  nietzschean  parallax  the-devil  the-self 
january 2018 by nhaliday
Fermi paradox - Wikipedia
Rare Earth hypothesis: https://en.wikipedia.org/wiki/Rare_Earth_hypothesis
Fine-tuned Universe: https://en.wikipedia.org/wiki/Fine-tuned_Universe
something to keep in mind:
Puddle theory is a term coined by Douglas Adams to satirize arguments that the universe is made for man.[54][55] As stated in Adams' book The Salmon of Doubt:[56]
Imagine a puddle waking up one morning and thinking, “This is an interesting world I find myself in, an interesting hole I find myself in, fits me rather neatly, doesn't it? In fact, it fits me staggeringly well, must have been made to have me in it!” This is such a powerful idea that as the sun rises in the sky and the air heats up and as, gradually, the puddle gets smaller and smaller, it's still frantically hanging on to the notion that everything's going to be all right, because this World was meant to have him in it, was built to have him in it; so the moment he disappears catches him rather by surprise. I think this may be something we need to be on the watch out for.
article  concept  paradox  wiki  reference  fermi  anthropic  space  xenobio  roots  speculation  ideas  risk  threat-modeling  civilization  nihil  🔬  deep-materialism  new-religion  futurism  frontier  technology  communication  simulation  intelligence  eden  war  nuclear  deterrence  identity  questions  multi  explanans  physics  theos  philosophy  religion  chemistry  bio  hmm  idk  degrees-of-freedom  lol  troll  existence 
january 2018 by nhaliday
Christianity in China | Council on Foreign Relations
projected to outpace CCP membership soon

This fascinating map shows the new religious breakdown in China: http://www.businessinsider.com/new-religious-breakdown-in-china-14

Map Showing the Distribution of Christians in China: http://www.epm.org/resources/2010/Oct/18/map-showing-distribution-christians-china/

Christianity in China: https://en.wikipedia.org/wiki/Christianity_in_China
Accurate data on Chinese Christians is hard to access. According to the most recent internal surveys there are approximately 31 million Christians in China today (2.3% of the total population).[5] On the other hand, some international Christian organizations estimate there are tens of millions more, who choose not to publicly identify as such.[6] The practice of religion continues to be tightly controlled by government authorities.[7] Chinese over the age of 18 are only permitted to join officially sanctioned Christian groups registered with the government-approved Protestant Three-Self Church and China Christian Council and the Chinese Patriotic Catholic Church.[8]

In Xi we trust - Is China cracking down on Christianity?: http://www.dw.com/en/in-xi-we-trust-is-china-cracking-down-on-christianity/a-42224752

In China, Unregistered Churches Are Driving a Religious Revolution: https://www.theatlantic.com/international/archive/2017/04/china-unregistered-churches-driving-religious-revolution/521544/

Cracks in the atheist edifice: https://www.economist.com/news/briefing/21629218-rapid-spread-christianity-forcing-official-rethink-religion-cracks

Jesus won’t save you — President Xi Jinping will, Chinese Christians told: https://www.washingtonpost.com/news/worldviews/wp/2017/11/14/jesus-wont-save-you-president-xi-jinping-will-chinese-christians-told/

http://www.sixthtone.com/news/1001611/noodles-for-the-messiah-chinas-creative-christian-hymns

https://www.reuters.com/article/us-pope-china-exclusive/exclusive-china-vatican-deal-on-bishops-ready-for-signing-source-idUSKBN1FL67U
Catholics in China are split between those in “underground” communities that recognize the pope and those belonging to a state-controlled Catholic Patriotic Association where bishops are appointed by the government in collaboration with local Church communities.

http://www.bbc.com/news/world-asia-china-42914029
The underground churches recognise only the Vatican's authority, whereas the Chinese state churches refuse to accept the authority of the Pope.

There are currently about 100 Catholic bishops in China, with some approved by Beijing, some approved by the Vatican and, informally, many now approved by both.

...

Under the agreement, the Vatican would be given a say in the appointment of future bishops in China, a Vatican source told news agency Reuters.

For Beijing, an agreement with the Vatican could allow them more control over the country's underground churches.

Globally, it would also enhance China's prestige - to have the world's rising superpower engaging with one of the world's major religions.

Symbolically, it would be the first sign of rapprochement between China and the Catholic church in more than half a century.

The Vatican is the only European state that maintains formal diplomatic relations with Taiwan. It is currently unclear if an agreement between China and the Vatican would affect this in any way.

What will this mean for the country's Catholics?

There are currently around 10 million Roman Catholics in China.

https://www.washingtonpost.com/world/asia_pacific/china-vatican-deal-on-bishops-reportedly-ready-for-signing/2018/02/01/2adfc6b2-0786-11e8-b48c-b07fea957bd5_story.html

http://www.catholicherald.co.uk/news/2018/02/06/china-is-the-best-implementer-of-catholic-social-doctrine-says-vatican-bishop/
The chancellor of the Pontifical Academy of Social Sciences praised the 'extraordinary' Communist state

“Right now, those who are best implementing the social doctrine of the Church are the Chinese,” a senior Vatican official has said.

Bishop Marcelo Sánchez Sorondo, chancellor of the Pontifical Academy of Social Sciences, praised the Communist state as “extraordinary”, saying: “You do not have shantytowns, you do not have drugs, young people do not take drugs”. Instead, there is a “positive national conscience”.

The bishop told the Spanish-language edition of Vatican Insider that in China “the economy does not dominate politics, as happens in the United States, something Americans themselves would say.”

Bishop Sánchez Sorondo said that China was implementing Pope Francis’s encyclical Laudato Si’ better than many other countries and praised it for defending the Paris Climate Accord. “In that, it is assuming a moral leadership that others have abandoned”, he added.

...

As part of the diplomacy efforts, Bishop Sánchez Sorondo visited the country. “What I found was an extraordinary China,” he said. “What people don’t realise is that the central value in China is work, work, work. There’s no other way, fundamentally it is like St Paul said: he who doesn’t work, doesn’t eat.”

China reveals plan to remove ‘foreign influence’ from Catholic Church: http://catholicherald.co.uk/news/2018/06/02/china-reveals-plan-to-remove-foreign-influence-from-catholic-church1/

China, A Fourth Rome?: http://thermidormag.com/china-a-fourth-rome/
As a Chinaman born in the United States, I find myself able to speak to both places and neither. By accidents of fortune, however – or of providence, rather – I have identified more with China even as I have lived my whole life in the West. English is my third language, after Cantonese and Mandarin, even if I use it to express my intellectually most complex thoughts; and though my best of the three in writing, trained by the use of Latin, it is the vehicle of a Chinese soul. So it is in English that for the past year I have memed an idea as unconventional as it is ambitious, unto the Europæans a stumbling-block, and unto the Chinese foolishness: #China4thRome.

This idea I do not attempt to defend rigorously, between various powers’ conflicting claims to carrying on the Roman heritage; neither do I intend to claim that Moscow, which has seen itself as a Third Rome after the original Rome and then Constantinople, is fallen. Instead, I think back to the division of the Roman empire, first under Diocletian’s Tetrarchy and then at the death of Theodosius I, the last ruler of the undivided Roman empire. In the second partition, at the death of Theodosius, Arcadius became emperor of the East, with his capital in Constantinople, and Honorius emperor of the West, with his capital in Milan and then Ravenna. That the Roman empire did not stay uniformly strong under a plurality of emperors is not the point. What is significant about the administrative division of the Roman empire among several emperors is that the idea of Rome can be one even while its administration is diverse.

By divine providence, the Christian religion – and through it, Rome – has spread even through the bourgeois imperialism of the 19th and 20th centuries. Across the world, the civil calendar of common use is that of Rome, reckoned from 1 January; few places has Roman law left wholly untouched. Nevertheless, never have we observed in the world of Roman culture an ethnogenetic pattern like that of the Chinese empire as described by the prologue of Luo Guanzhong’s Romance of the Three Kingdoms 三國演義: ‘The empire, long divided, must unite; long united, must divide. Thus it has ever been.’1 According to classical Chinese cosmology, the phrase rendered the empire is more literally all under heaven 天下, the Chinese œcumene being its ‘all under heaven’ much as a Persian proverb speaks of the old Persian capital of Isfahan: ‘Esfahān nesf-e jahān ast,’ Isfahan is half the world. As sociologist Fei Xiaotong describes it in his 1988 Tanner Lecture ‘Plurality and Unity in the Configuration of the Chinese People’,

...

And this Chinese œcumene has united and divided for centuries, even as those who live in it have recognized a fundamental unity. But Rome, unlike the Chinese empire, has lived on in multiple successor polities, sometimes several at once, without ever coming back together as one empire administered as one. Perhaps something of its character has instead uniquely suited it to being the spirit of a kind of broader world empire. As Dante says in De Monarchia, ‘As the human race, then, has an end, and this end is a means necessary to the universal end of nature, it follows that nature must have the means in view.’ He continues,

If these things are true, there is no doubt but that nature set apart in the world a place and a people for universal sovereignty; otherwise she would be deficient in herself, which is impossible. What was this place, and who this people, moreover, is sufficiently obvious in what has been said above, and in what shall be added further on. They were Rome and her citizens or people. On this subject our Poet [Vergil] has touched very subtly in his sixth book [of the Æneid], where he brings forward Anchises prophesying in these words to Aeneas, father of the Romans: ‘Verily, that others shall beat out the breathing bronze more finely, I grant you; they shall carve the living feature in the marble, plead causes with more eloquence, and trace the movements of the heavens with a rod, and name the rising stars: thine, O Roman, be the care to rule the peoples with authority; be thy arts these, to teach men the way of peace, to show mercy to the subject, and to overcome the proud.’ And the disposition of place he touches upon lightly in the fourth book, when he introduces Jupiter speaking of Aeneas to Mercury in this fashion: ‘Not such a one did his most beautiful mother promise to us, nor for this twice rescue him from Grecian arms; rather was he to be the man to govern Italy teeming with empire and tumultuous with war.’ Proof enough has been given that the Romans were by nature ordained for sovereignty. Therefore the Roman … [more]
org:ngo  trends  foreign-policy  china  asia  hmm  idk  religion  christianity  theos  anomie  meaningness  community  egalitarianism-hierarchy  protestant-catholic  demographics  time-series  government  leadership  nationalism-globalism  org:data  comparison  sinosphere  civic  the-bones  power  great-powers  thucydides  multi  maps  data  visualization  pro-rata  distribution  geography  within-group  wiki  reference  article  news  org:lite  org:biz  islam  buddhism  org:euro  authoritarianism  antidemos  leviathan  regulation  civil-liberty  chart  absolute-relative  org:mag  org:rec  org:anglo  org:foreign  music  culture  gnon  org:popup  🐸  memes(ew)  essay  rhetoric  conquest-empire  flux-stasis  spreading  paradox  analytical-holistic  tradeoffs  solzhenitsyn  spengler  nietzschean  europe  the-great-west-whale  occident  orient  literature  big-peeps  history  medieval  mediterranean  enlightenment-renaissance-restoration-reformation  expansionism  early-modern  society  civilization  world  MENA  capital  capitalism  innovation  race  alien-character  optimat 
january 2018 by nhaliday
Why do stars twinkle?
According to many astronomers and educators, twinkle (stellar scintillation) is caused by atmospheric structure that works like ordinary lenses and prisms. Pockets of variable temperature - and hence index of refraction - randomly shift and focus starlight, perceived by eye as changes in brightness. Pockets also disperse colors like prisms, explaining the flashes of color often seen in bright stars. Stars appear to twinkle more than planets because they are points of light, whereas the twinkling points on planetary disks are averaged to a uniform appearance. Below, figure 1 is a simulation in glass of the kind of turbulence structure posited in the lens-and-prism theory of stellar scintillation, shown over the Penrose tile floor to demonstrate the random lensing effects.

However appealing and ubiquitous on the internet, this popular explanation is wrong, and my aim is to debunk the myth. This research is mostly about showing that the lens-and-prism theory just doesn't work, but I also have a stellar list of references that explain the actual cause of scintillation, starting with two classic papers by C.G. Little and S. Chandrasekhar.
nibble  org:junk  space  sky  visuo  illusion  explanans  physics  electromag  trivia  cocktail  critique  contrarianism  explanation  waves  simulation  experiment  hmm  magnitude  atmosphere  roots  idk 
december 2017 by nhaliday
Religion in ancient Rome - Wikipedia
Religious persecution in the Roman Empire: https://en.wikipedia.org/wiki/Religious_persecution_in_the_Roman_Empire
The religion of the Christians and Jews was monotheistic in contrast to the polytheism of the Romans.[16] The Romans tended towards syncretism, seeing the same gods under different names in different places of the Empire. This being so, they were generally tolerant and accommodating towards new deities and the religious experiences of other peoples who formed part of their wider Empire.[17] This general tolerance was not extended to religions that were hostile to the state nor any that claimed exclusive rights to religious beliefs and practice.[17]

By its very nature the exclusive faith of the Jews and Christians set them apart from other people, but whereas the former group was in the main contained within a single national, ethnic grouping, in the Holy Land and Jewish diaspora—the non-Jewish adherents of the sect such as Proselytes and God-fearers being considered negligible—the latter was active and successful in seeking converts for the new religion and made universal claims not limited to a single geographical area.[17] Whereas the Masoretic Text, of which the earliest surviving copy dates from the 9th century AD, teaches that "the Gods of the gentiles are nothing", the corresponding passage in the Greek Septuagint, used by the early Christian Church, asserted that "all the gods of the heathens are devils."[18] The same gods whom the Romans believed had protected and blessed their city and its wider empire during the many centuries they had been worshipped were now demonized[19] by the early Christian Church.[20][21]

Persecution of Christians in the Roman Empire: https://en.wikipedia.org/wiki/Persecution_of_Christians_in_the_Roman_Empire
"The exclusive sovereignty of Christ clashed with Caesar's claims to his own exclusive sovereignty."[4]:87 The Roman empire practiced religious syncretism and did not demand loyalty to one god, but they did demand preeminent loyalty to the state, and this was expected to be demonstrated through the practices of the state religion with numerous feast and festival days throughout the year.[6]:84-90[7] The nature of Christian monotheism prevented Christians from participating in anything involving 'other gods'.[8]:60 Christians did not participate in feast days or processionals or offer sacrifices or light incense to the gods; this produced hostility.[9] They refused to offer incense to the Roman emperor, and in the minds of the people, the "emperor, when viewed as a god, was ... the embodiment of the Roman empire"[10], so Christians were seen as disloyal to both.[4]:87[11]:23 In Rome, "religion could be tolerated only as long as it contributed to the stability of the state" which would "brook no rival for the allegiance of its subjects. The state was the highest good in a union of state and religion."[4]:87 In Christian monotheism the state was not the highest good.[4]:87[8]:60

...

According to the Christian apologist Tertullian, some governors in Africa helped accused Christians secure acquittals or refused to bring them to trial.[15]:117 Overall, Roman governors were more interested in making apostates than martyrs: one proconsul of Asia, Arrius Antoninus, when confronted with a group of voluntary martyrs during one of his assize tours, sent a few to be executed and snapped at the rest, "If you want to die, you wretches, you can use ropes or precipices."[15]:137

...

Political leaders in the Roman Empire were also public cult leaders. Roman religion revolved around public ceremonies and sacrifices; personal belief was not as central an element as it is in many modern faiths. Thus while the private beliefs of Christians may have been largely immaterial to many Roman elites, this public religious practice was in their estimation critical to the social and political well-being of both the local community and the empire as a whole. Honoring tradition in the right way — pietas — was key to stability and success.[25]
history  iron-age  mediterranean  the-classics  wiki  reference  article  letters  religion  theos  institutions  culture  society  lived-experience  gender  christianity  judaism  conquest-empire  time  sequential  social-capital  multi  rot  zeitgeist  domestication  gibbon  alien-character  the-founding  janus  alignment  government  hmm  aphorism  quotes  tradition  duty  leviathan  ideology  ritual  myth  individualism-collectivism  privacy  trivia  cocktail  death  realness  fire  paganism 
november 2017 by nhaliday
The weirdest people in the world?
Abstract: Behavioral scientists routinely publish broad claims about human psychology and behavior in the world’s top journals based on samples drawn entirely from Western, Educated, Industrialized, Rich, and Democratic (WEIRD) societies. Researchers – often implicitly – assume that either there is little variation across human populations, or that these “standard subjects” are as representative of the species as any other population. Are these assumptions justified? Here, our review of the comparative database from across the behavioral sciences suggests both that there is substantial variability in experimental results across populations and that WEIRD subjects are particularly unusual compared with the rest of the species – frequent outliers. The domains reviewed include visual perception, fairness, cooperation, spatial reasoning, categorization and inferential induction, moral reasoning, reasoning styles, self-concepts and related motivations, and the heritability of IQ. The findings suggest that members of WEIRD societies, including young children, are among the least representative populations one could find for generalizing about humans. Many of these findings involve domains that are associated with fundamental aspects of psychology, motivation, and behavior – hence, there are no obvious a priori grounds for claiming that a particular behavioral phenomenon is universal based on sampling from a single subpopulation. Overall, these empirical patterns suggest that we need to be less cavalier in addressing questions of human nature on the basis of data drawn from this particularly thin, and rather unusual, slice of humanity. We close by proposing ways to structurally re-organize the behavioral sciences to best tackle these challenges.
pdf  study  microfoundations  anthropology  cultural-dynamics  sociology  psychology  social-psych  cog-psych  iq  biodet  behavioral-gen  variance-components  psychometrics  psych-architecture  visuo  spatial  morality  individualism-collectivism  n-factor  justice  egalitarianism-hierarchy  cooperate-defect  outliers  homo-hetero  evopsych  generalization  henrich  europe  the-great-west-whale  occident  organizing  🌞  universalism-particularism  applicability-prereqs  hari-seldon  extrema  comparison  GT-101  ecology  EGT  reinforcement  anglo  language  gavisti  heavy-industry  marginal  absolute-relative  reason  stylized-facts  nature  systematic-ad-hoc  analytical-holistic  science  modernity  behavioral-econ  s:*  illusion  cool  hmm  coordination  self-interest  social-norms  population  density  humanity  sapiens  farmers-and-foragers  free-riding  anglosphere  cost-benefit  china  asia  sinosphere  MENA  world  developing-world  neurons  theory-of-mind  network-structure  nordic  orient  signum  biases  usa  optimism  hypocrisy  humility  within-without  volo-avolo  domes 
november 2017 by nhaliday
Fish on Friday | West Hunter
There are parts of Europe, Switzerland and Bavaria for example, that are seriously iodine deficient. This used to be a problem. I wonder if fish on Friday ameliorated it: A three-ounce serving size of cod provides your body with 99 micrograms of iodine, or 66% of the recommended amount per day.

Thinking further, it wasn’t just Fridays: there were ~130 days a year when the Catholic Church banned flesh.

Gwern on modern iodine-deficiency: https://westhunt.wordpress.com/2017/10/28/fish-on-friday/#comment-97137
population surveys indicate lots of people are iodine-insufficient even in the US or UK where the problem should’ve been permanently solved a century ago
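A back-of-envelope check of the post's numbers (my arithmetic, not the post's): the serving-size claim implies a daily recommendation of ~150 µg, and one can ask what share of the annual requirement ~130 fish days could cover.

```python
# Back-of-envelope sketch; only the 99 µg, 66%, and ~130-day figures
# come from the post, the rest is derived arithmetic.
cod_iodine_ug = 99        # per 3 oz serving of cod, per the post
fraction_of_rda = 0.66    # "66% of the recommended amount per day"
implied_rda = cod_iodine_ug / fraction_of_rda   # ~150 µg/day

# If each of ~130 flesh-banned days supplied one such serving, what share
# of the annual iodine requirement would fish cover?
fast_days = 130
coverage = (fast_days * cod_iodine_ug) / (365 * implied_rda)
print(round(implied_rda), round(coverage, 3))
```

On these assumptions fish days would supply roughly a quarter of annual iodine needs, plausibly enough to matter in a deficient region.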
west-hunter  scitariat  discussion  ideas  speculation  sapiens  europe  the-great-west-whale  history  medieval  germanic  religion  christianity  protestant-catholic  institutions  food  diet  nutrition  metabolic  iq  neuro  unintended-consequences  multi  gwern  poast  hmm  planning  parenting  developmental  public-health  gotchas  biodet  deep-materialism  health  embodied-street-fighting  ritual  roots  explanans 
october 2017 by nhaliday
An investigation of the unexpectedly high fertility of secular, native-born Jews in Israel: Population Studies: Vol 70, No 2
Secular, native-born Jews in Israel enjoy the socio-economic status of many affluent populations living in other democratic countries, but have above-replacement period and cohort fertility. This study revealed a constellation of interrelated factors which together characterize the socio-economic, cultural, and political environment of this fertility behaviour and set it apart from that of other advanced societies. The factors are: a combination of state and family support for childbearing; a dual emphasis on the social importance of women's employment and fertility; policies that support working mothers within a conservative welfare regime; a family system in which parents provide significant financial and caregiving aid to their adult children; relatively egalitarian gender-role attitudes and household behaviour; the continuing importance of familist ideology and of marriage as a social institution; the role of Jewish nationalism and collective behaviour in a religious society characterized by ethno-national conflict; and a discourse which defines women as the biological reproducers of the nation.

https://twitter.com/tcjfs/status/904137844834398209
https://archive.is/2RVjo
Fertility trends in Israel and Palestinian territories

https://twitter.com/tcjfs/status/923612344009351168
https://archive.is/FJ7Fn
https://archive.is/8vq6O
https://archive.is/qxpmX
my impression is the evidence actually favors propaganda effects over tax credits and shit. but I need to gather it all together at some pt
study  sociology  polisci  biophysical-econ  demographics  fertility  demographic-transition  intervention  wonkish  hmm  track-record  MENA  israel  judaism  🎩  gender  egalitarianism-hierarchy  tribalism  us-them  ethnocentrism  religion  labor  pdf  piracy  the-bones  microfoundations  life-history  dignity  nationalism-globalism  multi  twitter  social  commentary  gnon  unaffiliated  right-wing  backup  propaganda  status  fashun  hari-seldon 
october 2017 by nhaliday
[1709.01149] Biotechnology and the lifetime of technical civilizations
The number of people able to end Earth's technical civilization has heretofore been small. Emerging dual-use technologies, such as biotechnology, may give similar power to thousands or millions of individuals. To quantitatively investigate the ramifications of such a marked shift on the survival of both terrestrial and extraterrestrial technical civilizations, this paper presents a two-parameter model for civilizational lifespans, i.e. the quantity L in Drake's equation for the number of communicating extraterrestrial civilizations. One parameter characterizes the population lethality of a civilization's biotechnology and the other characterizes the civilization's psychosociology. L is demonstrated to be less than the inverse of the product of these two parameters. Using empiric data from Pubmed to inform the biotechnology parameter, the model predicts human civilization's median survival time as decades to centuries, even with optimistic psychosociological parameter values, thereby positioning biotechnology as a proximate threat to human civilization. For an ensemble of civilizations having some median calculated survival time, the model predicts that, after 80 times that duration, only one in 1024 civilizations will survive -- a tempo and degree of winnowing compatible with Hanson's "Great Filter." Thus, assuming that civilizations universally develop advanced biotechnology, before they become vigorous interstellar colonizers, the model provides a resolution to the Fermi paradox.
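A minimal sketch of the bound stated in the abstract, with invented parameter values (the paper's actual calibration of the lethality parameter uses PubMed data, which is not reproduced here):

```python
# Two-parameter bound from the abstract:
# L < 1 / (biotech lethality parameter * psychosociology parameter).
def lifespan_upper_bound(lethality, psychosociology):
    return 1.0 / (lethality * psychosociology)

# Invented illustrative values, NOT the paper's calibration:
bound = lifespan_upper_bound(1e-3, 1e-2)

# The abstract's winnowing claim: after 80x the median survival time,
# only one civilization in 1024 remains.
surviving_fraction = 1 / 1024
print(bound, surviving_fraction)
```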
preprint  article  gedanken  threat-modeling  risk  biotech  anthropic  fermi  ratty  hanson  models  xenobio  space  civilization  frontier  hmm  speedometer  society  psychology  social-psych  anthropology  cultural-dynamics  disease  parasites-microbiome  maxim-gun  prepping  science-anxiety  technology  magnitude  scale  data  prediction  speculation  ideas  🌞  org:mat  study  offense-defense  arms  unintended-consequences  spreading  explanans  sociality  cybernetics 
october 2017 by nhaliday
Does Learning to Read Improve Intelligence? A Longitudinal Multivariate Analysis in Identical Twins From Age 7 to 16
Stuart Ritchie, Bates, Plomin

SEM: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4354297/figure/fig03/

The variance explained by each path in the diagrams included here can be calculated by squaring its path weight. To take one example, reading differences at age 12 in the model shown in Figure 3 explain 7% of intelligence differences at age 16 (.26²). However, since our measures are of differences, they are likely to include substantial amounts of noise: Measurement error may produce spurious differences. To remove this error variance, we can take an estimate of the reliability of the measures (generally high, since our measures are normed, standardized tests), which indicates the variance expected purely by the reliability of the measure, and subtract it from the observed variance between twins in our sample. Correcting for reliability in this way, the effect size estimates are somewhat larger; to take the above example, the reliability-corrected effect size of age 12 reading differences on age 16 intelligence differences is around 13% of the “signal” variance. It should be noted that the age 12 reading differences themselves are influenced by many previous paths from both reading and intelligence, as illustrated in Figure 3.
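The squaring-and-reliability arithmetic in that passage, sketched numerically; the 0.26 path weight and the ~13% corrected figure come from the text, while the 0.52 signal share is back-solved for illustration and is not taken from the paper.

```python
# Variance explained = path weight squared.
path_weight = 0.26
raw_effect = path_weight ** 2          # ~0.068, i.e. the ~7% figure

# Correcting for measurement error: divide by the share of variance that
# is "signal" rather than noise. 0.52 is hypothetical, back-solved to
# reproduce the paper's ~13% reliability-corrected figure.
signal_share = 0.52
corrected = raw_effect / signal_share  # ~0.13
print(round(raw_effect, 3), round(corrected, 2))
```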

...

The present study provided compelling evidence that improvements in reading ability, themselves caused purely by the nonshared environment, may result in improvements in both verbal and nonverbal cognitive ability, and may thus be a factor increasing cognitive diversity within families (Plomin, 2011). These associations are present at least as early as age 7, and are not—to the extent we were able to test this possibility—driven by differences in reading exposure. Since reading is a potentially remediable ability, these findings have implications for reading instruction: Early remediation of reading problems might not only aid in the growth of literacy, but may also improve more general cognitive abilities that are of critical importance across the life span.

Does Reading Cause Later Intelligence? Accounting for Stability in Models of Change: http://sci-hub.tw/10.1111/cdev.12669
Results from a state–trait model suggest that reported effects of reading ability on later intelligence may be artifacts of previously uncontrolled factors, both environmental in origin and stable during this developmental period, influencing both constructs throughout development.
study  albion  scitariat  spearhead  psychology  cog-psych  psychometrics  iq  intelligence  eden  language  psych-architecture  longitudinal  twin-study  developmental  environmental-effects  studying  🌞  retrofit  signal-noise  intervention  causation  graphs  graphical-models  flexibility  britain  neuro-nitgrit  effect-size  variance-components  measurement  multi  sequential  time  composition-decomposition  biodet  behavioral-gen  direct-indirect  systematic-ad-hoc  debate  hmm  pdf  piracy  flux-stasis 
september 2017 by nhaliday
GOP tax plan would provide major gains for richest 1%, uneven benefits for the middle class, report says - The Washington Post
https://twitter.com/ianbremmer/status/913863513038311426
https://archive.is/PYRx9
Trump tweets: For his voters.
Tax plan: Something else entirely.
https://twitter.com/tcjfs/status/913864779256692737
https://archive.is/5bzQz
This is appallingly stupid if accurate

https://www.nytimes.com/interactive/2017/11/28/upshot/what-the-tax-bill-would-look-like-for-25000-middle-class-families.html
https://www.nytimes.com/interactive/2017/11/30/us/politics/tax-cuts-increases-for-your-income.html

Treasury Removes Paper at Odds With Mnuchin’s Take on Corporate-Tax Cut’s Winners: https://www.wsj.com/articles/treasury-removes-paper-at-odds-with-mnuchins-take-on-corporate-tax-cuts-winners-1506638463

Tax changes for graduate students under the Tax Cuts and Jobs Act: https://bcide.gitlab.io/post/gop-tax-plan/
H.R.1 – 115th Congress (Tax Cuts and Jobs Act) proposes changes to the US Tax Code that threaten to destroy the finances of STEM graduate students nationwide. The offending provision, 1204(a)(3), strikes section 117(d) of the US Tax Code. This means that under the proposal, tuition waivers are considered taxable income.

For graduate students, this means an increase of thousands of dollars in owed federal taxes. Below I show a calculation for my own situation. The short of it is this: My federal taxes increase from ~7.5% of my income to ~31%. I will owe about $6300 more in federal taxes under this legislation. Like many other STEM students, my choices would be limited to taking on significant debt or quitting my program entirely.
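The mechanism can be sketched with invented numbers (bracket thresholds are approximate 2017 single-filer values; the stipend, waiver, and deduction amounts are hypothetical, so the resulting figures differ from the author's own ~7.5% to ~31% calculation):

```python
# Hypothetical illustration of taxing a tuition waiver as income.
# (upper bound of bracket, marginal rate), approximate 2017 single filer:
BRACKETS = [(9_325, 0.10), (37_950, 0.15), (91_900, 0.25)]

def federal_tax(taxable):
    tax, lower = 0.0, 0
    for upper, rate in BRACKETS:
        if taxable <= lower:
            break
        tax += (min(taxable, upper) - lower) * rate
        lower = upper
    return tax

stipend, waiver, deduction = 30_000, 40_000, 10_400   # all hypothetical

before = federal_tax(stipend - deduction)           # waiver untaxed
after = federal_tax(stipend + waiver - deduction)   # waiver counted as income
print(before, after, after - before)
```

Even with these made-up numbers, counting the waiver as income pushes the student into a higher bracket and multiplies the tax bill several times over, which is the effect the post describes.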

The Republican War on College: https://www.theatlantic.com/business/archive/2017/11/republican-college/546308/

Trump's plan to tax colleges will harm higher education — but it's still a good idea: http://www.businessinsider.com/trump-tax-plan-taxing-colleges-is-a-good-idea-2017-11
- James Miller

The Republican Tax Plan Is a Disaster for Families With Children: http://www.motherjones.com/kevin-drum/2017/11/the-republican-tax-plan-is-a-disaster-for-families-with-children/
- Kevin Drum

The gains from cutting corporate tax rates: http://marginalrevolution.com/marginalrevolution/2017/11/corporate-taxes-2.html
I’ve been reading in this area on and off since the 1980s, and I really don’t think these are phony results.

Entrepreneurship and State Taxation: https://www.federalreserve.gov/econres/feds/files/2018003pap.pdf
We find that new firm employment is negatively—and disproportionately—affected by corporate tax rates. We find little evidence of an effect of personal and sales taxes on entrepreneurial outcomes.

https://www.nytimes.com/2017/11/26/us/politics/johnson-amendment-churches-taxes-politics.html
nobody in the comments section seems to have even considered the comparison with universities

The GOP Tax Bills Are Infrastructure Bills Too. Here’s Why.: http://www.governing.com/topics/transportation-infrastructure/gov-republican-tax-bills-impact-infrastructure.html
news  org:rec  trump  current-events  wonkish  policy  taxes  data  analysis  visualization  money  monetary-fiscal  compensation  class  hmm  :/  coalitions  multi  twitter  social  commentary  gnon  unaffiliated  right-wing  backup  class-warfare  redistribution  elite  vampire-squid  crooked  journos-pundits  tactics  strategy  politics  increase-decrease  pro-rata  labor  capital  distribution  corporation  corruption  anomie  counter-revolution  higher-ed  academia  nascent-state  mathtariat  phd  grad-school  org:mag  left-wing  econotariat  marginal-rev  links  study  summary  economics  econometrics  endogenous-exogenous  natural-experiment  longitudinal  regularizer  religion  christianity  org:gov  infrastructure  transportation  cracker-econ  org:lite  org:biz  crosstab  dynamic  let-me-see  cost-benefit  entrepreneurialism  branches  geography  usa  within-group 
september 2017 by nhaliday
New Theory Cracks Open the Black Box of Deep Learning | Quanta Magazine
A new idea called the “information bottleneck” is helping to explain the puzzling success of today’s artificial-intelligence algorithms — and might also explain how human brains learn.

sounds like he's just talking about autoencoders?
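fwiw, the "force the signal through a narrow layer" intuition is easy to sketch with a bare-bones linear autoencoder (toy data, dimensions, and training loop all mine; the actual information-bottleneck theory is about mutual information between layers and labels, not just reconstruction error):

```python
import numpy as np

# Minimal linear autoencoder: compress 8-D data through a 2-D bottleneck.
rng = np.random.default_rng(0)

# data with low-dimensional structure: 8-D points driven by 2 latent factors
latent = rng.normal(size=(500, 2))
mix = rng.normal(size=(2, 8))
X = latent @ mix + 0.05 * rng.normal(size=(500, 8))

W_enc = rng.normal(scale=0.1, size=(8, 2))  # encoder weights
W_dec = rng.normal(scale=0.1, size=(2, 8))  # decoder weights

def loss(X, W_enc, W_dec):
    Z = X @ W_enc       # bottleneck representation
    X_hat = Z @ W_dec   # reconstruction
    return np.mean((X - X_hat) ** 2)

lr = 0.01
initial = loss(X, W_enc, W_dec)
for _ in range(200):
    Z = X @ W_enc
    X_hat = Z @ W_dec
    err = X_hat - X                             # (500, 8)
    grad_dec = Z.T @ err / len(X)               # gradient w.r.t. decoder
    grad_enc = X.T @ (err @ W_dec.T) / len(X)   # gradient w.r.t. encoder
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

final = loss(X, W_enc, W_dec)
print(initial, final)  # reconstruction error drops as the 2-D code is learned
```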
news  org:mag  org:sci  popsci  announcement  research  deep-learning  machine-learning  acm  information-theory  bits  neuro  model-class  big-surf  frontier  nibble  hmm  signal-noise  deepgoog  expert  ideas  wild-ideas  summary  talks  video  israel  roots  physics  interdisciplinary  ai  intelligence  shannon  giants  arrows  preimage  lifts-projections  composition-decomposition  characterization  markov  gradient-descent  papers  liner-notes  experiment  hi-order-bits  generalization  expert-experience  explanans  org:inst  speedometer 
september 2017 by nhaliday
WLGR: The Julian marriage laws (nos. 120-123, etc.)
In 18 B.C., the Emperor Augustus turned his attention to social problems at Rome. Extravagance and adultery were widespread. Among the upper classes, marriage was increasingly infrequent, and many couples who did marry failed to produce offspring. Augustus, who hoped thereby to elevate both the morals and the numbers of the upper classes in Rome, and to increase the population of native Italians in Italy, enacted laws to encourage marriage and having children (lex Julia de maritandis ordinibus), including provisions establishing adultery as a crime.

Jus trium liberorum: https://en.wikipedia.org/wiki/Jus_trium_liberorum
The ius trium liberorum, meaning “the right of three children” in Latin,[1] was a privilege awarded to Roman citizens who had borne at least three children or freedmen who had borne at least four children.[2] It was a direct result of the Lex Iulia and the Lex Papia Poppaea, bodies of legislation introduced by Augustus in 18 BC and 9 AD, respectively.[3] These bodies of legislation were conceived to grow the dwindling population of the Roman upper classes. The intent of the jus trium liberorum has caused scholars to interpret it as eugenic legislation.[4] Men who had received the jus trium liberorum were excused from munera. Women with jus trium liberorum were no longer submitted to tutela mulierum and could receive inheritances otherwise bequeathed to their children.[5] The public reaction to the jus trium liberorum was largely to find loopholes, however. The prospect of having a large family was still not appealing.[6] A person who caught a citizen in violation of this law was entitled to a portion of the inheritance involved, creating a lucrative business for professional spies.[7] The spies became so pervasive that the reward was reduced to a quarter of its previous size.[8] As time went on, the ius trium liberorum was granted by consuls as a reward for general good deeds, holding important professions, or as personal favors, not just prolific propagation.[9] Eventually the ius trium liberorum was repealed in 534 AD by Justinian.[10]

The Purpose of the Lex Iulia et Papia Poppaea: https://sci-hub.tw/https://www.jstor.org/stable/3292043

Roman Monogamy: http://laurabetzig.org/pdf/RomanMonogamy.pdf
- Laura Betzig

Mating in Rome was polygynous; marriage was monogamous. In the years 18BC and AD 9 the first Roman emperor, Augustus, backed the lex Julia and the lex Papia Poppaea, his “moral” legislation. It rewarded members of the senatorial aristocracy who married and had children; and it punished celibacy and childlessness, which were common. To many historians, that suggests Romans were reluctant to reproduce. To me, it suggests they kept the number of their legitimate children small to keep the number of their illegitimate children large. Marriage in Rome shares these features with marriage in other empires with highly polygynous mating: inheritances were raised by inbreeding; relatedness to heirs was raised by marrying virgins, praising and enforcing chastity in married women, and discouraging widow remarriage; heirs were limited—and inheritances concentrated—by monogamous marriage, patriliny, and primogeniture; and back-up heirs were got by divorce and remarriage, concubinage, and adoption. The “moral” legislation interfered with each of these. Among other things, it diverted inheritances by making widows remarry; it lowered relatedness to heirs by making adultery subject to public, rather than private, sanctions; and it dispersed estates by making younger sons and daughters take legitimate spouses and make legitimate heirs. Augustus' “moral” legislation, like canon law in Europe later on, was not, as it first appears, an act of reproductive altruism. It was, in fact, a form of reproductive competition.

Did moral decay destroy the ancient world?: http://www.roger-pearse.com/weblog/2014/01/17/did-moral-decay-destroy-the-ancient-world/

hmmm...:
https://www.thenation.com/article/im-a-marxist-feminist-slut-how-do-i-find-an-open-relationship/
https://www.indy100.com/article/worst-decision-you-can-ever-make-have-a-child-science-research-parent-sleep-sex-money-video-7960906

https://twitter.com/tcjfs/status/913087174224044033
https://archive.is/LRpzH
Cato the Elder speaks on proposed repeal of the Oppian Law (https://en.wikipedia.org/wiki/Lex_Oppia) - from Livy's History of Rome, Book 34

"What pretext in the least degree respectable is put forward for this female insurrection? 'That we may shine,' they say."

The Crisis of the Third Century as Seen by Contemporaries: https://grbs.library.duke.edu/article/viewFile/9021/4625
"COMPLAINTS OF EVIL TIMES are to be found in all centuries which have left a literature behind them. But in the Roman Empire the decline is acknowledged in a manner which leaves no room for doubt."

Morals, Politics, and the Fall of the Roman Republic: https://sci-hub.tw/https://www.jstor.org/stable/642930

https://en.wikipedia.org/wiki/Roman_historiography#Livy
The purpose of writing Ab Urbe Condita was twofold: the first was to memorialize history and the second was to challenge his generation to rise to that same level. He was preoccupied with morality, using history as a moral essay. He connects a nation’s success with its high level of morality, and conversely a nation’s failure with its moral decline. Livy believed that there had been a moral decline in Rome, and he lacked the confidence that Augustus could reverse it. Though he shared Augustus’ ideals, he was not a “spokesman for the regime”. He believed that Augustus was necessary, but only as a short term measure.

Livy and Roman Historiography: http://www.wheelockslatin.com/answerkeys/handouts/ch7_Livy_and_Roman_Historiography.pdf

Imperial Expansion and Moral Decline in the Roman Republic: https://sci-hub.tw/https://www.jstor.org/stable/4435293
org:junk  history  iron-age  mediterranean  the-classics  canon  gibbon  life-history  dysgenics  class  hmm  law  antidemos  authoritarianism  government  policy  rot  zeitgeist  legacy  values  demographics  demographic-transition  fertility  population  gender  crime  criminal-justice  leviathan  morality  counter-revolution  nascent-state  big-peeps  aristos  statesmen  death  religion  christianity  theos  multi  letters  reflection  duty  altruism  honor  temperance  civilization  sex  sexuality  the-bones  twitter  social  commentary  gnon  unaffiliated  right-wing  quotes  pic  wiki  isteveish  aphorism  study  essay  reference  people  anomie  intervention  studying  ideas  sulla  pdf  piracy  conquest-empire  hari-seldon  anthropology  cultural-dynamics  interests  self-interest  incentives  class-warfare  social-norms  number 
september 2017 by nhaliday
Social Animal House: The Economic and Academic Consequences of Fraternity Membership by Jack Mara, Lewis Davis, Stephen Schmidt :: SSRN
We exploit changes in the residential and social environment on campus to identify the economic and academic consequences of fraternity membership at a small Northeastern college. Our estimates suggest that these consequences are large, with fraternity membership lowering student GPA by approximately 0.25 points on the traditional four-point scale, but raising future income by approximately 36%, for those students whose decision about membership is affected by changes in the environment. These results suggest that fraternity membership causally produces large gains in social capital, which more than outweigh its negative effects on human capital for potential members. Alcohol-related behavior does not explain much of the effects of fraternity membership on either the human capital or social capital effects. These findings suggest that college administrators face significant trade-offs when crafting policies related to Greek life on campus.

- III. Methodology has details
- it's an instrumental variable method paper

Table 5: Fraternity Membership and Grades
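since the notes flag it as an IV paper: a generic two-stage least squares sketch on simulated data (instrument, coefficients, and data here are all made up for illustration; the paper's actual instrument is the change in the campus residential/social environment):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000

z = rng.normal(size=n)  # instrument: shifts treatment, no direct effect on outcome
u = rng.normal(size=n)  # unobserved confounder
d = 0.8 * z + u + rng.normal(size=n)         # endogenous treatment (e.g. membership)
y = 2.0 * d + 3.0 * u + rng.normal(size=n)   # outcome; true causal effect of d is 2.0

def ols_slope(x, y):
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

naive = ols_slope(d, y)  # biased upward: u drives both d and y

# Stage 1: project treatment on the instrument.
# Stage 2: regress outcome on the fitted values.
b1 = ols_slope(z, d)
d_hat = d.mean() + b1 * (z - z.mean())
iv = ols_slope(d_hat, y)  # recovers (approximately) the true effect 2.0

print(round(naive, 2), round(iv, 2))
```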

Do High School Sports Build or Reveal Character?: http://ftp.iza.org/dp11110.pdf
We examine the extent to which participation in high school athletics has beneficial effects on future education, labor market, and health outcomes. Due to the absence of plausible instruments in observational data, we use recently developed methods that relate selection on observables with selection on unobservables to estimate bounds on the causal effect of athletics participation. We analyze these effects in the US separately for men and women using three different nationally representative longitudinal data sets that each link high school athletics participation with later-life outcomes. We do not find consistent evidence of individual benefits reported in many previous studies – once we have accounted for selection, high school athletes are no more likely to attend college, earn higher wages, or participate in the labor force. However, we do find that men (but not women) who participated in high school athletics are more likely to exercise regularly as adults. Nevertheless, athletes are no less likely to be obese.

Online Social Network Effects in Labor Markets: Evidence From Facebook's Entry into College Campuses: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3381938
My estimates imply that access to Facebook for 4 years of college causes a 2.7 percentile increase in a cohort's average earnings, relative to the earnings of other individuals born in the same year.

https://marginalrevolution.com/marginalrevolution/2019/05/might-facebook-boost-wages.html
What Clockwork_Prior said. I was a college freshman when facebook first made its appearance, so I know that facebook's entry/exit cannot be treated as quasi-random with respect to earnings. Facebook began at Harvard, then expanded to other Ivy League schools + places like Stanford/MIT/CMU, before expanding into a larger set of universities.

Presuming the author is using a difference-in-differences research design, the estimates would be biased, as they would essentially be capturing the average earnings difference between elite schools and non-elite schools. Even if the sample is restricted to the period when only elite schools had access, the problem still exists, because facebook originated at Harvard and this becomes a comparison of Harvard earnings vs. other schools.
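the design being objected to is the canonical 2x2 difference-in-differences; with toy numbers:

```python
# (treated post - treated pre) - (control post - control pre);
# numbers are illustrative, not from the paper.
treat_pre, treat_post = 50.0, 58.0  # mean earnings percentile, early-access cohorts
ctrl_pre, ctrl_post = 48.0, 53.0    # cohorts without access

did = (treat_post - treat_pre) - (ctrl_post - ctrl_pre)
print(did)  # 3.0
```

the subtraction only identifies a causal effect under parallel trends, which is exactly what fails if elite schools (the early adopters) were on a different earnings trend to begin with.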
study  economics  econometrics  natural-experiment  endo-exo  policy  wonkish  higher-ed  long-term  planning  social-capital  human-capital  labor  gender  cohesion  sociology  social-structure  trivia  cocktail  🎩  effect-size  intervention  compensation  money  education  ethanol  usa  northeast  causation  counterfactual  methodology  demographics  age-generation  race  curvature  regression  convexity-curvature  nonlinearity  cost-benefit  endogenous-exogenous  branches  econotariat  marginal-rev  commentary  summary  facebook  internet  social  media  tech  network-structure  recruiting  career  hmm  idk  strategy  elite  time  confounding  pdf  broad-econ  microfoundations  sports  null-result  selection  health  fitness  fitsci  org:ngo  white-paper  input-output  obesity 
september 2017 by nhaliday
The GRE is useful; range restriction is a thing – Gene Expression
As an empirical matter I do think that it is likely many universities will follow the University of Michigan in dropping the GRE as a requirement. There will be some resistance within academia, but there is a lot of reluctance to vocally defend the GRE in public, especially from younger faculty who fear the social and professional repercussions (every time a discussion pops up about the GRE I get a lot of Twitter DMs from people who believe in the utility of the GRE but don’t want to be seen defending it in public because they fear becoming the target of accusations of an -ism). My prediction is that after the GRE is gone people will simply rely on other proxies.
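the range-restriction point in the title is easy to simulate (the population correlation and admission cutoff below are illustrative, not estimates for the actual GRE):

```python
import numpy as np

# A predictor correlates with an outcome in the full applicant pool, but
# among the admitted (top of the score distribution) the observed
# correlation shrinks -- so "the GRE doesn't predict much among our grad
# students" is weak evidence against its validity.
rng = np.random.default_rng(7)
n = 100_000
score = rng.normal(size=n)
outcome = 0.5 * score + np.sqrt(1 - 0.25) * rng.normal(size=n)  # pop corr = 0.5

r_full = np.corrcoef(score, outcome)[0, 1]

admitted = score > np.quantile(score, 0.8)  # only the top 20% get in
r_restricted = np.corrcoef(score[admitted], outcome[admitted])[0, 1]

print(round(r_full, 2), round(r_restricted, 2))  # restricted corr is much smaller
```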
gnxp  scitariat  commentary  trends  academia  grad-school  phd  psychometrics  progression  prediction  hmm  egalitarianism-hierarchy  general-survey 
september 2017 by nhaliday
Overcoming Bias : Why Ethnicity, Class, & Ideology? 
Individual humans can be described via many individual features that are useful in predicting what they do. Such features include gender, age, personality, intelligence, ethnicity, income, education, profession, height, geographic location, and so on. Different features are more useful for predicting different kinds of behavior.

One kind of human behavior is coalition politics; we join together into coalitions within political and other larger institutions. People in the same coalition tend to have features in common, though which exact features varies by time and place. But while in principle the features that describe coalitions could vary arbitrarily by time and place, we in actual fact see more consistent patterns.

...

You might be right about small scale coalitions, such as cliques, gangs, and clubs. And you might even be right about larger scale political coalitions in the ancient world. But you’d be wrong about our larger scale political coalitions today. While there are often weak correlations with such features, larger scale political coalitions are not mainly based on the main individual features of gender, age, etc. Instead, they are more often based on ethnicity, class, and “political ideology” preferences. While ideology is famously difficult to characterize, and it does vary by time and place, it is also somewhat consistent across time and space.
ratty  hanson  speculation  ideas  questions  hmm  idk  politics  polisci  ideology  coalitions  anthropology  sociology  coordination  tribalism  properties  things  phalanges  roots  demographics  race  class  curiosity  stylized-facts  impetus  organizing  interests  hari-seldon  sociality  cybernetics 
august 2017 by nhaliday
THE GROWING IMPORTANCE OF SOCIAL SKILLS IN THE LABOR MARKET*
key fact: cognitive ability is not growing in importance, but non-cognitive ability is

The labor market increasingly rewards social skills. Between 1980 and 2012, jobs requiring high levels of social interaction grew by nearly 12 percentage points as a share of the U.S. labor force. Math-intensive but less social jobs—including many STEM occupations—shrank by 3.3 percentage points over the same period. Employment and wage growth was particularly strong for jobs requiring high levels of both math skill and social skill. To understand these patterns, I develop a model of team production where workers “trade tasks” to exploit their comparative advantage. In the model, social skills reduce coordination costs, allowing workers to specialize and work together more efficiently. The model generates predictions about sorting and the relative returns to skill across occupations, which I investigate using data from the NLSY79 and the NLSY97. Using a comparable set of skill measures and covariates across survey waves, I find that the labor market return to social skills was much greater in the 2000s than in the mid 1980s and 1990s. JEL Codes: I20, I24, J01, J23, J24, J31
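a toy version of the paper's "trading tasks" mechanism, with functional forms and numbers of my own invention rather than the paper's actual model:

```python
# Two workers, two tasks. Each specializes where she has comparative
# advantage, paying a coordination cost that shrinks with social skill.

def output_autarky(a, b):
    # each worker splits time equally across both tasks
    return 0.5 * (a[0] + a[1]) + 0.5 * (b[0] + b[1])

def output_trade(a, b, social):
    # each specializes in her better task; coordination cost is
    # lower when social skill (in [0, 1]) is higher
    coordination_cost = 1.0 - social
    return max(a) + max(b) - coordination_cost

# worker A is strong at task 0; worker B is strong at task 1
A, B = (4.0, 1.0), (1.0, 4.0)

baseline = output_autarky(A, B)
low = output_trade(A, B, social=0.2)
high = output_trade(A, B, social=0.9)
print(baseline, low, high)  # trading tasks pays off more when social skill is high
```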

The Increasing Complementarity between Cognitive and Social Skills: http://econ.ucsb.edu/~weinberg/MathSocialWeinberger.pdf

The Changing Roles of Education and Ability in Wage Determination: http://business.uow.edu.au/content/groups/public/@web/@commerce/@research/documents/doc/uow130116.pdf

Intelligence and socioeconomic success: A meta-analytic review of longitudinal research: http://www.emilkirkegaard.dk/en/wp-content/uploads/Intelligence-and-socioeconomic-success-A-meta-analytic-review-of-longitudinal-research.pdf
Moderator analyses showed that the relationship between intelligence and success is dependent on the age of the sample but there is little evidence of any historical trend in the relationship.

https://twitter.com/khazar_milkers/status/898996206973603840
https://archive.is/7gLXv
that feelio when america has crossed an inflection point and EQ is obviously more important for success in todays society than IQ
I think this is how to understand a lot of "corporate commitment to diversity" stuff. Not the only reason ofc, but the reason it's so impregnable
compare: https://pinboard.in/u:nhaliday/b:e9ac3d38e7a1
and: https://pinboard.in/u:nhaliday/b:a38f5756170d

g-reliant skills seem most susceptible to automation: https://fredrikdeboer.com/2017/06/14/g-reliant-skills-seem-most-susceptible-to-automation/

THE ERROR TERM: https://spottedtoad.wordpress.com/2018/02/19/the-error-term/
Imagine an objective function- something you want to maximize or minimize- with both a deterministic and a random component.

...

Part of y is rules-based and rational, part is random and outside rational control. Obviously, the ascent of civilization has, to the extent it has taken place, been based on focusing energies on those parts of the world that are responsive to rational interpretation and control.

But an interesting thing happens once automated processes are able to take over the mapping of patterns onto rules. The portion of the world that is responsive to algorithmic interpretation is also the rational, rules-based portion, almost tautologically. But in terms of our actual objective functions- the real portions of the world that we are trying to affect or influence- subtracting out the portion susceptible to algorithms does not eliminate the variation or make it unimportant. It simply makes it much more purely random rather than only partially so.

The interesting thing, to me, is that economic returns accumulate to the random portion of variation just as to the deterministic portion. In fact, if everybody has access to the same algorithms, the returns may well be largely to the random portion. The efficient market hypothesis in action, more or less.

...

But more generally, as more and more of the society comes under algorithmic control, as various forms of automated intelligence become ubiquitous, the remaining portion, and the portion for which individual workers are rewarded, might well become more irrational, more random, less satisfying, less intelligent.
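spottedtoad's setup is just y = f(x) + noise; a quick simulation of the claim that once the algorithm subtracts out f(x), what's left is variance without predictability (functional form and noise scale assumed):

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.normal(size=10_000)
noise = rng.normal(size=10_000)
y = 2.0 * x + noise        # deterministic part f(x) = 2x, plus randomness

resid = y - 2.0 * x        # what remains after the algorithm captures f(x)

var_total = y.var()
var_resid = resid.var()                # variance doesn't vanish...
corr = np.corrcoef(x, resid)[0, 1]     # ...but it's unpredictable from x

print(var_total, var_resid, corr)
```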

Golden age for team players: https://news.harvard.edu/gazette/story/2017/10/social-skills-increasingly-valuable-to-employers-harvard-economist-finds/
Strong social skills increasingly valuable to employers, study finds

Number of available jobs by skill set (over time)

Changes in hourly wages by skill set (over time)

https://twitter.com/GarettJones/status/947904725294260224
https://archive.is/EEQA9
A resolution for the new year: Remember that intelligence is a predictor of social intelligence!
pdf  study  economics  econometrics  trends  labor  intelligence  iq  personality  psych-architecture  compensation  human-capital  🎩  data  regularizer  hmm  career  planning  long-term  stylized-facts  management  polarization  stagnation  inequality  leadership  longitudinal  chart  zeitgeist  s-factor  history  mostly-modern  usa  correlation  gnon  🐸  twitter  social  memes(ew)  pic  discussion  diversity  managerial-state  unaffiliated  left-wing  automation  gender  backup  westminster  multi  working-stiff  news  org:edu  time-series  :/  coordination  collaboration  money  medicine  law  teaching  education  tech  dirty-hands  engineering  supply-demand  ratty  large-factor  signal-noise  order-disorder  random  technocracy  branches  unintended-consequences  ai  prediction  speculation  theory-of-mind 
august 2017 by nhaliday
The Gulf Stream Myth
1. Fifty percent of the winter temperature difference across the North Atlantic is caused by the eastward atmospheric transport of heat released by the ocean that was absorbed and stored in the summer.
2. Fifty percent is caused by the stationary waves of the atmospheric flow.
3. The ocean heat transport contributes a small warming across the basin.

Is the Gulf Stream responsible for Europe’s mild winters?: http://ocp.ldeo.columbia.edu/res/div/ocp/gs/pubs/Seager_etal_QJ_2002.pdf
org:junk  environment  temperature  climate-change  usa  europe  comparison  hmm  regularizer  trivia  cocktail  error  oceans  chart  atmosphere  multi  pdf  study  earth  geography 
august 2017 by nhaliday
Is the U.S. Aggregate Production Function Cobb-Douglas? New Estimates of the Elasticity of Substitution∗
world-wide: http://www.socsci.uci.edu/~duffy/papers/jeg2.pdf
https://www.weforum.org/agenda/2016/01/is-the-us-labour-share-as-constant-as-we-thought
https://www.economicdynamics.org/meetpapers/2015/paper_844.pdf
We find that IPP capital entirely explains the observed decline of the US labor share, which otherwise is secularly constant over the past 65 years for structures and equipment capital. The labor share decline simply reflects the fact that the US economy is undergoing a transition toward a larger IPP sector.
https://ideas.repec.org/p/red/sed015/844.html
http://www.robertdkirkby.com/blog/2015/summary-of-piketty-i/
https://www.brookings.edu/bpea-articles/deciphering-the-fall-and-rise-in-the-net-capital-share/
The Fall of the Labor Share and the Rise of Superstar Firms: http://www.nber.org/papers/w23396
The Decline of the U.S. Labor Share: https://www.brookings.edu/wp-content/uploads/2016/07/2013b_elsby_labor_share.pdf
Table 2 has industry disaggregation
Estimating the U.S. labor share: https://www.bls.gov/opub/mlr/2017/article/estimating-the-us-labor-share.htm

Why Workers Are Losing to Capitalists: https://www.bloomberg.com/view/articles/2017-09-20/why-workers-are-losing-to-capitalists
Automation and offshoring may be conspiring to reduce labor's share of income.
pdf  study  economics  growth-econ  econometrics  usa  data  empirical  analysis  labor  capital  econ-productivity  manifolds  magnitude  multi  world  🎩  piketty  econotariat  compensation  inequality  winner-take-all  org:ngo  org:davos  flexibility  distribution  stylized-facts  regularizer  hmm  history  mostly-modern  property-rights  arrows  invariance  industrial-org  trends  wonkish  roots  synthesis  market-power  efficiency  variance-components  business  database  org:gov  article  model-class  models  automation  nationalism-globalism  trade  news  org:mag  org:biz  org:bv  noahpinion  explanation  summary  methodology  density  polarization  map-territory  input-output 
july 2017 by nhaliday
Does Management Matter? Evidence from India
We have shown that management matters, with improvements in management practices improving plant-level outcomes. One response from economists might then be to argue that poor management can at most be a short-run problem, since in the long run better managed firms should take over the market. Yet many of our firms have been in business for 20 years and more.

One reason why better run firms do not dominate the market is constraints on growth derived from limited managerial span of control. In every firm in our sample only members of the owning family have positions with major decision-making power over finance, purchasing, operations or employment. Non-family members are given only lower-level managerial positions with authority only over basic day-to-day activities. The principal reason is that family members do not trust non-family members. For example, they are concerned if they let their plant managers procure yarn they may do so at inflated rates from friends and receive kick-backs.

A key reason for this inability to decentralize is the poor rule of law in India. Even if directors found managers stealing, their ability to successfully prosecute them and recover the assets is minimal because of the inefficiency of Indian civil courts. A compounding reason for the inability to decentralize in Indian firms is bad management practices, as this means the owners cannot keep good track of materials and finance, so may not even able to identify mismanagement or theft within their firms.30

As a result of this inability to delegate, firms can expand beyond the size that can be managed by a single director only if other family members are available to share directorial duties. Thus, an important predictor of firm size was the number of male family members of the owners. In particular, the number of brothers and sons of the leading director has a correlation of 0.689 with the total employment of the firm, compared to a correlation between employment and the average management score of 0.223. In fact the best managed firm in our sample had only one (large) production plant, in large part because the owner had no brothers or sons to help run a larger organization. This matches the ideas of the Lucas (1978) span of control model, that there are diminishing returns to how much additional productivity better management technology can generate from a single manager. In the Lucas model, the limits to firm growth restrict the ability of highly productive firms to drive lower productivity ones from the market. In our Indian firms, this span of control restriction is definitely binding, so unproductive firms are able to survive because more productive firms cannot expand.
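the Lucas (1978) span-of-control logic referenced above can be sketched in a few lines: a manager of ability z running n workers produces z * n**gamma with gamma < 1, so optimal firm size rises with ability but marginal returns to span diminish (parameter values are illustrative, not from the paper):

```python
def optimal_span(z, w=1.0, gamma=0.6):
    # maximize profit z * n**gamma - w * n over n
    # first-order condition gives n* = (gamma * z / w) ** (1 / (1 - gamma))
    return (gamma * z / w) ** (1.0 / (1.0 - gamma))

for z in [5.0, 10.0, 20.0]:
    print(z, round(optimal_span(z), 1))
# firm size grows with managerial ability, but the concavity of n**gamma
# caps how much output one manager can squeeze out of a bigger span --
# which is the sense in which a single family director binds firm growth
```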

https://twitter.com/pseudoerasmus/status/885915088951095296

http://marginalrevolution.com/marginalrevolution/2017/03/india-much-entrepreneurial-society-united-states-thats-problem.html
However, when we reverse the employment statistic (only ~15% of Indians work for a firm, compared to approximately 90% of US workers) we see the problem. Entrepreneurship in India isn't a choice, it's a requirement. Indian entrepreneurship is a consequence of India's failed economy. As I wrote in my Cato paper with Goldschlag, less developed countries in general, not just India, have more entrepreneurs.

...

The modal size of an Indian firm is 1 employee and the mean is just over 2. The mean number of employees in a US firm is closer to 20 but even though that is ten times the Indian number it obscures the real difference. The US has many small firms but what makes it different is that it also has large firms that employ lots of people. In fact, over half of all US workers are employed by the tiny minority (0.3%) of firms with over 500 employees.

blames colonialism, idk, might have contributed

Dishonesty and Selection into Public Service: Evidence from India: https://www.aeaweb.org/articles?id=10.1257/pol.20150029
Students in India who cheat on a simple laboratory task are more likely to prefer public sector jobs. This paper shows that cheating on this task predicts corrupt behavior by civil servants, implying that it is a meaningful predictor of future corruption. Students who demonstrate pro-social preferences are less likely to prefer government jobs, while outcomes on an explicit game and attitudinal measures to measure corruption do not systematically predict job preferences. _A screening process that chooses high-ability applicants would not alter the average propensity for corruption._ The findings imply that differential selection into government may contribute, in part, to corruption.

Where Does the Good Shepherd Go? Civic Virtue and Sorting into Public Sector Employment: http://repec.business.uzh.ch/RePEc/iso/leadinghouse/0134_lhwpaper.pdf
Our study extends the understanding of the motivational basis of public sector employment by considering civic virtue in addition to altruism and risk aversion and by investigating selection and socialization. Using a largely representative, longitudinal data set of employees in Germany including 63,101 observations of 13,673 different individuals, we find that civic virtue relates positively to public sector employment beyond altruism and risk aversion. We find evidence on selection and no evidence on socialization as an explanation for this result.

http://www.economist.com/news/books-and-arts/21716019-penchant-criminality-electoral-asset-india-worlds-biggest
Sadly, this is not a book about some small, shady corner of Indian politics: 34% of the members of parliament (MPs) in the Lok Sabha (lower house) have criminal charges filed against them; and the figure is rising (see chart). Some of the raps are peccadillos, such as rioting or unlawful assembly—par for the course in India’s raucous local politics. But over a fifth of MPs are in the dock for serious crimes, often facing reams of charges for anything from theft to intimidation and worse. (Because the Indian judicial system has a backlog of 31m cases, even serious crimes can take a decade or more to try, so few politicians have been convicted.) One can walk just about the whole way from Mumbai to Kolkata without stepping foot outside a constituency whose MP is facing a charge.

...

What is more surprising is that the supply of willing criminals-cum-politicians was met with eager demand from voters. Over the past three general elections, a candidate with a rap sheet of serious charges has had an 18% chance of winning his or her race, compared with 6% for a “clean” rival. Mr Vaishnav dispels the conventional wisdom that crooks win because they can get voters to focus on caste or some other sectarian allegiance, thus overlooking their criminality. If anything, the more serious the charge, the bigger the electoral boost, as politicians well know.

As so often happens in India, poverty plays a part. India is almost unique in having adopted universal suffrage while it was still very poor. The upshot has been that underdeveloped institutions fail to deliver what citizens vote for. Getting the state to perform its most basic functions—building a school, disbursing a subsidy, repaving a road—is a job that can require banging a few heads together. Sometimes literally. Who better to represent needy constituents in these tricky situations than someone who “knows how to get things done”? If the system doesn’t work for you, a thuggish MP can be a powerful ally.

http://www.bbc.com/news/magazine-36446652
study  economics  broad-econ  growth-econ  econometrics  field-study  india  asia  pseudoE  management  industrial-org  cultural-dynamics  institutions  trust  intervention  coordination  cohesion  n-factor  kinship  orient  multi  twitter  social  commentary  econotariat  spearhead  wealth-of-nations  pop-diff  pdf  scale  gender  leviathan  econ-productivity  marginal-rev  world  developing-world  comparison  usa  business  network-structure  labor  social-structure  lived-experience  entrepreneurialism  hmm  microfoundations  culture  corruption  anomie  crooked  human-capital  technocracy  government  data  crime  criminology  north-weingast-like  news  org:rec  org:biz  org:anglo  politics  populism  incentives  transportation  society  GT-101  integrity  🎩  endo-exo  cooperate-defect  ethics  attaq  selection  europe  the-great-west-whale  germanic  correlation  altruism  outcome-risk  uncertainty  impetus  longitudinal  civic  public-goodish  organizing  endogenous-exogenous 
july 2017 by nhaliday
Econometric Modeling as Junk Science
The Credibility Revolution in Empirical Economics: How Better Research Design Is Taking the Con out of Econometrics: https://www.aeaweb.org/articles?id=10.1257/jep.24.2.3

On data, experiments, incentives and highly unconvincing research – papers and hot beverages: https://papersandhotbeverages.wordpress.com/2015/10/31/on-data-experiments-incentives-and-highly-unconvincing-research/
In my view, it has just to do with the fact that academia is a peer monitored organization. In the case of (bad) data collection papers, issues related to measurement are typically boring. They are relegated to appendices, no one really has an incentive to monitor it seriously. The problem is similar in formal theory: no one really goes through the algebra in detail, but it is in principle feasible to do it, and, actually, sometimes these errors are detected. If discussing the algebra of a proof is almost unthinkable in a seminar, going into the details of data collection, measurement and aggregation is not only hard to imagine, but probably intrinsically infeasible.

Something different happens for the experimentalist people. As I was saying, I feel we have come to a point in which many papers are evaluated based on the cleverness and originality of the research design (“Using the World Cup qualifiers as an instrument for patriotism!? Woaw! how cool/crazy is that! I wish I had had that idea”). The sexiness of the identification strategy has too often become a goal in itself. When your peers monitor you paying more attention to the originality of the identification strategy than to the research question, you probably have an incentive to mine reality for ever crazier discontinuities. It is true methodologists have been criticized in the past for analogous reasons, such as being guided by the desire to increase mathematical complexity without a clear benefit. But, if you work with pure formal theory or statistical theory, your work is not meant to immediately answer question about the real world, but instead to serve other researchers in their quest. This is something that can, in general, not be said of applied CI work.

https://twitter.com/pseudoerasmus/status/662007951415238656
This post should have been entitled “Zombies who only think of their next cool IV fix”
https://twitter.com/pseudoerasmus/status/662692917069422592
massive lust for quasi-natural experiments, regression discontinuities
barely matters if the effects are not all that big
I suppose even the best of things must reach their decadent phase; methodological innov. to manias……

https://twitter.com/cblatts/status/920988530788130816
Following this "collapse of small-N social psych results" business, where do I predict econ will collapse? I see two main contenders.
One is lab studies. I dallied with these a few years ago in a Kenya lab. We ran several pilots of N=200 to figure out the best way to treat
and to measure the outcome. Every pilot gave us a different stat sig result. I could have written six papers concluding different things.
I gave up more skeptical of these lab studies than ever before. The second contender is the long run impacts literature in economic history
We should be very suspicious since we never see a paper showing that a historical event had no effect on modern day institutions or dvpt.
On the one hand I find these studies fun, fascinating, and probably true in a broad sense. They usually reinforce a widely believed history
argument with interesting data and a cute empirical strategy. But I don't think anyone believes the standard errors. There's probably a HUGE
problem of nonsignificant results staying in the file drawer. Also, there are probably data problems that don't get revealed, as we see with
the recent Piketty paper (http://marginalrevolution.com/marginalrevolution/2017/10/pikettys-data-reliable.html). So I take that literature with a vat of salt, even if I enjoy and admire the works
I used to think field experiments would show little consistency in results across place. That external validity concerns would be fatal.
In fact the results across different samples and places have proven surprisingly similar across places, and added a lot to general theory
Last, I've come to believe there is no such thing as a useful instrumental variable. The ones that actually meet the exclusion restriction
are so weird & particular that the local treatment effect is likely far different from the average treatment effect in non-transparent ways.
Most of the other IVs don't plausibly meet the exclusion restriction. I mean, we should be concerned when the IV estimate is always 10x
larger than the OLS coefficient. Thus I find myself much more persuaded by simple natural experiments that use OLS, diff in diff, or
discontinuities, alongside randomized trials.

What do others think are the cliffs in economics?
PS All of these apply to political science too. Though I have a special extra target in poli sci: survey experiments! A few are good. I like
Dan Corstange's work. But it feels like 60% of dissertations these days are experiments buried in a survey instrument that measure small
changes in response. These at least have large N. But these are just uncontrolled labs, with negligible external validity in my mind.
The good ones are good. This method has its uses. But it's being way over-applied. More people have to make big and risky investments in big
natural and field experiments. Time to raise expectations and ambitions. This expectation bar, not technical ability, is the big advantage
economists have over political scientists when they compete in the same space.
(Ok. So are there any friends and colleagues I haven't insulted this morning? Let me know and I'll try my best to fix it with a screed)

HOW MUCH SHOULD WE TRUST DIFFERENCES-IN-DIFFERENCES ESTIMATES?: https://economics.mit.edu/files/750
Most papers that employ Differences-in-Differences estimation (DD) use many years of data and focus on serially correlated outcomes but ignore that the resulting standard errors are inconsistent. To illustrate the severity of this issue, we randomly generate placebo laws in state-level data on female wages from the Current Population Survey. For each law, we use OLS to compute the DD estimate of its “effect” as well as the standard error of this estimate. These conventional DD standard errors severely understate the standard deviation of the estimators: we find an “effect” significant at the 5 percent level for up to 45 percent of the placebo interventions. We use Monte Carlo simulations to investigate how well existing methods help solve this problem. Econometric corrections that place a specific parametric form on the time-series process do not perform well. Bootstrap (taking into account the auto-correlation of the data) works well when the number of states is large enough. Two corrections based on asymptotic approximation of the variance-covariance matrix work well for moderate numbers of states and one correction that collapses the time series information into a “pre” and “post” period and explicitly takes into account the effective sample size works well even for small numbers of states.
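The paper's placebo exercise can be sketched in a few lines of numpy: simulate serially correlated state panels with no true treatment effect, assign fake "laws" to random states at random years, run the DD regression with conventional OLS standard errors, and count rejections. The parameter values here (20 states, 20 years, AR(1) with rho = 0.8) are illustrative assumptions, not the paper's actual CPS setup, but they reproduce the qualitative over-rejection.

```python
import numpy as np

def placebo_dd_rejection_rate(n_states=20, n_years=20, rho=0.8,
                              n_sims=200, seed=0):
    """Share of placebo 'laws' declared significant at the 5% level by a
    conventional (non-clustered) OLS t-test on a DD regression with state
    and year fixed effects and AR(1) outcomes. True effect is zero."""
    rng = np.random.default_rng(seed)
    state_id = np.repeat(np.arange(n_states), n_years)
    year_id = np.tile(np.arange(n_years), n_states)
    # fixed-effect dummies, dropping one state and one year to avoid collinearity
    S = (state_id[:, None] == np.arange(1, n_states)).astype(float)
    Y = (year_id[:, None] == np.arange(1, n_years)).astype(float)
    rejections = 0
    for _ in range(n_sims):
        # AR(1) outcome within each state: serially correlated, no treatment effect
        eps = rng.standard_normal((n_states, n_years))
        y2d = np.empty_like(eps)
        y2d[:, 0] = eps[:, 0]
        for t in range(1, n_years):
            y2d[:, t] = rho * y2d[:, t - 1] + eps[:, t]
        y = y2d.ravel()
        # placebo law: half the states "treated" from a random mid-sample year on
        treated = rng.choice(n_states, n_states // 2, replace=False)
        law_year = rng.integers(n_years // 4, 3 * n_years // 4)
        D = (np.isin(state_id, treated) & (year_id >= law_year)).astype(float)
        X = np.column_stack([np.ones(len(y)), S, Y, D])
        beta = np.linalg.lstsq(X, y, rcond=None)[0]
        resid = y - X @ beta
        sigma2 = resid @ resid / (len(y) - X.shape[1])
        se_D = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[-1, -1])
        if abs(beta[-1] / se_D) > 1.96:           # nominal 5% two-sided test
            rejections += 1
    return rejections / n_sims
```

Running this gives a rejection rate several times the nominal 5%, which is the abstract's point: conventional DD standard errors severely understate the variability of the estimator when outcomes are serially correlated.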

‘METRICS MONDAY: 2SLS–CHRONICLE OF A DEATH FORETOLD: http://marcfbellemare.com/wordpress/12733
As it turns out, Young finds that
1. Conventional tests tend to overreject the null hypothesis that the 2SLS coefficient is equal to zero.
2. 2SLS estimates are falsely declared significant one third to one half of the time, depending on the method used for bootstrapping.
3. The 99-percent confidence intervals (CIs) of those 2SLS estimates include the OLS point estimate over 90 percent of the time. They include the full OLS 99-percent CI over 75 percent of the time.
4. 2SLS estimates are extremely sensitive to outliers. Removing just one outlying cluster or observation renders almost half of 2SLS results insignificant. Things get worse when removing two outlying clusters or observations, as over 60 percent of 2SLS results then become insignificant.
5. Using a Durbin-Wu-Hausman test, less than 15 percent of regressions can reject the null that OLS estimates are unbiased at the 1-percent level.
6. 2SLS has considerably higher mean squared error than OLS.
7. In one third to one half of published results, the null that the IVs are totally irrelevant cannot be rejected, and so the correlation between the endogenous variable(s) and the IVs is due to finite sample correlation between them.
8. Finally, fewer than 10 percent of 2SLS estimates reject instrument irrelevance and the absence of OLS bias at the 1-percent level using a Durbin-Wu-Hausman test. It gets much worse–fewer than 5 percent–if you add in the requirement that the 2SLS CI exclude the OLS estimate.
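For readers who want the mechanics behind the OLS-vs-2SLS comparisons above, here is a minimal numpy sketch of the textbook two-stage least squares estimator — not Young's bootstrap procedure — on a made-up data-generating process where an omitted confounder u biases OLS upward and a valid instrument z recovers the true slope. All coefficients and the DGP are illustrative assumptions.

```python
import numpy as np

def two_sls(y, x, z):
    """Two-stage least squares for one endogenous regressor x and one
    instrument z (intercepts included). Returns the 2SLS slope on x."""
    Z = np.column_stack([np.ones_like(z), z])
    # first stage: fitted values of x from the instrument
    x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    # second stage: regress y on the fitted values
    Xh = np.column_stack([np.ones_like(x), x_hat])
    return np.linalg.lstsq(Xh, y, rcond=None)[0][1]

rng = np.random.default_rng(1)
n = 10_000
u = rng.standard_normal(n)                   # unobserved confounder
z = rng.standard_normal(n)                   # instrument: affects x, not y directly
x = 0.5 * z + u + rng.standard_normal(n)     # endogenous regressor
y = 2.0 * x + 3.0 * u + rng.standard_normal(n)   # true slope on x is 2.0

beta_ols = np.linalg.lstsq(np.column_stack([np.ones(n), x]), y, rcond=None)[0][1]
beta_iv = two_sls(y, x, z)
# beta_ols is biased well above 2 (u enters both equations); beta_iv is near 2
```

The gap between the two estimates in this toy setup is exactly the kind of OLS-vs-IV discrepancy Young's findings ask us to scrutinize: when the instrument is weak or invalid, the 2SLS estimate inherits enormous variance and outlier sensitivity without delivering the bias correction it promises.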

Methods Matter: P-Hacking and Causal Inference in Economics*: http://ftp.iza.org/dp11796.pdf
Applying multiple methods to 13,440 hypothesis tests reported in 25 top economics journals in 2015, we show that selective publication and p-hacking is a substantial problem in research employing DID and (in particular) IV. RCT and RDD are much less problematic. Almost 25% of claims of marginally significant results in IV papers are misleading.

https://twitter.com/NoamJStein/status/1040887307568664577
Ever since I learned social science is completely fake, I've had a lot more time to do stuff that matters, like deadlifting and reading about Mediterranean haplogroups
--
Wait, so, from fakest to realest IV>DD>RCT>RDD? That totally matches my impression.

https://twitter.com/wwwojtekk/status/1190731344336293889
https://archive.is/EZu0h
Great (not completely new but still good to have it in one place) discussion of RCTs and inference in economics by Deaton, my favorite sentences (more general than just about RCT) below
Randomization in the tropics revisited: a theme and eleven variations: https://scholar.princeton.edu/sites/default/files/deaton/files/deaton_randomization_revisited_v3_2019.pdf
org:junk  org:edu  economics  econometrics  methodology  realness  truth  science  social-science  accuracy  generalization  essay  article  hmm  multi  study  🎩  empirical  causation  error  critique  sociology  criminology  hypothesis-testing  econotariat  broad-econ  cliometrics  endo-exo  replication  incentives  academia  measurement  wire-guided  intricacy  twitter  social  discussion  pseudoE  effect-size  reflection  field-study  stat-power  piketty  marginal-rev  commentary  data-science  expert-experience  regression  gotchas  rant  map-territory  pdf  simulation  moments  confidence  bias-variance  stats  endogenous-exogenous  control  meta:science  meta-analysis  outliers  summary  sampling  ensembles  monte-carlo  theory-practice  applicability-prereqs  chart  comparison  shift  ratty  unaffiliated  garett-jones 
june 2017 by nhaliday
Secular rise in economically valuable personality traits
small decline starting at YOB~1980:
Growing evidence suggests that the Flynn effect has ended and may have reversed in Western Europe (32, 33, 44–46). The last three birth cohorts in our data coincide with the peak in cognitive test scores in Finland (31). There is no clear trend for personality scores between these cohorts, which suggests that the end of the Flynn effect could also be reflected in personality traits. However, the data on these three birth cohorts are not fully comparable with our main data, and thus, it is not possible to make strong conclusions from them.
pdf  study  org:nat  psychology  cog-psych  social-psych  personality  iq  flynn  trends  dysgenics  hmm  rot  discipline  leadership  extra-introversion  gender  class  compensation  labor  europe  nordic  microfoundations 
june 2017 by nhaliday
Double world GDP | Open Borders: The Case
Economics and Emigration: Trillion-Dollar Bills on the Sidewalk?: https://www.aeaweb.org/articles?id=10.1257/jep.25.3.83
https://openborders.info/innovation-case/
https://www.economist.com/news/world-if/21724907-yes-it-would-be-disruptive-potential-gains-are-so-vast-objectors-could-be-bribed
The Openness-Equality Trade-Off in Global Redistribution: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2509305
https://www.wsj.com/articles/opening-our-borders-would-overwhelm-america-1492366053
Immigration, Justice, and Prosperity: http://quillette.com/2017/07/29/immigration-justice-prosperity/

Some Countries Are Much Richer Than Others. Is That Unjust?: http://quillette.com/2017/07/23/countries-much-richer-others-unjust/
But we shouldn’t automatically assume that wealth disparities across the world are unjust and that the developed world owes aid as a matter of justice. This is because the best way to make sense of the Great Divergence is that certain economic and political institutions, namely those that facilitated economic growth, arose in some countries and not others. Thus perhaps the benevolent among us should also try to encourage – by example rather than force – the development of such institutions in places where they do not exist.

An Argument Against Open Borders and Liberal Hubris: http://quillette.com/2017/08/27/argument-open-borders-liberal-hubris/
We do not have open borders but we are experiencing unprecedented demographic change. What progressives should remember is that civilisation is not a science laboratory. The consequences of failed experiments endure. That is the main virtue of gradual change; we can test new waters and not leap into their depths.

A Radical Solution to Global Income Inequality: Make the U.S. More Like Qatar: https://newrepublic.com/article/120179/how-reduce-global-income-inequality-open-immigration-policies

Why nation-states are good: https://aeon.co/essays/capitalists-need-the-nation-state-more-than-it-needs-them
The nation-state remains the best foundation for capitalism, and hyper-globalisation risks destroying it
- Dani Rodrik
Given the non-uniqueness of practices and institutions enabling capitalism, it’s not surprising that nation-states also resolve key social trade-offs differently. The world does not agree on how to balance equality against opportunity, economic security against innovation, health and environmental risks against technological innovation, stability against dynamism, economic outcomes against social and cultural values, and many other consequences of institutional choice. Developing nations have different institutional requirements than rich nations. There are, in short, strong arguments against global institutional harmonisation.
org:ngo  wonkish  study  summary  commentary  economics  growth-econ  policy  migration  econ-metrics  prediction  counterfactual  intervention  multi  news  org:rec  org:anglo  org:biz  nl-and-so-can-you  rhetoric  contrarianism  politics  reflection  usa  current-events  equilibrium  org:mag  org:popup  spearhead  institutions  hive-mind  wealth-of-nations  divergence  chart  links  innovation  entrepreneurialism  business  human-capital  regularizer  attaq  article  microfoundations  idk  labor  class  macro  insight  world  hmm  proposal  inequality  nationalism-globalism  developing-world  whiggish-hegelian  albion  us-them  tribalism  econotariat  cracker-econ  essay  big-peeps  unintended-consequences  humility  elite  vampire-squid  markets  capitalism  trade  universalism-particularism  exit-voice  justice  diversity  homo-hetero 
june 2017 by nhaliday
Haecceity - Wikipedia
Haecceity (/hɛkˈsiːɪti, hiːk-/; from the Latin haecceitas, which translates as "thisness") is a term from medieval scholastic philosophy, first coined by followers of Duns Scotus to denote a concept that he seems to have originated: the discrete qualities, properties or characteristics of a thing that make it a particular thing. Haecceity is a person's or object's thisness, the individualising difference between the concept "a man" and the concept "Socrates" (i.e., a specific person).[1] Haecceity is a literal translation of the equivalent term in Aristotle's Greek to ti esti (τὸ τί ἐστι)[2] or "the what (it) is."
jargon  philosophy  hmm  idk  wiki  reference  concept  conceptual-vocab 
june 2017 by nhaliday
Suspicious Banana on Twitter: ""platonic forms" seem more sinister when you realize that integers were reaching down into his head and giving him city planning advice https://t.co/4qaTdwOlry"
https://en.wikipedia.org/wiki/5040_(number)
Plato mentions in his Laws that 5040 is a convenient number to use for dividing many things (including both the citizens and the land of a state) into lesser parts. He remarks that this number can be divided by all the (natural) numbers from 1 to 12 with the single exception of 11 (however, it is not the smallest number to have this property; 2520 is). He rectifies this "defect" by suggesting that two families could be subtracted from the citizen body to produce the number 5038, which is divisible by 11. Plato also took notice of the fact that 5040 can be divided by 12 twice over. Indeed, Plato's repeated insistence on the use of 5040 for various state purposes is so evident that it is written, "Plato, writing under Pythagorean influences, seems really to have supposed that the well-being of the city depended almost as much on the number 5040 as on justice and moderation."[1]
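The passage's divisibility claims are easy to verify mechanically; a quick Python check of the specific numbers Wikipedia cites:

```python
def divisible_1_to_12_except_11(n):
    """True if n is divisible by every integer from 1 to 12 other than 11."""
    return all(n % k == 0 for k in range(1, 13) if k != 11)

assert divisible_1_to_12_except_11(5040)
assert 5040 % 11 == 2                  # the "defect": 11 does not divide 5040
assert 5038 % 11 == 0                  # Plato's fix: subtract two families
assert (5040 // 12) % 12 == 0          # 5040 is divisible by 12 "twice over"
# the smaller number with the same property that the article mentions
smallest = next(n for n in range(1, 5041) if divisible_1_to_12_except_11(n))
assert smallest == 2520
```

(2520 is just lcm(1, ..., 10, 12), which is why 5040 = 2 x 2520 inherits the property.)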

https://en.wikipedia.org/wiki/Plato%27s_number
"Now for divine begettings there is a period comprehended by a perfect number, and for mortal by the first in which augmentations dominating and dominated when they have attained to three distances and four limits of the assimilating and the dissimilating, the waxing and the waning, render all things conversable and commensurable [546c] with one another, whereof a basal four-thirds wedded to the pempad yields two harmonies at the third augmentation, the one the product of equal factors taken one hundred times, the other of equal length one way but oblong,-one dimension of a hundred numbers determined by the rational diameters of the pempad lacking one in each case, or of the irrational lacking two; the other dimension of a hundred cubes of the triad. And this entire geometrical number is determinative of this thing, of better and inferior births."[3]

Shortly after Plato's time his meaning apparently did not cause puzzlement as Aristotle's casual remark attests.[6] Half a millennium later, however, it was an enigma for the Neoplatonists, who had a somewhat mystic penchant and wrote frequently about it, proposing geometrical and numerical interpretations. Next, for nearly a thousand years, Plato's texts disappeared and it is only in the Renaissance that the enigma briefly resurfaced. During the 19th century, when classical scholars restored original texts, the problem reappeared. Schleiermacher interrupted his edition of Plato for a decade while attempting to make sense of the paragraph. Victor Cousin inserted a note that it has to be skipped in his French translation of Plato's works. In the early 20th century, scholarly findings suggested a Babylonian origin for the topic.[7]

https://en.wikipedia.org/wiki/Pythagoreanism
https://www.jstor.org/stable/638781

Socrates: Surely we agree nothing more virtuous than sacrificing each newborn infant while reciting the factors of 39,916,800?

Turgidas: Uh

different but interesting: https://aeon.co/essays/can-we-hope-to-understand-how-the-greeks-saw-their-world
Another explanation for the apparent oddness of Greek perception came from the eminent politician and Hellenist William Gladstone, who devoted a chapter of his Studies on Homer and the Homeric Age (1858) to ‘perceptions and use of colour’. He too noticed the vagueness of the green and blue designations in Homer, as well as the absence of words covering the centre of the ‘blue’ area. Where Gladstone differed was in taking as normative the Newtonian list of colours (red, orange, yellow, green, blue, indigo, violet). He interpreted the Greeks’ supposed linguistic poverty as deriving from an imperfect discrimination of prismatic colours. The visual organ of the ancients was still in its infancy, hence their strong sensitivity to light rather than hue, and the related inability to clearly distinguish one hue from another. This argument fit well with the post-Darwinian climate of the late 19th century, and came to be widely believed. Indeed, it prompted Nietzsche’s own judgment, and led to a series of investigations that sought to prove that the Greek chromatic categories do not fit in with modern taxonomies.

Today, no one thinks that there has been a stage in the history of humanity when some colours were ‘not yet’ being perceived. But thanks to our modern ‘anthropological gaze’ it is accepted that every culture has its own way of naming and categorising colours. This is not due to varying anatomical structures of the human eye, but to the fact that different ocular areas are stimulated, which triggers different emotional responses, all according to different cultural contexts.
postrat  carcinisation  twitter  social  discussion  lol  hmm  :/  history  iron-age  mediterranean  the-classics  cocktail  trivia  quantitative-qualitative  mystic  simler  weird  multi  wiki  👽  dennett  article  philosophy  alien-character  news  org:mag  org:popup  literature  quotes  poetry  concrete  big-peeps  nietzschean  early-modern  europe  germanic  visuo  language  foreign-lang  embodied  oceans  h2o  measurement  fluid  forms-instances  westminster  lexical 
june 2017 by nhaliday
Living with Inequality - Reason.com
That's why I propose the creation of the Tenth Commandment Club. The tenth commandment—"You shall not covet"—is a foundation of social peace. The Nobel Laureate economist Vernon Smith noted the tenth commandment along with the eighth (you shall not steal) in his Nobel toast, saying that they "provide the property right foundations for markets, and warned that petty distributional jealousy must not be allowed to destroy" those foundations. If academics, pundits, and columnists would avowedly reject covetousness, would openly reject comparisons between the average (extremely fortunate) American and the average billionaire, would mock people who claimed that frugal billionaires are a systematic threat to modern life, then soon our time could be spent discussing policy issues that really matter.

Enlightenment -> social justice: https://twitter.com/GarettJones/status/866448789825105920
US reconquista: https://twitter.com/AngloRemnant/status/865980569397731329
https://archive.is/SR8OI
envy and psychology textbooks: https://twitter.com/tcjfs/status/887115182257917952

various Twitter threads: https://twitter.com/search?q=GarettJones+inequality

http://www.npr.org/sections/goatsandsoda/2017/09/13/542261863/cash-aid-changed-this-family-s-life-so-why-is-their-government-skeptical

Civilization means saying no to the poor: https://bonald.wordpress.com/2017/11/18/civilization-means-saying-no-to-the-poor/
Although I instinctively dislike him, I do agree with Professor Scott on one point: “exploitation” really is the essence of civilization, whether by exploitation one simply means authority as described by those insensible to its moral force or more simply the refusal of elites to divulge their resources to the poor.

In fact, no human creation of lasting worth could ever be made without a willingness to tell the poor to *** off. If we really listened to the demands of social justice, if we really let compassion be our guide, we could have no art, no music, no science, no religion, no philosophy, no architecture beyond the crudest shelters. The poor are before us, their need perpetually urgent. It is inexcusable for us ever to build a sculpture, a cathedral, a particle accelerator. And the poor, we have it on two good authorities (the other being common sense), will be with us always. What we give for their needs today will have disappeared tomorrow, and they will be hungry again. Imagine if some Savonarola had come to Florence a century or two earlier and convinced the Florentine elite to open their hearts and their wallets to the poor in preference for worldly vanities. All that wealth would have been squandered on the poor and would have disappeared without a trace. Instead, we got the Renaissance.

https://twitter.com/tcjfs/status/904169207293730816
https://archive.is/tYZAi
Reward the lawless; punish the law abiding. Complete inversion which will eventually drive us back to the 3rd world darkness whence we came.

https://twitter.com/tcjfs/status/917492530308112384
https://archive.is/AeXEs
This idea that a group is only honorable in virtue of their victimization is such a pernicious one.
for efficiency, just have "Victims of WASPs Day." A kind of All Victims' Day. Otherwise U.S. calendar will be nothing but days of grievance.
Bonald had a good bit on this (of course).
https://bonald.wordpress.com/2016/08/05/catholics-must-resist-cosmopolitan-universalism/
Steve King is supposedly stupid for claiming that Western Civilization is second to none. One might have supposed that Catholics would take some pride as Catholics in Western civilization, a thing that was in no small part our creation. Instead, the only history American Catholics are to remember is being poor and poorly regarded recent immigrants in America.

https://twitter.com/AngloRemnant/status/917612415243706368
https://archive.is/NDjwK
Don't even bother with the rat race if you value big family. I won the race, & would've been better off as a dentist in Peoria.
.. College prof in Athens, OH. Anesthesiologist in Knoxville. State govt bureaucrat in Helena.
.. This is the formula: Middle America + regulatory capture white-collar job. anyone attempting real work in 2017 america is a RETARD.
.. Also unclear is why anyone in the US would get married. knock your girl up and put that litter on Welfare.
You: keep 50% of your earnings after taxes. 25% is eaten by cost of living. save the last 25%, hope our bankrupt gov doesn't expropriate l8r
The main difference in this country between welfare and 7-figure income is the quality of your kitchen cabinets.

wtf: https://www.bls.gov/ooh/healthcare/dentists.htm
Median pay: $159,770 per year ($76.81 per hour)
Job outlook: 18% growth (much faster than average)

http://study.com/how_long_does_it_take_to_be_a_dentist.html
Admission into dental school is highly competitive. Along with undergraduate performance, students are evaluated for their Dental Admissions Test (DAT) scores. Students have the opportunity to take this test before graduating college. After gaining admission into dental school, students can go on to complete four years of full-time study to earn the Doctor of Dental Surgery or Doctor of Dental Medicine. Students typically spend the first two years learning general and dental science in classroom and laboratory settings. They may take courses like oral anatomy, histology and pathology. In the final years, dental students participate in clinical practicums, gaining supervised, hands-on experience in dental clinics.

https://twitter.com/AngloRemnant/status/985935089250062337
https://archive.is/yIXfk
https://archive.is/Qscq7
https://archive.is/IQQhU
Career ideas for the minimally ambitious dissident who wants to coast, shitpost, & live well:
- econ phd -> business school prof
- dentistry
- 2 years of banking/consulting -> F500 corp dev or strategy
- gov't bureaucrat in a state capital
--
Bad career ideas, for contrast:
- law
- humanities prof
- IT
- anything 'creative'

[ed.: Personally, I'd also throw in 'actuary' (though keep in mind ~20% risk of automation).]

https://twitter.com/DividualsTweet/status/1143214978142527488
https://archive.is/yzgVA
Best life advice: try getting a boring, not very high status but decently paying job. Like programming payroll software. SJWs are uninterested.
news  org:mag  rhetoric  contrarianism  econotariat  garett-jones  economics  growth-econ  piketty  inequality  winner-take-all  morality  values  critique  capital  capitalism  class  envy  property-rights  justice  religion  christianity  theos  aphorism  egalitarianism-hierarchy  randy-ayndy  aristos  farmers-and-foragers  redistribution  right-wing  peace-violence  🎩  multi  twitter  social  discussion  reflection  ideology  democracy  civil-liberty  welfare-state  history  early-modern  mostly-modern  politics  polisci  government  enlightenment-renaissance-restoration-reformation  counter-revolution  unaffiliated  gnon  modernity  commentary  psychology  cog-psych  social-psych  academia  westminster  social-science  biases  bootstraps  search  left-wing  discrimination  order-disorder  civilization  current-events  race  identity-politics  incentives  law  leviathan  social-norms  rot  fertility  strategy  planning  hmm  long-term  career  s-factor  regulation  managerial-state  dental  supply-demand  progression  org:gov 
june 2017 by nhaliday

gavisti  gbooks  gedanken  gelman  gender  gender-diff  gene-drift  gene-flow  general-survey  generalization  genetic-correlation  genetic-load  genetics  genomics  geoengineering  geography  geometry  geopolitics  germanic  get-fit  giants  gibbon  gig-econ  gilens-page  git  github  gnon  gnosis-logos  gnxp  god-man-beast-victim  golang  good-evil  google  gotchas  government  gowers  grad-school  gradient-descent  graph-theory  graphical-models  graphics  graphs  gray-econ  great-powers  greg-egan  gregory-clark  grokkability  grokkability-clarity  ground-up  group-level  group-selection  growth  growth-econ  growth-mindset  grugq  GT-101  gtd  guessing  guide  guilt-shame  GWAS  gwern  GxE  h2o  habit  hacker  haidt  hanson  happy-sad  hard-tech  hardness  hardware  hari-seldon  haskell  hci  health  healthcare  heavy-industry  heavyweights  henrich  hetero-advantage  heterodox  heuristic  hg  hi-order-bits  hidden-motives  hierarchy  high-variance  higher-ed  hiit  history  hive-mind  hmm  hn  homepage  homo-hetero  honor  horror  housing  howto  hsu  huge-data-the-biggest  human-bean  human-capital  human-ml  human-study  humanity  humility  hypochondria  hypocrisy  hypothesis-testing  ide  ideas  identity  identity-politics  ideology  idk  iidness  illusion  immune  impact  impetus  impro  incentives  increase-decrease  india  indie  individualism-collectivism  industrial-org  industrial-revolution  inequality  inference  info-dynamics  info-econ  info-foraging  infographic  information-theory  infrastructure  inhibition  init  inner-product  innovation  input-output  insight  instinct  institutions  insurance  integration-extension  integrity  intel  intellectual-property  intelligence  interdisciplinary  interests  interface  interface-compatibility  internet  interpretation  intersection-connectedness  intervention  interview  interview-prep  intricacy  intuition  invariance  investigative-journo  investing  ios  iq  iran  iraq-syria  iron-age  islam  
israel  isteveish  iteration-recursion  janus  japan  jargon  javascript  jobs  journos-pundits  judaism  judgement  julia  justice  jvm  keyboard  kinship  knowledge  korea  krugman  kumbaya-kult  labor  land  language  large-factor  latent-variables  latex  latin-america  lattice  law  leadership  leaks  learning  learning-theory  lectures  lee-kuan-yew  left-wing  legacy  legibility  len:long  len:short  lens  lesswrong  let-me-see  letters  levers  leviathan  lexical  libraries  life-history  lifehack  lifts-projections  limits  linear-algebra  linearity  liner-notes  linguistics  links  linux  list  literature  lived-experience  llvm  lmao  local-global  logic  logistics  logos  lol  long-short-run  long-term  longevity  longform  longitudinal  love-hate  low-hanging  lower-bounds  machiavelli  machine-learning  macro  madisonian  magnitude  malaise  malthus  management  managerial-state  manifolds  map-territory  maps  marginal  marginal-rev  market-failure  market-power  markets  markov  martial  matching  math  math.AT  math.CA  math.CO  math.DS  math.GR  math.NT  mathtariat  matrix-factorization  maxim-gun  meaningness  measure  measurement  mechanics  media  medicine  medieval  mediterranean  memes(ew)  MENA  mena4  mendel-randomization  mental-math  meta-analysis  meta:medicine  meta:prediction  meta:reading  meta:research  meta:rhetoric  meta:science  meta:war  metabolic  metabuch  metal-to-virtual  metameta  methodology  metrics  michael-nielsen  micro  microbiz  microfoundations  microsoft  midwest  migrant-crisis  migration  military  mindful  minimalism  minimum-viable  miri-cfar  mit  mixing  ML-MAP-E  mobile  mobility  model-class  model-organism  models  modernity  mokyr-allen-mccloskey  moloch  moments  monetary-fiscal  money  money-for-time  monte-carlo  mooc  morality  mostly-modern  motivation