nhaliday + comparison   464

c++ - Which is faster: Stack allocation or Heap allocation - Stack Overflow
On my machine, using g++ 3.4.4 on Windows, I get "0 clock ticks" for both stack and heap allocation for anything less than 100000 allocations, and even then I get "0 clock ticks" for stack allocation and "15 clock ticks" for heap allocation. When I measure 10,000,000 allocations, stack allocation takes 31 clock ticks and heap allocation takes 1562 clock ticks.

so maybe around a 50x difference (1562/31)? what does that work out to in terms of total workload?
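
[ed.: for concreteness, a minimal sketch (my code, not the answerer's) of the kind of loop being timed. Caveat: optimizers can elide both loops entirely, so treat any numbers as suggestive only.]

// stack vs. heap allocation microbenchmark sketch
#include <chrono>
#include <cstdio>

volatile char sink; // keep the loop bodies observable

int main() {
    const int N = 10000000;
    using clk = std::chrono::steady_clock;

    auto t0 = clk::now();
    for (int i = 0; i < N; ++i) {
        char buf[64];           // stack: one stack-pointer bump
        buf[0] = 1;
        sink = buf[0];
    }
    auto t1 = clk::now();
    for (int i = 0; i < N; ++i) {
        char* p = new char[64]; // heap: a call into the allocator
        p[0] = 1;
        sink = p[0];
        delete[] p;
    }
    auto t2 = clk::now();

    auto ms = [](clk::duration d) {
        return std::chrono::duration_cast<std::chrono::milliseconds>(d).count();
    };
    std::printf("stack: %lld ms, heap: %lld ms\n",
                (long long)ms(t1 - t0), (long long)ms(t2 - t1));
}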

hmm:
http://vlsiarch.eecs.harvard.edu/wp-content/uploads/2017/02/asplos17mallacc.pdf
Recent work shows that dynamic memory allocation consumes nearly 7% of all cycles in Google datacenters.

That's not too bad actually. Seems like I shouldn't worry about shifting from heap to stack/globals unless profiling says it's important, particularly for non-oly stuff.
q-n-a  stackex  programming  c(pp)  systems  memory-management  performance  intricacy  comparison  benchmarks  data  objektbuch  empirical  google  papers  nibble  time  measure  pro-rata  distribution  multi  pdf  oly-programming 
9 hours ago by nhaliday
LeetCode - The World's Leading Online Programming Learning Platform
very much targeted toward interview prep
https://www.quora.com/Is-LeetCode-Online-Judges-premium-membership-really-worth-it
This data is especially valuable because you get to know a company's interview style beforehand. For example, most questions that appeared in Facebook interviews have short solutions, typically not more than 30 lines of code. Their interview process focuses on your ability to write clean, concise code. Google-style interviews, on the other hand, lean more on the analytical side and are algorithm-heavy, typically with multiple solutions to a question - each with a different run time complexity.
programming  tech  career  working-stiff  recruiting  interview-prep  algorithms  problem-solving  oly-programming  multi  q-n-a  qra  comparison  stylized-facts  facebook  google  cost-benefit  homo-hetero 
2 days ago by nhaliday
C++ Core Guidelines
This document is a set of guidelines for using C++ well. The aim of this document is to help people to use modern C++ effectively. By “modern C++” we mean effective use of the ISO C++ standard (currently C++17, but almost all of our recommendations also apply to C++14 and C++11). In other words, what would you like your code to look like in 5 years’ time, given that you can start now? In 10 years’ time?

https://isocpp.github.io/CppCoreGuidelines/
“Within C++ is a smaller, simpler, safer language struggling to get out.” – Bjarne Stroustrup

...

The guidelines are focused on relatively higher-level issues, such as interfaces, resource management, memory management, and concurrency. Such rules affect application architecture and library design. Following the rules will lead to code that is statically type safe, has no resource leaks, and catches many more programming logic errors than is common in code today. And it will run fast - you can afford to do things right.

We are less concerned with low-level issues, such as naming conventions and indentation style. However, no topic that can help a programmer is out of bounds.

Our initial set of rules emphasize safety (of various forms) and simplicity. They may very well be too strict. We expect to have to introduce more exceptions to better accommodate real-world needs. We also need more rules.

...

The rules are designed to be supported by an analysis tool. Violations of rules will be flagged with references (or links) to the relevant rule. We do not expect you to memorize all the rules before trying to write code.

contrary:
https://aras-p.info/blog/2018/12/28/Modern-C-Lamentations/
This will be a long wall of text, and kinda random! My main points are:
1. C++ compile times are important,
2. Non-optimized build performance is important,
3. Cognitive load is important. I don’t expand much on this here, but if a programming language or a library makes me feel stupid, then I’m less likely to use it or like it. C++ does that a lot :)
programming  engineering  pls  best-practices  systems  c(pp)  guide  metabuch  objektbuch  reference  cheatsheet  elegance  frontier  libraries  intricacy  advanced  advice  recommendations  big-picture  novelty  lens  philosophy  state  error  types  concurrency  memory-management  performance  abstraction  plt  compilers  expert-experience  multi  checking  devtools  flux-stasis  safety  system-design  techtariat  time  measure  dotnet  comparison  examples  build-packaging  thinking  worse-is-better/the-right-thing  cost-benefit  tradeoffs  essay  commentary  oop  correctness 
5 days ago by nhaliday
An Efficiency Comparison of Document Preparation Systems Used in Academic Research and Development
The choice of an efficient document preparation system is an important decision for any academic researcher. To assist the research community, we report a software usability study in which 40 researchers across different disciplines prepared scholarly texts with either Microsoft Word or LaTeX. The probe texts included simple continuous text, text with tables and subheadings, and complex text with several mathematical equations. We show that LaTeX users were slower than Word users, wrote less text in the same amount of time, and produced more typesetting, orthographical, grammatical, and formatting errors. On most measures, expert LaTeX users performed even worse than novice Word users. LaTeX users, however, more often report enjoying using their respective software. We conclude that even experienced LaTeX users may suffer a loss in productivity when LaTeX is used, relative to other document preparation systems. Individuals, institutions, and journals should carefully consider the ramifications of this finding when choosing document preparation strategies, or requiring them of authors.

...

However, our study suggests that LaTeX should be used as a document preparation system only in cases in which a document is heavily loaded with mathematical equations. For all other types of documents, our results suggest that LaTeX reduces the user’s productivity and results in more orthographical, grammatical, and formatting errors, more typos, and less written text than Microsoft Word over the same duration of time. LaTeX users may argue that the overall quality of the text that is created with LaTeX is better than the text that is created with Microsoft Word. Although this argument may be true, the differences between text produced in more recent editions of Microsoft Word and text produced in LaTeX may be less obvious than they were in the past. Moreover, we believe that the appearance of text matters less than the scientific content and impact to the field. In particular, LaTeX is also used frequently for text that does not contain a significant amount of mathematical symbols and formulas. We believe that the use of LaTeX under these circumstances is highly problematic and that researchers should reflect on the criteria that drive their preferences to use LaTeX over Microsoft Word for text that does not require significant mathematical representations.

...

A second decision criterion that factors into the choice to use a particular software system is reflection about what drives certain preferences. A striking result of our study is that LaTeX users are highly satisfied with their system despite reduced usability and productivity. From a psychological perspective, this finding may be related to motivational factors, i.e., the driving forces that compel or reinforce individuals to act in a certain way to achieve a desired goal. A vital motivational factor is the tendency to reduce cognitive dissonance. According to the theory of cognitive dissonance, each individual has a motivational drive to seek consonance between their beliefs and their actual actions. If a belief set does not concur with the individual’s actual behavior, then it is usually easier to change the belief rather than the behavior [6]. The results from many psychological studies in which people have been asked to choose between one of two items (e.g., products, objects, gifts, etc.) and then asked to rate the desirability, value, attractiveness, or usefulness of their choice, report that participants often reduce unpleasant feelings of cognitive dissonance by rationalizing the chosen alternative as more desirable than the unchosen alternative [6, 7]. This bias is usually unconscious and becomes stronger as the effort to reject the chosen alternative increases, which is similar in nature to the case of learning and using LaTeX.

...

Given these numbers it remains an open question to determine the amount of taxpayer money that is spent worldwide for researchers to use LaTeX over a more efficient document preparation system, which would free up their time to advance their respective field. Some publishers may save a significant amount of money by requesting or allowing LaTeX submissions because a well-formed LaTeX document complying with a well-designed class file (template) is much easier to bring into their publication workflow. However, this is at the expense of the researchers’ labor time and effort. We therefore suggest that leading scientific journals should consider accepting submissions in LaTeX only if this is justified by the level of mathematics presented in the paper. In all other cases, we think that scholarly journals should request authors to submit their documents in Word or PDF format. We believe that this would be a good policy for two reasons. First, we think that the appearance of the text is secondary to the scientific merit of an article and its impact to the field. And, second, preventing researchers from producing documents in LaTeX would save time and money to maximize the benefit of research and development for both the research team and the public.

[ed.: I sense some salt.

And basically no description of how "# errors" was calculated.]

https://news.ycombinator.com/item?id=8797002
I question the validity of their methodology.
At no point in the paper is exactly what is meant by a "formatting error" or a "typesetting error" defined. From what I gather, the participants in the study were required to reproduce the formatting and layout of the sample text. In theory, a LaTeX file should strictly be a semantic representation of the content of the document; while TeX may have been a raw typesetting language, this is most definitely not the intended use case of LaTeX and is overall a very poor test of its relative advantages and capabilities.
The separation of the semantic definition of the content from the rendering of the document is, in my opinion, the most important feature of LaTeX. Like CSS, this allows the actual formatting to be abstracted away, allowing plain (marked-up) content to be written without worrying about typesetting.
Word has some similar capabilities with styles, and can be used in a similar manner, though few Word users actually use the software properly. This may sound like a relatively insignificant point, but in practice, almost every Word document I have seen has some form of inconsistent formatting. If Word disallowed local formatting changes (including things such as relative spacing of nested bullet points), forcing all formatting changes to be done in document-global styles, it would be a far better typesetting system. Also, the users would be very unhappy.
Yes, LaTeX can undeniably be a pain in the arse, especially when it comes to trying to get figures in the right place; however, the advantages of combining a simple, semantic plain-text representation with a flexible and professional typesetting and rendering engine are undeniable and completely unaddressed by this study.
--
It seems that the test was heavily biased in favor of WYSIWYG.
Of course that approach makes it very simple to reproduce something, as has been tested here. Even simpler would be to scan the document and run OCR. The massive problem with both approaches (WYSIWYG and scanning) is that you can't generalize any of it. You're doomed repeating it forever.
(I'll also note the other significant issue with this study: when the ratings provided by participants came out opposite of their test results, they attributed it to irrational bias.)

https://www.nature.com/articles/d41586-019-01796-1
Over the past few years however, the line between the tools has blurred. In 2017, Microsoft made it possible to use LaTeX’s equation-writing syntax directly in Word, and last year it scrapped Word’s own equation editor. Other text editors also support elements of LaTeX, allowing newcomers to use as much or as little of the language as they like.

https://news.ycombinator.com/item?id=20191348
study  hmm  academia  writing  publishing  yak-shaving  technical-writing  software  tools  comparison  latex  scholar  regularizer  idk  microsoft  evidence-based  science  desktop  time  efficiency  multi  hn  commentary  critique  news  org:sci  flux-stasis  duplication  metrics  biases 
8 days ago by nhaliday
Hardware is unforgiving
Today, anyone with a CS 101 background can take Geoffrey Hinton's course on neural networks and deep learning, and start applying state of the art machine learning techniques in production within a couple months. In software land, you can fix minor bugs in real time. If it takes a whole day to run your regression test suite, you consider yourself lucky because it means you're in one of the few environments that takes testing seriously. If the architecture is fundamentally flawed, you pull out your copy of Feathers' “Working Effectively with Legacy Code” and you apply minor fixes until you're done.

This isn't to say that software isn't hard, it's just a different kind of hard: the sort of hard that can be attacked with genius and perseverance, even without experience. But, if you want to build a ship, and you "only" have a decade of experience with carpentry, milling, metalworking, etc., well, good luck. You're going to need it. With a large ship, “minor” fixes can take days or weeks, and a fundamental flaw means that your ship sinks and you've lost half a year of work and tens of millions of dollars. By the time you get to something with the complexity of a modern high-performance microprocessor, a minor bug discovered in production costs three months and five million dollars. A fundamental flaw in the architecture will cost you five years and hundreds of millions of dollars.[2]

Physical mistakes are costly. There's no undo and editing isn't simply a matter of pressing some keys; changes consume real, physical resources. You need enough wisdom and experience to avoid common mistakes entirely – especially the ones that can't be fixed.
techtariat  comparison  software  hardware  programming  engineering  nitty-gritty  realness  roots  explanans  startups  tech  sv  the-world-is-just-atoms  examples  stories  economics  heavy-industry  hard-tech  cs  IEEE  oceans  trade  korea  asia  recruiting  britain  anglo  expert-experience  growth-econ  world  developing-world  books  recommendations  intricacy  dan-luu  age-generation  system-design  correctness 
10 days ago by nhaliday
The End of the Editor Wars » Linux Magazine
Moreover, even if you assume a broad margin of error, the polls aren't even close. With all the various text editors available today, Vi and Vim continue to be the choice of over a third of users, while Emacs is well back in the pack, no longer a competitor for the most popular text editor.

https://www.quora.com/Are-there-more-Emacs-or-Vim-users
I believe Vim is actually more popular, but it's hard to find any real data on it. The best source I've seen is the annual StackOverflow developer survey where 15.2% of developers used Vim compared to a mere 3.2% for Emacs.

Oddly enough, the report noted that "Data scientists and machine learning developers are about 3 times more likely to use Emacs than any other type of developer," which is not necessarily what I would have expected.

[ed. NB: Vim still dominates overall.]

https://pinboard.in/u:nhaliday/b:6adc1b1ef4dc

Time To End The vi/Emacs Debate: https://cacm.acm.org/blogs/blog-cacm/226034-time-to-end-the-vi-emacs-debate/fulltext

Vim, Emacs and their forever war. Does it even matter any more?: https://blog.sourcerer.io/vim-emacs-and-their-forever-war-does-it-even-matter-any-more-697b1322d510
Like an episode of “Silicon Valley”, a discussion of Emacs vs. Vim used to have a polarizing effect that would guarantee a stimulating conversation, regardless of an engineer’s actual alignment. But nowadays, diehard Emacs and Vim users are getting much harder to find. Maybe I’m in the wrong orbit, but looking around today, I see that engineers are equally or even more likely to choose any one of a number of great (for any given definition of ‘great’) modern editors or IDEs such as Sublime Text, Visual Studio Code, Atom, IntelliJ (… or one of its siblings), Brackets, Visual Studio or Xcode, to name a few. It’s not surprising really — many top engineers weren’t even born when these editors were at version 1.0, and GUIs (for better or worse) hadn’t been invented.

...

… both forums have high traffic and up-to-the-minute comment and discussion threads. Some of the available statistics paint a reasonably healthy picture — Stackoverflow’s 2016 developer survey ranks Vim 4th out of 24 with 26.1% of respondents in the development environments category claiming to use it. Emacs came 15th with 5.2%. In combination, over 30% is, actually, quite impressive considering they’ve been around for several decades.

What’s odd, however, is that if you ask someone — say a random developer — to express a preference, the likelihood is that they will favor one or the other even if they have used neither in anger. Maybe the meme has spread so widely that all responses are now predominantly ritualistic, and represent something more fundamental than people’s mere preference for an editor? There’s a rather obvious political hypothesis waiting to be made — that Emacs is the leftist, socialist, centralized state, while Vim represents the right and the free market, specialization and capitalism red in tooth and claw.

How is Emacs/Vim used in companies like Google, Facebook, or Quora? Are there any libraries or tools they share in public?: https://www.quora.com/How-is-Emacs-Vim-used-in-companies-like-Google-Facebook-or-Quora-Are-there-any-libraries-or-tools-they-share-in-public
In Google there's a fair amount of vim and emacs. I would say at least every other engineer uses one or another.

Among Software Engineers, emacs seems to be more popular, about 2:1. Among Site Reliability Engineers, vim is more popular, about 9:1.
--
People use both at Facebook, with (in my opinion) slightly better tooling for Emacs than Vim. We share a master.emacs and master.vimrc file, which contains the bare essentials (like syntactic highlighting for the Hack language). We also share a Ctags file that's updated nightly with a cron script.

Beyond the essentials, there's a group for Emacs users at Facebook that provides tips, tricks, and major-modes created by people at Facebook. That's where Adam Hupp first developed his excellent mural-mode (ahupp/mural), which does for Ctags what Ido did for file finding and buffer switching.
--
For emacs, it was very informal at Google. There wasn't a huge community of Emacs users at Google, so there wasn't much more than a wiki and a couple language styles matching Google's style guides.

https://trends.google.com/trends/explore?date=all&geo=US&q=%2Fm%2F07zh7,%2Fm%2F01yp0m

https://www.quora.com/Why-is-interest-in-Emacs-dropping
And it is still that. It’s just that emacs is no longer unique, and neither is Lisp.

Dynamically typed scripting languages with garbage collection are a dime a dozen now. Anybody in their right mind developing an extensible text editor today would just use python, ruby, lua, or JavaScript as the extension language and get all the power of Lisp combined with vibrant user communities and millions of lines of ready-made libraries that Stallman and Steele could only dream of in the 70s.

In fact, in many ways emacs and elisp have fallen behind: 40 years after Lambda, the Ultimate Imperative, elisp is still dynamically scoped, and it still doesn’t support multithreading — when I try to use dired to list the files on a slow NFS mount, the entire editor hangs just as thoroughly as it might have in the 1980s. And when I say “doesn’t support multithreading,” I don’t mean there is some other clever trick for continuing to do work while waiting on a system call, like asynchronous callbacks or something. There’s start-process which forks a whole new process, and that’s about it. It’s a concurrency model straight out of 1980s UNIX land.

But being essentially just a decent text editor has robbed emacs of much of its competitive advantage. In a world where every developer tool is scriptable with languages and libraries an order of magnitude more powerful than cranky old elisp, the reason to use emacs is not that it lets a programmer hit a button and evaluate the current expression interactively (which must have been absolutely amazing at one point in the past).

https://www.reddit.com/r/emacs/comments/bh5kk7/why_do_many_new_users_still_prefer_vim_over_emacs/

more general comparison, not just popularity:
Differences between Emacs and Vim: https://stackoverflow.com/questions/1430164/differences-between-Emacs-and-vim

https://www.reddit.com/r/emacs/comments/9hen7z/what_are_the_benefits_of_emacs_over_vim/

Technical Interview Performance by Editor/OS/Language: https://triplebyte.com/blog/technical-interview-performance-by-editor-os-language
[ed.: I'm guessing this is confounded to all hell.]

The #1 most common editor we see used in interviews is Sublime Text, with Vim close behind.

Emacs represents a fairly small market share today at just about a quarter the userbase of Vim in our interviews. This nicely matches the 4:1 ratio of Google Search Trends for the two editors.

...

Vim takes the prize here, but PyCharm and Emacs are close behind. We’ve found that users of these editors tend to pass our interview at an above-average rate.

On the other end of the spectrum is Eclipse: it appears that someone using either Vim or Emacs is more than twice as likely to pass our technical interview as an Eclipse user.

...

In this case, we find that the average Ruby, Swift, and C# users tend to be stronger, with Python and Javascript in the middle of the pack.

...

Here’s what happens after we select engineers to work with and send them to onsites:

[Python does best.]

There are no wild outliers here, but let’s look at the C++ segment. While C++ programmers have the most challenging time passing Triplebyte’s technical interview on average, the ones we choose to work with tend to have a relatively easier time getting offers at each onsite.

The Rise of Microsoft Visual Studio Code: https://triplebyte.com/blog/editor-report-the-rise-of-visual-studio-code
This chart shows the rates at which each editor's users pass our interview compared to the mean pass rate for all candidates. First, notice the preeminence of Emacs and Vim! Engineers who use these editors pass our interview at significantly higher rates than other engineers. And the effect size is not small. Emacs users pass our interview at a rate 50% higher than other engineers. What could explain this phenomenon? One possible explanation is that Vim and Emacs are old school. You might expect their users to have more experience and, thus, to do better. However, notice that VS Code is the third best editor—and it is brand new. This undercuts that narrative a bit (and makes VS Code look even more dominant).

Do Emacs and Vim users have some other characteristic that makes them more likely to succeed during interviews? Perhaps they tend to be more willing to invest time and effort customizing a complex editor in the short-term in order to get returns from a more powerful tool in the long-term?

...

Java and C# do have relatively low pass rates, although notice that Eclipse has a lower pass rate than Java (-21.4% vs. -16.7%), so we cannot fully explain its poor performance as Java dragging it down.

Also, what's going on with Go? Go programmers are great! To dig deeper into these questions, I looked at editor usage by language:

...

Another finding from this chart is the difference between VS Code and Sublime. VS Code is primarily used for JavaScript development (61%) but less frequently for Python development (22%). With Sublime, the numbers are basically reversed (51% Python and 30% JavaScript). It's interesting that VS Code users pass interviews at a higher rate than Sublime engineers, even though they predominantly use a language with a lower success rate (JavaScript).

To wrap things up, I sliced the data by experience level and location. Here you can see language usage by experience level:

...

Then there's editor usage by experience level:

...

Take all of this with a grain of salt. I want to end by saying that we don't think any of this is causative. That is, I don't recommend that you start using Emacs and Go (or stop using… [more]
news  linux  oss  tech  editors  devtools  tools  comparison  ranking  flux-stasis  trends  ubiquity  unix  increase-decrease  multi  q-n-a  qra  data  poll  stackex  sv  facebook  google  integration-extension  org:med  politics  stereotypes  coalitions  decentralized  left-wing  right-wing  chart  scale  time-series  distribution  top-n  list  discussion  ide  parsimony  intricacy  cost-benefit  tradeoffs  confounding  analysis  crosstab  pls  python  c(pp)  jvm  microsoft  golang  hmm  correlation  debate  critique 
11 days ago by nhaliday
Lindy effect - Wikipedia
The Lindy effect is a theory that the future life expectancy of some non-perishable things like a technology or an idea is proportional to their current age, so that every additional period of survival implies a longer remaining life expectancy.[1] Where the Lindy effect applies, mortality rate decreases with time. In contrast, living creatures and mechanical things follow a bathtub curve where, after "childhood", the mortality rate increases with time. Because life expectancy is probabilistically derived, a thing may become extinct before its "expected" survival. In other words, one needs to gauge both the age and "health" of the thing to determine continued survival.
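
[ed.: one standard way to formalize this (my gloss, not from the article): give survival time a Pareto tail, and mean residual life comes out linear in age.]

\[ P(T > t) = (t / t_0)^{-\alpha}, \qquad t \ge t_0,\ \alpha > 1 \]
\[ \mathbb{E}[T - t \mid T > t] = \int_t^\infty \frac{P(T > s)}{P(T > t)}\,ds = \int_t^\infty (s/t)^{-\alpha}\,ds = \frac{t}{\alpha - 1} \]

i.e. expected remaining lifetime is proportional to current age — exactly the Lindy property — while a bathtub curve has mean residual life decreasing in t past "childhood".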
wiki  reference  concept  metabuch  ideas  street-fighting  planning  comparison  time  distribution  flux-stasis  history  measure  correlation  arrows  branches  pro-rata  manifolds  aging  stylized-facts  age-generation  robust  technology  thinking  cost-benefit  conceptual-vocab  methodology  threat-modeling  efficiency  neurons  tools  track-record 
12 days ago by nhaliday
Interview with Donald Knuth | InformIT
Andrew Binstock and Donald Knuth converse on the success of open source, the problem with multicore architecture, the disappointing lack of interest in literate programming, the menace of reusable code, and that urban legend about winning a programming contest with a single compilation.
nibble  interview  giants  expert-experience  programming  cs  software  contrarianism  carmack  oss  prediction  trends  linux  concurrency  desktop  comparison  checking  debugging  stories  engineering  hmm  idk  algorithms  books  debate  flux-stasis  duplication  parsimony  best-practices  writing  documentation  latex  intricacy  structure  hardware  caching  workflow  editors  composition-decomposition  coupling-cohesion  exposition  technical-writing  thinking 
13 days ago by nhaliday
Football Still Americans' Favorite Sport to Watch
37% say football is their favorite sport to watch, by far the most for any sport
Baseball is at its lowest point ever, with only 9% saying it is their favorite
Football has slipped in popularity from its peak of 43% in 2006 and 2007

WASHINGTON, D.C. -- American football, under attack from critics in recent years, has lost some of its popularity but is still the champion of U.S. spectator sports -- picked by 37% of U.S. adults as their favorite sport to watch. The next-most-popular sports are basketball, favored by 11%, and baseball, favored by 9%.

http://www.businessinsider.com/popularity-nfl-mlb-nba-2015-2
news  org:data  data  time-series  history  mostly-modern  poll  measure  usa  scale  sports  vulgar  trivia  org:biz  multi  comparison  ranking  human-bean  ubiquity 
15 days ago by nhaliday
classification - ImageNet: what is top-1 and top-5 error rate? - Cross Validated
Now, in the case of top-1 score, you check if the top class (the one having the highest probability) is the same as the target label.

In the case of top-5 score, you check if the target label is one of your top 5 predictions (the 5 ones with the highest probabilities).
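
[ed.: a small sketch (mine, not from the answer) of the same check in code:]

// top-k accuracy check for a single prediction vector
#include <algorithm>
#include <numeric>
#include <vector>

bool top_k_correct(const std::vector<float>& probs, int target, int k) {
    std::vector<int> idx(probs.size());
    std::iota(idx.begin(), idx.end(), 0);
    // move the k highest-probability class indices to the front
    std::partial_sort(idx.begin(), idx.begin() + k, idx.end(),
                      [&](int a, int b) { return probs[a] > probs[b]; });
    return std::find(idx.begin(), idx.begin() + k, target) != idx.begin() + k;
}
// top-1: top_k_correct(probs, label, 1); top-5: top_k_correct(probs, label, 5)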
nibble  q-n-a  overflow  machine-learning  deep-learning  metrics  comparison  ranking  top-n  classification  computer-vision  benchmarks  dataset  accuracy  error  jargon 
22 days ago by nhaliday
Analysis of Current and Future Computer Science Needs via Advertised Faculty Searches for 2019 - CRN
Differences are also seen when analyzing results based on the type of institution. Positions related to Security have the highest percentages for all but top-100 institutions. The area of Artificial Intelligence/Data Mining/Machine Learning is of most interest for top-100 PhD institutions. Roughly 35% of positions for PhD institutions are in data-oriented areas. The results show a strong interest in data-oriented areas by public PhD and private PhD, MS, and BS institutions while public MS and BS institutions are most interested in Security.
org:edu  data  analysis  visualization  trends  recruiting  jobs  career  planning  academia  higher-ed  cs  tcs  machine-learning  systems  pro-rata  measure  long-term  🎓  uncertainty  progression  grad-school  phd  distribution  ranking  top-n  security  status  s-factor  comparison  homo-hetero  correlation  org:ngo  white-paper 
22 days ago by nhaliday
algorithm, algorithmic, algorithmicx, algorithm2e, algpseudocode = confused - TeX - LaTeX Stack Exchange
algorithm2e is the only one currently maintained, but answerer prefers style of algorithmicx, and after perusing the docs, so do I
q-n-a  stackex  libraries  list  recommendations  comparison  publishing  cs  programming  algorithms  tools 
23 days ago by nhaliday
bibliographies - bibtex vs. biber and biblatex vs. natbib - TeX - LaTeX Stack Exchange
- bibtex and biber are external programs that process bibliography information and act (roughly) as the interface between your .bib file and your LaTeX document.
- natbib and biblatex are LaTeX packages that format citations and bibliographies; natbib works only with bibtex, while biblatex (at the moment) works with both bibtex and biber.

natbib
The natbib package has been around for quite a long time, and although still maintained, it is fair to say that it isn't being further developed. It is still widely used, and very reliable.

Advantages
...
- The resulting bibliography code can be pasted directly into a document (often required for journal submissions). See Biblatex: submitting to a journal.

...

biblatex
The biblatex package is being actively developed in conjunction with the biber backend.

Advantages
*lots*

Disadvantages
- Journals and publishers may not accept documents that use biblatex if they have a house style with its own natbib compatible .bst file.
q-n-a  stackex  latex  comparison  cost-benefit  writing  scholar  technical-writing  yak-shaving  publishing 
28 days ago by nhaliday
Should I go for TensorFlow or PyTorch?
Honestly, most experts that I know love Pytorch and detest TensorFlow. Karpathy and Justin from Stanford for example. You can see Karpathy's thoughts and I've asked Justin personally and the answer was sharp: PYTORCH!!! TF has lots of PR but its API and graph model are horrible and will waste lots of your research time.

--

...

Updated Mar 12
Update after 2019 TF summit:

TL/DR: previously I was in the pytorch camp but with TF 2.0 it’s clear that Google is really going to try to have parity or try to be better than Pytorch in all aspects where people voiced concerns (ease of use/debugging/dynamic graphs). They seem to be allocating more resources on development than Facebook so the longer term currently looks promising for Google. Prior to TF 2.0 I thought that Pytorch team had more momentum. One area where FB/Pytorch is still stronger is that Google is a bit more closed and doesn’t seem to release reproducible cutting edge models such as AlphaGo whereas FAIR released OpenGo for instance. Generally you will end up running into models that are only implemented in one framework or the other, so chances are you might end up learning both.
q-n-a  qra  comparison  software  recommendations  cost-benefit  tradeoffs  python  libraries  machine-learning  deep-learning  data-science  sci-comp  tools  google  facebook  tech  competition  best-practices  trends  debugging  expert-experience 
4 weeks ago by nhaliday
One week of bugs
If I had to guess, I'd say I probably work around hundreds of bugs in an average week, and thousands in a bad week. It's not unusual for me to run into a hundred new bugs in a single week. But I often get skepticism when I mention that I run into multiple new (to me) bugs per day, and that this is inevitable if we don't change how we write tests. Well, here's a log of one week of bugs, limited to bugs that were new to me that week. After a brief description of the bugs, I'll talk about what we can do to improve the situation. The obvious answer is to spend more effort on testing, but everyone already knows we should do that and no one does it. That doesn't mean it's hopeless, though.

...

Here's where I'm supposed to write an appeal to take testing more seriously and put real effort into it. But we all know that's not going to work. It would take 90k LOC of tests to get Julia to be as well tested as a poorly tested prototype (falsely assuming linear complexity in size). That's two person-years of work, not even including time to debug and fix bugs (which probably brings it closer to four or five years). Who's going to do that? No one. Writing tests is like writing documentation. Everyone already knows you should do it. Telling people they should do it adds zero information.[1]

Given that people aren't going to put any effort into testing, what's the best way to do it?

Property-based testing. Generative testing. Random testing. Concolic Testing (which was done long before the term was coined). Static analysis. Fuzzing. Statistical bug finding. There are lots of options. Some of them are actually the same thing because the terminology we use is inconsistent and buggy. I'm going to arbitrarily pick one to talk about, but they're all worth looking into.
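
[ed.: the core idea in miniature (my sketch; real libraries like RapidCheck or Hypothesis add generation strategies and shrinking of counterexamples):]

// hand-rolled property-based test: check an invariant on many random inputs
#include <algorithm>
#include <cstdio>
#include <random>
#include <vector>

// property: reversing a vector twice yields the original vector
bool prop_double_reverse(std::vector<int> v) {
    const auto original = v;
    std::reverse(v.begin(), v.end());
    std::reverse(v.begin(), v.end());
    return v == original;
}

int main() {
    std::mt19937 rng(12345);
    std::uniform_int_distribution<int> len(0, 100), val(-1000, 1000);
    for (int trial = 0; trial < 1000; ++trial) {
        std::vector<int> v(len(rng));
        for (auto& x : v) x = val(rng);
        if (!prop_double_reverse(v)) {
            std::printf("counterexample at trial %d\n", trial);
            return 1;
        }
    }
    std::printf("1000 random cases passed\n");
}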

...

There are a lot of great resources out there, but if you're just getting started, I found this description of types of fuzzers to be one of the most helpful (and simplest) things I've read.

John Regehr has a Udacity course on software testing. I haven't worked through it yet (Pablo Torres just pointed to it), but given the quality of Dr. Regehr's writing, I expect the course to be good.

For more on my perspective on testing, there's this.

https://hypothesis.works/articles/the-purpose-of-hypothesis/
From the perspective of a user, the purpose of Hypothesis is to make it easier for you to write better tests.

From my perspective as the primary author, that is of course also a purpose of Hypothesis. I write a lot of code, it needs testing, and the idea of trying to do that without Hypothesis has become nearly unthinkable.

But, on a large scale, the true purpose of Hypothesis is to drag the world kicking and screaming into a new and terrifying age of high quality software.

Software is everywhere. We have built a civilization on it, and it’s only getting more prevalent as more services move online and embedded and “internet of things” devices become cheaper and more common.

Software is also terrible. It’s buggy, it’s insecure, and it’s rarely well thought out.

This combination is clearly a recipe for disaster.

The state of software testing is even worse. It’s uncontroversial at this point that you should be testing your code, but it’s a rare codebase whose authors could honestly claim that they feel its testing is sufficient.

Much of the problem here is that it’s too hard to write good tests. Tests take up a vast quantity of development time, but they mostly just laboriously encode exactly the same assumptions and fallacies that the authors had when they wrote the code, so they miss exactly the same bugs that the authors missed when they wrote the code.

Preventing the Collapse of Civilization [video]: https://news.ycombinator.com/item?id=19945452
- Jonathan Blow

NB: DevGAMM is a game industry conference

- loss of technological knowledge (Antikythera mechanism, aqueducts, etc.)
- hardware driving most gains, not software
- software's actually less robust, often poorly designed and overengineered these days
- *list of bugs he's encountered recently*:
https://youtu.be/pW-SOdj4Kkk?t=1387
- knowledge of trivia becomes more than general, deep knowledge
- does at least acknowledge value of DRY, reusing code, abstraction saving dev time
techtariat  dan-luu  tech  software  error  list  debugging  linux  github  robust  checking  oss  troll  lol  aphorism  webapp  email  google  facebook  games  julia  pls  compilers  communication  mooc  browser  rust  programming  engineering  random  jargon  formal-methods  expert-experience  prof  c(pp)  course  correctness  hn  commentary  video  presentation  carmack  pragmatic  contrarianism  pessimism  sv  unix  rhetoric  critique  worrydream  hardware  performance  trends  multiplicative  roots  impact  comparison  history  iron-age  the-classics  mediterranean  conquest-empire  gibbon  technology  the-world-is-just-atoms  flux-stasis  increase-decrease  graphics  hmm  idk  systems  os  abstraction  intricacy  worse-is-better/the-right-thing  build-packaging  microsoft  osx  apple  reflection  assembly  things  knowledge  detail-architecture  thick-thin  trivia  info-dynamics  caching  frameworks  generalization  systematic-ad-hoc  universalism-particularism  analytical-holistic  structure  tainter  libraries  tradeoffs  prepping  threat-modeling  network-structure  writing  risk  local-glob 
4 weeks ago by nhaliday
oop - Functional programming vs Object Oriented programming - Stack Overflow
When you anticipate a different kind of software evolution:
- Object-oriented languages are good when you have a fixed set of operations on things, and as your code evolves, you primarily add new things. This can be accomplished by adding new classes which implement existing methods, and the existing classes are left alone.
- Functional languages are good when you have a fixed set of things, and as your code evolves, you primarily add new operations on existing things. This can be accomplished by adding new functions which compute with existing data types, and the existing functions are left alone.

When evolution goes the wrong way, you have problems:
- Adding a new operation to an object-oriented program may require editing many class definitions to add a new method.
- Adding a new kind of thing to a functional program may require editing many function definitions to add a new case.

This problem has been well known for many years; in 1998, Phil Wadler dubbed it the "expression problem". Although some researchers think that the expression problem can be addressed with such language features as mixins, a widely accepted solution has yet to hit the mainstream.

What are the typical problem definitions where functional programming is a better choice?

Functional languages excel at manipulating symbolic data in tree form. A favorite example is compilers, where source and intermediate languages change seldom (mostly the same things), but compiler writers are always adding new translations and code improvements or optimizations (new operations on things). Compilation and translation more generally are "killer apps" for functional languages.
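
[ed.: the tradeoff is easy to see in C++17, which supports both styles — a sketch:]

#include <type_traits>
#include <variant>

// OO style: adding a new shape is one new class, but adding a new
// operation means editing every existing class.
struct ShapeOO {
    virtual ~ShapeOO() = default;
    virtual double area() const = 0; // new op => touch all subclasses
};
struct CircleOO : ShapeOO {
    double r;
    explicit CircleOO(double r) : r(r) {}
    double area() const override { return 3.14159265 * r * r; }
};

// functional style: adding a new operation is one new function, but
// adding a new shape means editing every existing function.
struct Circle { double r; };
struct Square { double s; };
using Shape = std::variant<Circle, Square>;

double area(const Shape& sh) {       // new shape => touch every function
    return std::visit([](const auto& s) -> double {
        using T = std::decay_t<decltype(s)>;
        if constexpr (std::is_same_v<T, Circle>) return 3.14159265 * s.r * s.r;
        else return s.s * s.s;
    }, sh);
}
double perimeter(const Shape& sh) {  // ...added without touching Circle/Square
    return std::visit([](const auto& s) -> double {
        using T = std::decay_t<decltype(s)>;
        if constexpr (std::is_same_v<T, Circle>) return 2 * 3.14159265 * s.r;
        else return 4 * s.s;
    }, sh);
}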
q-n-a  stackex  programming  engineering  nitty-gritty  comparison  best-practices  cost-benefit  functional  data-structures  arrows  flux-stasis  atoms  compilers  examples  pls  plt  oop  types 
5 weeks ago by nhaliday
Information Processing: Moore's Law and AI
Hint to technocratic planners: invest more in physicists, chemists, and materials scientists. The recent explosion in value from technology has been driven by physical science -- software gets way too much credit. From the former we got a factor of a million or more in compute power, data storage, and bandwidth. From the latter, we gained (perhaps) an order of magnitude or two in effectiveness: how much better are current OSes and programming languages than Unix and C, both of which are ~50 years old now?

...

Of relevance to this discussion: a big chunk of AlphaGo's performance improvement over other Go programs is due to raw compute power (link via Jess Riedel). The vertical axis is ELO score. You can see that without multi-GPU compute, AlphaGo has relatively pedestrian strength.
hsu  scitariat  comparison  software  hardware  performance  sv  tech  trends  ai  machine-learning  deep-learning  deepgoog  google  roots  impact  hard-tech  multiplicative  the-world-is-just-atoms  technology  trivia  cocktail  big-picture  hi-order-bits 
5 weeks ago by nhaliday
algorithm - Skip List vs. Binary Search Tree - Stack Overflow
Skip lists are more amenable to concurrent access/modification. Herb Sutter wrote an article about data structures in concurrent environments. It has more in-depth information.

The most frequently used implementation of a binary search tree is a red-black tree. The concurrency problems come in when the tree is modified, since it often needs to rebalance. The rebalance operation can affect large portions of the tree, which would require a mutex lock on many of the tree nodes. Inserting a node into a skip list is far more localized; only nodes directly linked to the affected node need to be locked.
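
[ed.: a single-threaded sketch of skip-list insertion (mine, not from the answer), to show how localized the relinking is — a concurrent version only needs to lock the predecessor node at each level:]

#include <climits>
#include <cstdlib>
#include <vector>

constexpr int kMaxLevel = 16;

struct Node {
    int key;
    std::vector<Node*> next; // next[i] = successor at level i
    Node(int k, int levels) : key(k), next(levels, nullptr) {}
};

struct SkipList {
    Node head{INT_MIN, kMaxLevel};

    static int random_level() {
        int lvl = 1;
        while (lvl < kMaxLevel && std::rand() % 2) ++lvl; // geometric heights
        return lvl;
    }

    void insert(int key) {
        Node* update[kMaxLevel]; // predecessor at each level
        Node* x = &head;
        for (int i = kMaxLevel - 1; i >= 0; --i) {
            while (x->next[i] && x->next[i]->key < key) x = x->next[i];
            update[i] = x;
        }
        int lvl = random_level();
        Node* n = new Node(key, lvl);
        // only these lvl predecessors are modified — contrast with a
        // red-black rebalance, which can touch a large region of the tree
        for (int i = 0; i < lvl; ++i) {
            n->next[i] = update[i]->next[i];
            update[i]->next[i] = n;
        }
    }
};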
q-n-a  stackex  nibble  programming  tcs  data-structures  performance  concurrency  comparison  cost-benefit  applicability-prereqs  random  trees 
5 weeks ago by nhaliday
language design - Why does C++ need a separate header file? - Stack Overflow
C++ does it that way because C did it that way, so the real question is why did C do it that way? Wikipedia speaks a little to this.

Newer compiled languages (such as Java, C#) do not use forward declarations; identifiers are recognized automatically from source files and read directly from dynamic library symbols. This means header files are not needed.
q-n-a  stackex  programming  pls  c(pp)  compilers  trivia  roots  yak-shaving  flux-stasis  comparison  jvm 
6 weeks ago by nhaliday
c++ - Why are forward declarations necessary? - Stack Overflow
C++, while created almost 17 years later, was defined as a superset of C, and therefore had to use the same mechanism.

By the time Java rolled around in 1995, average computers had enough memory that holding a symbolic table, even for a complex project, was no longer a substantial burden. And Java wasn't designed to be backwards-compatible with C, so it had no need to adopt a legacy mechanism. C# was similarly unencumbered.

As a result, their designers chose to shift the burden of compartmentalizing symbolic declaration back off the programmer and put it on the computer again, since its cost in proportion to the total effort of compilation was minimal.
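
[ed.: the single-pass constraint in miniature (my sketch) — even() refers to odd() before odd()'s body has been seen, which is legal only because of the forward declaration:]

bool odd(unsigned n); // forward declaration; remove it and a top-down
                      // compiler rejects the call inside even()
bool even(unsigned n) { return n == 0 ? true  : odd(n - 1);  }
bool odd(unsigned n)  { return n == 0 ? false : even(n - 1); }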
q-n-a  stackex  programming  pls  c(pp)  trivia  yak-shaving  roots  compilers  flux-stasis  comparison  jvm 
6 weeks ago by nhaliday
Braves | West Hunter
If Amerindians had a lot fewer serious infectious diseases than Old Worlders, something else had to limit population – and it wasn’t the Pill.

Surely there was more death by violence. In principle they could have sat down and quietly starved to death, but I doubt it. Better to burn out than fade away.
west-hunter  scitariat  reflection  ideas  usa  farmers-and-foragers  history  medieval  iron-age  europe  comparison  asia  civilization  peace-violence  martial  selection  ecology  disease  parasites-microbiome  pop-diff  incentives  malthus  equilibrium 
6 weeks ago by nhaliday
maintenance - Why do dynamic languages make it more difficult to maintain large codebases? - Software Engineering Stack Exchange
Now here is the key point I have been building up to: there is a strong correlation between a language being dynamically typed and a language also lacking all the other facilities that make lowering the cost of maintaining a large codebase easier, and that is the key reason why it is more difficult to maintain a large codebase in a dynamic language. And similarly there is a correlation between a language being statically typed and having facilities that make programming in the larger easier.
programming  worrydream  plt  hmm  comparison  pls  carmack  techtariat  types  engineering  productivity  pro-rata  input-output  formal-methods  correlation  best-practices  composition-decomposition  error  causation  confounding  devtools  jvm  scala  open-closed  cost-benefit 
7 weeks ago by nhaliday
quality - Is the average number of bugs per loc the same for different programming languages? - Software Engineering Stack Exchange
Contrary to intuition, the number of errors per 1000 lines of code does seem to be relatively constant, regardless of the specific language involved. Steve McConnell, author of Code Complete and Software Estimation: Demystifying the Black Art goes over this area in some detail.

I don't have my copies readily to hand - they're sitting on my bookshelf at work - but a quick Google found a relevant quote:

Industry Average: "about 15 - 50 errors per 1000 lines of delivered code."
(Steve) further says this is usually representative of code that has some level of structured programming behind it, but probably includes a mix of coding techniques.

Quoted from Code Complete, found here: http://mayerdan.com/ruby/2012/11/11/bugs-per-line-of-code-ratio/

If memory serves correctly, Steve goes into a thorough discussion of this, showing that the figures are constant across languages (C, C++, Java, Assembly and so on) and despite difficulties (such as defining what "line of code" means).

Most importantly he has lots of citations for his sources - he's not offering unsubstantiated opinions, but has the references to back them up.

[ed.: I think this is delivered code? So after testing, debugging, etc. I'm more interested in the metric for the moment after you've gotten something to compile.]
q-n-a  stackex  programming  engineering  nitty-gritty  error  flux-stasis  books  recommendations  software  checking  debugging  pro-rata  pls  comparison  parsimony  measure  data  objektbuch  speculation  accuracy  density  correctness 
9 weeks ago by nhaliday
The Architect as Totalitarian: Le Corbusier’s baleful influence | City Journal
Le Corbusier was to architecture what Pol Pot was to social reform. In one sense, he had less excuse for his activities than Pol Pot: for unlike the Cambodian, he possessed great talent, even genius. Unfortunately, he turned his gifts to destructive ends, and it is no coincidence that he willingly served both Stalin and Vichy.
news  org:mag  right-wing  albion  gnon  isteveish  architecture  essay  rhetoric  critique  contrarianism  communism  comparison  aphorism  modernity  authoritarianism  universalism-particularism  europe  gallic  history  mostly-modern  urban-rural  revolution  art  culture 
9 weeks ago by nhaliday
Catholics Similar to Mainstream on Abortion, Stem Cells
The data show that regular churchgoing non-Catholics also have very conservative positions on moral issues. In fact, on most of the issues tested, regular churchgoers who are not Catholic are more conservative (i.e., less likely to find a given practice morally acceptable) than Catholic churchgoers.
news  org:data  poll  data  values  religion  christianity  protestant-catholic  comparison  morality  gender  sex  sexuality  time  density  theos  pro-rata  frequency  demographics  abortion-contraception-embryo  sanctity-degradation 
march 2019 by nhaliday
Verbal Edge: Borges & Buckley | Eamonn Fitzgerald: Rainy Day
At one point, Borges said that he found English “a far finer language” than Spanish and Buckley asked “Why?”

Borges: There are many reasons. Firstly, English is both a Germanic and a Latin language, those two registers.

...

And then there is another reason. And the reason is that I think that of all languages, English is the most physical. You can, for example, say “He loomed over.” You can’t very well say that in Spanish.

Buckley: Asomo?
Borges: No; they’re not exactly the same. And then, in English, you can do almost anything with verbs and prepositions. For example, to “laugh off,” to “dream away.” Those things can’t be said in Spanish.

http://www.oenewsletter.org/OEN/print.php/essays/toswell43_1/Array
J.L.B.: "You will say that it's easier for a Dane to study English than for a Spanish-speaking person to learn English or an Englishman Spanish; but I don't think this is true, because English is a Latin language as well as a Germanic one. At least half the English vocabulary is Latin. Remember that in English there are two words for every idea: one Saxon and one Latin. You can say 'Holy Ghost' or 'Holy Spirit,' 'sacred' or 'holy.' There's always a slight difference, but one that's very important for poetry, the difference between 'dark' and 'obscure' for instance, or 'regal' and 'kingly,' or 'fraternal' and 'brotherly.' In the English language almost all words representing abstract ideas come from Latin, and those for concrete ideas from Saxon, but there aren't so many concrete ideas." (P. 71) [2]

In his own words, then, Borges was fascinated by Old English and Old Norse.
interview  history  mostly-modern  language  foreign-lang  anglo  anglosphere  culture  literature  writing  mediterranean  latin-america  germanic  roots  comparison  quotes  flexibility  org:junk  multi  medieval  nordic  lexical  parallax 
february 2019 by nhaliday
T. Greer on Twitter: "Genesis 1st half of Exodus Basic passages of the Deuteronomic Covenant Select scenes from Numbers-Judges Samuel I-II Job Ecclesiastes Proverbs Select Psalms Select passages of Isaiah, Jeremiah, and Ezekiel Jonah 4 Gospels+Acts Romans
https://archive.is/YtwVb
I would pair letters from Paul with Flannery O'Connor's "A Good Man is Hard to Find."

I designed a hero's journey course that included Gilgamesh, Odyssey, and Gawain and the Green Knight. Before reading Gawain you'd read the Sermon on the Mount + few parts of gospels.
The idea with that last one being that Gawain was an attempt to make a hero who (unlike Odysseus) accorded with Christian ethics. As one of its discussion points, the class can debate over how well it actually did that.
...
So I would preface Lord of the Flies with a stylized account of Hobbes and Rousseau, and we would read a great deal of Genesis alongside LOTF.

Same approach was taken to Greece and Rome. Classical myths would be paired with poems from the 1600s-1900s that alluded to them.
...
Genesis
1st half of Exodus
Basic passages of the Deuteronomic Covenant
Select scenes from Numbers-Judges
Samuel I-II
Job
Ecclesiastes
Proverbs
Select Psalms
Select passages of Isaiah, Jeremiah, and Ezekiel
Jonah
4 Gospels+Acts
Romans
1 Corinthians
Hebrews
Revelation
twitter  social  discussion  backup  literature  letters  reading  canon  the-classics  history  usa  europe  the-great-west-whale  religion  christianity  ideology  philosophy  ethics  medieval  china  asia  sinosphere  comparison  culture  civilization  roots  spreading 
february 2019 by nhaliday
A cross-language perspective on speech information rate
Figure 2.

English (IR_EN = 1.08) shows a higher Information Rate than Vietnamese (IR_VI = 1). On the contrary, Japanese exhibits the lowest IR_L value of the sample. Moreover, one can observe that several languages may reach very close IR_L with different encoding strategies: Spanish is characterized by a fast rate of low-density syllables while Mandarin exhibits a 34% slower syllabic rate with syllables ‘denser’ by a factor of 49%. Finally, their Information Rates differ only by 4%.

Is spoken English more efficient than other languages?: https://linguistics.stackexchange.com/questions/2550/is-spoken-english-more-efficient-than-other-languages
As a translator, I can assure you that English is no more efficient than other languages.
--
[some comments on a different answer:]
Russian, when spoken, is somewhat less efficient than English, and that is for sure. No one who has ever worked as an interpreter can deny it. You can convey somewhat more information in English than in Russian within an hour. The English language is not constrained by the rigid case and gender systems of the Russian language, which somewhat reduce the information density of the Russian language. The rules of the Russian language force the speaker to incorporate sometimes unnecessary details in his speech, which can be problematic for interpreters – user74809 Nov 12 '18 at 12:48
But in writing, though, I do think that Russian is somewhat superior. However, when it comes to common daily speech, I do not think that anyone can claim that English is less efficient than Russian. As a matter of fact, I also find Russian to be somewhat more mentally taxing than English when interpreting. I mean, anyone who has lived in the world of Russian and then moved to the world of English is certain to notice that English is somewhat more efficient in everyday life. It is not a night-and-day difference, but it is certainly noticeable. – user74809 Nov 12 '18 at 13:01
...
By the way, I am not knocking Russian. I love Russian, it is my mother tongue and the only language, in which I sound like a native speaker. I mean, I still have a pretty thick Russian accent. I am not losing it anytime soon, if ever. But like I said, living in both worlds, the Moscow world and the Washington D.C. world, I do notice that English is objectively more efficient, even if I am myself not as efficient in it as most other people. – user74809 Nov 12 '18 at 13:40

Do most languages need more space than English?: https://english.stackexchange.com/questions/2998/do-most-languages-need-more-space-than-english
Speaking as a translator, I can share a few rules of thumb that are popular in our profession:
- Hebrew texts are usually shorter than their English equivalents by approximately 1/3. To a large extent, that can be attributed to cheating, what with no vowels and all.
- Spanish, Portuguese and French (I guess we can just settle on Romance) texts are longer than their English counterparts by about 1/5 to 1/4.
- Scandinavian languages are pretty much on par with English. Swedish is a tiny bit more compact.
- Whether or not Russian (and by extension, Ukrainian and Belorussian) is more compact than English is subject to heated debate, and if you ask five people, you'll be presented with six different opinions. However, everybody seems to agree that the difference is just a couple percent, be it this way or the other.

--

A point of reference from the website I maintain. The files where we store the translations have the following sizes:

English: 200k
Portuguese: 208k
Spanish: 209k
German: 219k
And the translations are out of date. That is, there are strings in the English file that aren't yet in the other files.

For Chinese, the situation is a bit different because the character encoding comes into play. Chinese text will have shorter strings, because most words are one or two characters, but each character takes 3–4 bytes (for UTF-8 encoding), so each word is 3–12 bytes long on average. So visually the text takes less space but in terms of the information exchanged it uses more space. This Language Log post suggests that if you account for the encoding and remove redundancy in the data using compression you find that English is slightly more efficient than Chinese.
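
[ed.: the compression comparison can be approximated like this (my sketch; assumes zlib, link with -lz): compress the same content in both languages and compare sizes, since compression strips much of the surface redundancy.]

#include <cstdio>
#include <string>
#include <vector>
#include <zlib.h>

std::size_t compressed_size(const std::string& text) {
    uLongf dest_len = compressBound(text.size());
    std::vector<Bytef> dest(dest_len);
    compress(dest.data(), &dest_len,
             reinterpret_cast<const Bytef*>(text.data()), text.size());
    return dest_len;
}

int main() {
    // hypothetical parallel corpus: same content, two languages
    std::string english = "...text in English...";
    std::string chinese = "...the same text in Chinese (UTF-8)...";
    std::printf("English: %zu raw -> %zu compressed bytes\n",
                english.size(), compressed_size(english));
    std::printf("Chinese: %zu raw -> %zu compressed bytes\n",
                chinese.size(), compressed_size(chinese));
}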

Is English more efficient than Chinese after all?: https://languagelog.ldc.upenn.edu/nll/?p=93
[Executive summary: Who knows?]

This follows up on a series of earlier posts about the comparative efficiency — in terms of text size — of different languages ("One world, how many bytes?", 8/5/2005; "Comparing communication efficiency across languages", 4/4/2008; "Mailbag: comparative communication efficiency", 4/5/2008). Hinrich Schütze wrote:
pdf  study  language  foreign-lang  linguistics  pro-rata  bits  communication  efficiency  density  anglo  japan  asia  china  mediterranean  data  multi  comparison  writing  meta:reading  measure  compression  empirical  evidence-based  experiment  analysis  chart  trivia  cocktail 
february 2019 by nhaliday
Citizendium, the Citizens' Compendium
The Wikipedia alternative from Larry Sanger, Jimmy Wales's nerdy, spurned co-founder. Unfortunately it looks rather empty.
wiki  reference  database  search  comparison  organization  duplication  socs-and-mops  the-devil  god-man-beast-victim  guilt-shame 
november 2018 by nhaliday
An adaptability limit to climate change due to heat stress
Despite the uncertainty in future climate-change impacts, it is often assumed that humans would be able to adapt to any possible warming. Here we argue that heat stress imposes a robust upper limit to such adaptation. Peak heat stress, quantified by the wet-bulb temperature TW, is surprisingly similar across diverse climates today. TW never exceeds 31 °C. Any exceedence of 35 °C for extended periods should induce hyperthermia in humans and other mammals, as dissipation of metabolic heat becomes impossible. While this never happens now, it would begin to occur with global-mean warming of about 7 °C, calling the habitability of some regions into question. With 11–12 °C warming, such regions would spread to encompass the majority of the human population as currently distributed. Eventual warmings of 12 °C are possible from fossil fuel burning. One implication is that recent estimates of the costs of unmitigated climate change are too low unless the range of possible warming can somehow be narrowed. Heat stress also may help explain trends in the mammalian fossil record.
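For intuition about what these TW numbers mean on the ground, here is a rough sketch using Stull's (2011) empirical wet-bulb approximation (an assumption on my part; the paper computes TW from full thermodynamics, and this fit is only valid near sea level for roughly 5–99% relative humidity):

```python
import math

def wet_bulb_stull(t_c: float, rh_pct: float) -> float:
    """Approximate wet-bulb temperature (deg C) from air temperature (deg C)
    and relative humidity (%), via Stull's 2011 empirical fit."""
    return (t_c * math.atan(0.151977 * math.sqrt(rh_pct + 8.313659))
            + math.atan(t_c + rh_pct)
            - math.atan(rh_pct - 1.676331)
            + 0.00391838 * rh_pct ** 1.5 * math.atan(0.023101 * rh_pct)
            - 4.686035)

# even a brutal 35 C day at 60% humidity is only ~28.5 C wet-bulb,
# under today's observed ceiling of ~31 C and well under the 35 C limit
print(round(wet_bulb_stull(35.0, 60.0), 1))
```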

Trajectories of the Earth System in the Anthropocene: http://www.pnas.org/content/early/2018/07/31/1810141115
We explore the risk that self-reinforcing feedbacks could push the Earth System toward a planetary threshold that, if crossed, could prevent stabilization of the climate at intermediate temperature rises and cause continued warming on a “Hothouse Earth” pathway even as human emissions are reduced. Crossing the threshold would lead to a much higher global average temperature than any interglacial in the past 1.2 million years and to sea levels significantly higher than at any time in the Holocene. We examine the evidence that such a threshold might exist and where it might be.
study  org:nat  environment  climate-change  humanity  existence  risk  futurism  estimate  physics  thermo  prediction  temperature  nature  walls  civilization  flexibility  rigidity  embodied  multi  manifolds  plots  equilibrium  phase-transition  oscillation  comparison  complex-systems  earth 
august 2018 by nhaliday
Reconsidering epistemological scepticism – Dividuals
I blogged before about how I consider epistemological scepticism fully compatible with being conservative/reactionary. By epistemological scepticism I mean the worldview where concepts, categories, names, and classes aren’t considered real but entirely mental constructs: just useful ways to categorize phenomena, basically just tools. I think you can call this nominalism as well; the nominalism-realism debate was certainly about this. From this follows the pro-empirical worldview where logic and reasoning are considered highly fallible: hence you don’t think and argue too much, you actually look and check things instead. You rely on experience, not reasoning.

...

Anyhow, the argument is that there are classes, which are indeed artificial, and there are kinds, which are products of natural forces, products of causality.

...

And the deeper – Darwinian – argument, unspoken but obvious, is that any being with a model of reality that does not conform to such real clumps, gets eaten by a grue.

This is impressive. It seems I have to extend my one-variable epistemology to a two-variable epistemology.

My former epistemology was that we generally categorize things according to their uses or dangers for us. So “chair” is – very roughly – defined as “anything we can sit on”. Similarly, we can categorize “predator” as “something that eats us or the animals that are useful for us”.

The unspoken argument against this is that the universe or the biosphere exists neither for us nor against us. A fox can eat your rabbits and a lion can eat you, but they don’t exist just for the sake of making your life difficult.

Hence, if you interpret phenomena only from the viewpoint of their uses or dangers for humans, you get only half the picture right. The other half is what it really is and where it came from.

Copying is everything: https://dividuals.wordpress.com/2015/12/14/copying-is-everything/
Philosophy professor Ruth Millikan’s insight that everything that gets copied from an ancestor has a proper function or teleofunction: it is whatever feature or function made it and its ancestor selected for copying, in competition with all the other similar copiable things. This would mean Aristotelean teleology is correct within the field of copyable things, replicators, i.e. within biology, although in physics it remains obviously incorrect.

Darwinian Reactionary drew attention to it two years ago and I still don’t understand why it didn’t generate a bigger buzz. It is an extremely important insight.

I mean, this is what we were waiting for: a proper synthesis of science and philosophy, and a proper way to rescue Aristotelean teleology, which leads to such excellent common-sense predictions that intuitively it cannot be very wrong, yet modern philosophy has always denied it.

The result is the bridging of the fact-value gap and the burial of the naturalistic fallacy: we CAN derive values from facts. A thing is good if it is well suited to its natural purpose, teleofunction or proper function, i.e. the purpose it was selected and copied for: the purpose, and the suitability for that purpose, that made the ancestors of this thing get selected for copying instead of all the other potential, similar ancestors.

...

What was humankind selected for? I am afraid, the answer is kind of ugly.

Men were selected to compete between groups, to cooperate within groups largely in order to coordinate for the sake of this competition, and to carry on a low-key competition inside the groups as well, for status and leadership. I am afraid intelligence is all about organizing elaborate tribal raids: “coalitionary arms races”. The most civilized case, the least brutal but still expensive one, is arms races in prestige status, not dominance status: when Ancient Athens built pretty buildings, modern France built the TGV, and America sent a man to the Moon in order to gain “gloire”, i.e. the prestige type of respect and status amongst the nations, the larger groups of mankind. If you are the type who doesn’t like blood, you should probably focus on these kinds of civilized, prestige-project competitions.

Women were selected for bearing children, for having strong and intelligent sons and therefore carrying these heritable traits themselves (HBD kind of contradicts the more radically anti-woman aspects of RedPillery: marry a weak and stupid but attractive silly-blondie type of woman and your sons won’t be that great either), for pleasuring men, and in some rarer but existing cases, for being true companions and helpers of their husbands.

https://en.wikipedia.org/wiki/Four_causes
- Matter: a change or movement's material cause, is the aspect of the change or movement which is determined by the material that composes the moving or changing things. For a table, that might be wood; for a statue, that might be bronze or marble.
- Form: a change or movement's formal cause, is a change or movement caused by the arrangement, shape or appearance of the thing changing or moving. Aristotle says for example that the ratio 2:1, and number in general, is the cause of the octave.
- Agent: a change or movement's efficient or moving cause, consists of things apart from the thing being changed or moved, which interact so as to be an agency of the change or movement. For example, the efficient cause of a table is a carpenter, or a person working as one, and according to Aristotle the efficient cause of a boy is a father.
- End or purpose: a change or movement's final cause, is that for the sake of which a thing is what it is. For a seed, it might be an adult plant. For a sailboat, it might be sailing. For a ball at the top of a ramp, it might be coming to rest at the bottom.

https://en.wikipedia.org/wiki/Proximate_and_ultimate_causation
A proximate cause is an event which is closest to, or immediately responsible for causing, some observed result. This exists in contrast to a higher-level ultimate cause (or distal cause) which is usually thought of as the "real" reason something occurred.

...

- Ultimate causation explains traits in terms of evolutionary forces acting on them.
- Proximate causation explains biological function in terms of immediate physiological or environmental factors.
gnon  philosophy  ideology  thinking  conceptual-vocab  forms-instances  realness  analytical-holistic  bio  evolution  telos-atelos  distribution  nature  coarse-fine  epistemic  intricacy  is-ought  values  duplication  nihil  the-classics  big-peeps  darwinian  deep-materialism  selection  equilibrium  subjective-objective  models  classification  smoothness  discrete  schelling  optimization  approximation  comparison  multi  peace-violence  war  coalitions  status  s-factor  fashun  reputation  civilization  intelligence  competition  leadership  cooperate-defect  within-without  within-group  group-level  homo-hetero  new-religion  causation  direct-indirect  ends-means  metabuch  physics  axioms  skeleton  wiki  reference  concept  being-becoming  essence-existence  logos  real-nominal 
july 2018 by nhaliday
Jordan Peterson is Wrong About the Case for the Left
I suggest that the tension of which he speaks is fully formed and self-contained completely within conservatism. Balancing those two forces is, in fact, what conservatism is all about. Thomas Sowell, in A Conflict of Visions: Ideological Origins of Political Struggles describes the conservative outlook as (paraphrasing): “There are no solutions, only tradeoffs.”

The real tension is between balance on the right and imbalance on the left.

In Towards a Cognitive Theory of Politics in the online magazine Quillette, I make the case that left and right are best understood as psychological profiles consisting of 1) cognitive style and 2) moral matrix.

There are two predominant cognitive styles and two predominant moral matrices.

The two cognitive styles are described by Arthur Herman in his book The Cave and the Light: Plato Versus Aristotle, and the Struggle for the Soul of Western Civilization, in which Plato and Aristotle serve as metaphors for them. These two quotes from the book summarize the two styles:

Despite their differences, Plato and Aristotle agreed on many things. They both stressed the importance of reason as our guide for understanding and shaping the world. Both believed that our physical world is shaped by certain eternal forms that are more real than matter. The difference was that Plato’s forms existed outside matter, whereas Aristotle’s forms were unrealizable without it. (p. 61)

The twentieth century’s greatest ideological conflicts do mark the violent unfolding of a Platonist versus Aristotelian view of what it means to be free and how reason and knowledge ultimately fit into our lives (p.539-540)

The Platonic cognitive style amounts to pure abstract reason, “unconstrained” by reality. It has no limiting principle. It is imbalanced. Aristotelian thinking also relies on reason, but it is “constrained” by empirical reality. It has a limiting principle. It is balanced.

The two moral matrices are described by Jonathan Haidt in his book The Righteous Mind: Why Good People Are Divided by Politics and Religion. Moral matrices are collections of moral foundations, which are psychological adaptations of social cognition created in us by hundreds of millions of years of natural selection as we evolved into the social animal. There are six moral foundations. They are:

Care/Harm
Fairness/Cheating
Liberty/Oppression
Loyalty/Betrayal
Authority/Subversion
Sanctity/Degradation
The first three moral foundations are called the “individualizing” foundations because they’re focused on the autonomy and well-being of the individual person. The second three foundations are called the “binding” foundations because they’re focused on helping individuals form into cooperative groups.

One of the two predominant moral matrices relies almost entirely on the individualizing foundations, and of those mostly just care. It is all individualizing all the time. No balance. The other moral matrix relies on all of the moral foundations relatively equally; individualizing and binding in tension. Balanced.

The leftist psychological profile is made from the imbalanced Platonic cognitive style in combination with the first, imbalanced, moral matrix.

The conservative psychological profile is made from the balanced Aristotelian cognitive style in combination with the balanced moral matrix.

It is not true that the tension between left and right is a balance between the defense of the dispossessed and the defense of hierarchies.

It is true that the tension between left and right is between an imbalanced worldview unconstrained by empirical reality and a balanced worldview constrained by it.

A Venn Diagram of the two psychological profiles looks like this:
commentary  albion  canada  journos-pundits  philosophy  politics  polisci  ideology  coalitions  left-wing  right-wing  things  phalanges  reason  darwinian  tradition  empirical  the-classics  big-peeps  canon  comparison  thinking  metabuch  skeleton  lens  psychology  social-psych  morality  justice  civil-liberty  authoritarianism  love-hate  duty  tribalism  us-them  sanctity-degradation  revolution  individualism-collectivism  n-factor  europe  the-great-west-whale  pragmatic  prudence  universalism-particularism  analytical-holistic  nationalism-globalism  social-capital  whole-partial-many  pic  intersection-connectedness  links  news  org:mag  letters  rhetoric  contrarianism  intricacy  haidt  scitariat  critique  debate  forms-instances  reduction  infographic  apollonian-dionysian  being-becoming  essence-existence 
july 2018 by nhaliday
Why read old philosophy? | Meteuphoric
(This story would suggest that in physics students are maybe missing out on learning the styles of thought that produce progress in physics. My guess is that instead they learn them in grad school when they are doing research themselves, by emulating their supervisors, and that the helpfulness of this might partially explain why Nobel prizewinner advisors beget Nobel prizewinner students.)

The story I hear about philosophy—and I actually don’t know how much it is true—is that as bits of philosophy come to have any methodological tools other than ‘think about it’, they break off and become their own sciences. So this would explain philosophy’s lone status in studying old thinkers rather than impersonal methods—philosophy is the lone ur-discipline, the one with no impersonal methods, only thinking.

This suggests a research project: try summarizing what Aristotle is doing rather than Aristotle’s views. Then write a nice short textbook about it.
ratty  learning  reading  studying  prioritizing  history  letters  philosophy  science  comparison  the-classics  canon  speculation  reflection  big-peeps  iron-age  mediterranean  roots  lens  core-rats  thinking  methodology  grad-school  academia  physics  giants  problem-solving  meta:research  scholar  the-trenches  explanans  crux  metameta  duplication  sociality  innovation  quixotic  meta:reading 
june 2018 by nhaliday
Dividuals – The soul is not an indivisible unit and has no unified will
Towards A More Mature Atheism: https://dividuals.wordpress.com/2015/09/17/towards-a-more-mature-atheism/
Human intelligence evolved as a social intelligence, for the purposes of social cooperation, social competition and social domination. It evolved to make us efficient at cooperating to remove obstacles, especially the kinds of obstacles that tend to fight back, i.e. at warfare. If you ever studied strategy or tactics, or just played really good board games, you have probably found that your brain seems strangely well suited for specifically this kind of intellectual activity. It’s not necessarily easier than studying physics, and yet it somehow feels more natural. Physics is like swimming; strategy and tactics are like running. The reason is that our brains truly evolved to be strategic, tactical, diplomatic computers, not physics computers. The question our brains are REALLY good at finding the answer for is “Just what does this guy really want?”

...

Thus, a very basic failure mode of the human brain is to overdetect agency.

I think this is partially what SSC wrote about in Mysticism And Pattern-Matching too. But instead of mystical experiences, my focus is on our brains claiming to detect agency where there is none. Thus my view is closer to Richard Carrier’s definition of the supernatural: it is the idea that some mental things cannot be reduced to nonmental things.

...

Meaning actually means will and agency. It took me a while to figure that one out. When we look for the meaning of life, a meaning in life, or a meaningful life, we look for a will or agency generally outside our own.

...

I am a double oddball – kind of autistic, but still far more interested in human social dynamics, such as history, than in natural sciences or technology. As a result, I do feel a calling to religion – the human world, as opposed to outer space, the human city, the human history, is such a perfect fit for a view like that of Catholicism! The reason is that Catholicism is the pinnacle of human intellectual efforts dealing with human agency. Ideas like Augustine’s three failure modes of the human brain – greed, lust and the desire for power and status – come about as close to correct psychological theories as anyone managed long before the scientific method was discovered. Just read your Chesterbelloc and Lewis. And of course, because the agency radars of Catholics run at full burst, they overdetect it and thus believe in a god behind the universe. My brain, due to my deep interest in human agency and its consequences, also would like to be religious: wouldn’t it be great if the universe was made by something we could talk to, since everything else that I am interested in, from field generals to municipal governments, is an entity I could talk to?

...

I also dislike that atheists often refuse to propose a falsifiable theory because they claim the burden of proof is not on them. Strictly speaking it can be true, but it is still good form to provide one.

Since I am something like an “nontheistic Catholic” anyway (e.g. I believe in original sin from the practical, political angle, I just think it has natural, not supernatural causes: evolution, the move from hunting-gathering to agriculture etc.), all one would need to do to make me fully so is to plug a God concept in my mind.

If you can convince me that my brain is not actually overdetecting agency when I feel a calling to religion, if you can convince me that my brain and most human brains detect agency just about right, there will be no reason for me not to believe in God. Because if there were any sort of agency behind the universe, the smartest bet would be that this agency would be the God of Thomas Aquinas’ Summa. That guy was quite simply a genius.

How to convince me my brain is not overdetecting agency? The simplest way is to convince me that magic, witchcraft, or superstition in general is real, and real in the supernatural sense (I do know Wiccans who cast spells and claim they are natural, not supernatural: divination spells make the brain more aware of hidden details, healing spells recruit the healing processes of the body etc.) You see, Catholics generally do believe in magic and witchcraft, as in: “These really do something, and they do something bad, so never practice them.”

The Strange Places the “God of the Gaps” Takes You: https://dividuals.wordpress.com/2018/05/25/the-strange-places-the-god-of-the-gaps-takes-you/
I assume people are familiar with the God of the Gaps argument. Well, it is usually just an accusation, but Newton for instance really pulled one.

But natural science is inherently different from the humanities, because in natural science you build a predictive model of which you are not a part. You are just a point-like neutral observer.

You cannot do that with other human minds because you just don’t have the computing power to simulate a roughly similarly intelligent mind and have enough left over to actually work with your model. So you put yourself into the predictive model: you make yourself a part of the model itself. You use a certain empathic kind of understanding, a “what would I do in that guy’s shoes?”, and generate your predictions that way.

...

Which means that while natural science is relatively new, and strongly correlates with technological progress, this empathic, self-programming model of the humanities could be practiced millennia ago as well; you don’t need math or tools for it, and you probably cannot expect anything like straight-line progress. Maybe some wisdoms people figure out this way are really timeless and we just keep on rediscovering them.

So imagine, say, Catholicism as a large set of humanities. Sociology, social psychology, moral philosophy in the pragmatic, scientific sense (“What morality makes a society not collapse and actually prosper?”), life wisdom and all that. Basically just figuring out how people tick, how societies tick and how to make them tick well.

...

What do? Well, the obvious move is to pull a Newton and inject a God of the Gaps into your humanities. We tick like that because God. We must do so and so to tick well because God.

...

What I am saying is that we are at some point probably going to prove pretty much all of the this-worldly, pragmatic (moral, sociological, psychological, etc.) aspects of Catholicism correct by something like evolutionary psychology.

And I am saying that while it will dramatically increase our respect for religion, it will also probably be a huge blow to theism. I don’t want that to happen, but I think it will. Because eliminating God from the gaps of natural science does not hurt faith much. But eliminating God from the gaps of the humanities and, yes, religion itself?

My Kind of Atheist: http://www.overcomingbias.com/2018/08/my-kind-of-athiest.html
I think I’ve mentioned somewhere in public that I’m now an atheist, even though I grew up in a very Christian family, and I even joined a “cult” at a young age (against disapproving parents). The proximate cause of my atheism was learning physics in college. But I don’t think I’ve ever clarified in public what kind of an “atheist” or “agnostic” I am. So here goes.

The universe is vast and most of it is very far away in space and time, making our knowledge of those distant parts very thin. So it isn’t at all crazy to think that very powerful beings exist somewhere far away out there, or far before us or after us in time. In fact, many of us hope that we now can give rise to such powerful beings in the distant future. If those powerful beings count as “gods”, then I’m certainly open to the idea that such gods exist somewhere in space-time.

It also isn’t crazy to imagine powerful beings that are “closer” in space and time, but far away in causal connection. They could be in parallel “planes”, in other dimensions, or in “dark” matter that doesn’t interact much with our matter. Or they might perhaps have little interest in influencing or interacting with our sort of things. Or they might just “like to watch.”

But to most religious people, a key emotional appeal of religion is the idea that gods often “answer” prayer by intervening in their world. Sometimes intervening in their head to make them feel different, but also sometimes responding to prayers about their test tomorrow, their friend’s marriage, or their aunt’s hemorrhoids. It is these sort of prayer-answering “gods” in which I just can’t believe. Not that I’m absolutely sure they don’t exist, but I’m sure enough that the term “atheist” fits much better than the term “agnostic.”

These sort of gods supposedly intervene in our world millions of times daily to respond positively to particular prayers, and yet they do not noticeably intervene in world affairs. Not only can we find no physical trace of any machinery or system by which such gods exert their influence, even though we understand the physics of our local world very well, but the history of life and civilization shows no obvious traces of their influence. They know of terrible things that go wrong in our world, but instead of doing much about those things, these gods instead prioritize not leaving any clear evidence of their existence or influence. And yet for some reason they don’t mind people believing in them enough to pray to them, as they often reward such prayers with favorable interventions.
gnon  blog  stream  politics  polisci  ideology  institutions  thinking  religion  christianity  protestant-catholic  history  medieval  individualism-collectivism  n-factor  left-wing  right-wing  tribalism  us-them  cohesion  sociality  ecology  philosophy  buddhism  gavisti  europe  the-great-west-whale  occident  germanic  theos  culture  society  cultural-dynamics  anthropology  volo-avolo  meaningness  coalitions  theory-of-mind  coordination  organizing  psychology  social-psych  fashun  status  nationalism-globalism  models  power  evopsych  EEA  deep-materialism  new-religion  metameta  social-science  sociology  multi  definition  intelligence  science  comparison  letters  social-structure  existence  nihil  ratty  hanson  intricacy  reflection  people  physics  paganism 
june 2018 by nhaliday
Cultural variation in cultural evolution | Proceedings of the Royal Society of London B: Biological Sciences
Cultural evolutionary models have identified a range of conditions under which social learning (copying others) is predicted to be adaptive relative to asocial learning (learning on one's own), particularly in humans where socially learned information can accumulate over successive generations. However, cultural evolution and behavioural economics experiments have consistently shown apparently maladaptive under-utilization of social information in Western populations. Here we provide experimental evidence of cultural variation in people's use of social learning, potentially explaining this mismatch. People in mainland China showed significantly more social learning than British people in an artefact-design task designed to assess the adaptiveness of social information use. People in Hong Kong, and Chinese immigrants in the UK, resembled British people in their social information use, suggesting a recent shift in these groups from social to asocial learning due to exposure to Western culture. Finally, Chinese mainland participants responded less than other participants to increased environmental change within the task. Our results suggest that learning strategies in humans are culturally variable and not genetically fixed, necessitating the study of the ‘social learning of social learning strategies' whereby the dynamics of cultural evolution are responsive to social processes, such as migration, education and globalization.

...

Western education emphasizes individual discovery and creativity, whereas East Asian education emphasizes rote learning from authority [25]. The adoption of consumer products shows less social influence in Western than East Asian countries [26]. Westerners are described as more individualistic/independent, while East Asians are described as more collectivistic/interdependent [27], dimensions which intuitively map on to asocial and social learning, respectively.

Societal background influences social learning in cooperative decision making: https://www.sciencedirect.com/science/article/pii/S1090513817303501
We demonstrate that Chinese participants base their cooperation decisions on information about their peers much more frequently than their British counterparts. Moreover, our results reveal remarkable societal differences in the type of peer information people consider. In contrast to the consensus view, Chinese participants tend to be substantially less majority-oriented than the British. While Chinese participants are inclined to adopt peer behavior that leads to higher payoffs, British participants tend to cooperate only if sufficiently many peers do so too. These results indicate that the basic processes underlying social transmission are not universal; rather, they vary with cultural conditions. As success-based learning is associated with selfish behavior and majority-based learning can help foster cooperation, our study suggests that in different societies social learning can play diverging roles in the emergence and maintenance of cooperation.
study  org:nat  anthropology  cultural-dynamics  sapiens  pop-diff  comparison  sociality  learning  duplication  individualism-collectivism  n-factor  europe  the-great-west-whale  china  asia  sinosphere  britain  anglosphere  strategy  environmental-effects  biodet  within-without  auto-learning  tribalism  things  broad-econ  psychology  cog-psych  social-psych  🎩  🌞  microfoundations  egalitarianism-hierarchy  innovation  creative  explanans  education  culture  curiosity  multi  occident  cooperate-defect  coordination  organizing  self-interest  altruism  patho-altruism  orient  ecology  axelrod 
may 2018 by nhaliday
Theological differences between the Catholic Church and the Eastern Orthodox Church - Wikipedia
Did the Filioque Ruin the West?: https://contingentnotarbitrary.com/2017/06/15/the-filioque-ruined-the-west/
The theology of the filioque makes the Father and the Son equal as sources of divinity. Flattening the hierarchy implicit in the Trinity does away with the Monarchy of the Father: the family relationship becomes less patriarchal and more egalitarian. The Son, with his humanity, mercy, love and sacrifice, is no longer subordinate to the Father, while the Father – the God of the Old Testament, law and tradition – is no longer sovereign. Looks like the change would elevate egalitarianism, compassion, humanity and self-sacrifice while undermining hierarchy, rules, family and tradition. Sound familiar?
article  wiki  reference  philosophy  backup  religion  christianity  theos  ideology  comparison  nitty-gritty  intricacy  europe  the-great-west-whale  occident  russia  MENA  orient  letters  epistemic  truth  science  logic  inference  guilt-shame  volo-avolo  causation  multi  gnon  eastern-europe  roots  explanans  enlightenment-renaissance-restoration-reformation  modernity  egalitarianism-hierarchy  love-hate  free-riding  cooperate-defect  gender  justice  law  tradition  legacy  parenting  ascetic  altruism  farmers-and-foragers  protestant-catholic  exegesis-hermeneutics 
april 2018 by nhaliday
Theory of Self-Reproducing Automata - John von Neumann
Fourth Lecture: THE ROLE OF HIGH AND OF EXTREMELY HIGH COMPLICATION

Comparisons between computing machines and the nervous systems. Estimates of size for computing machines, present and near future.

Estimates for size for the human central nervous system. Excursus about the “mixed” character of living organisms. Analog and digital elements. Observations about the “mixed” character of all componentry, artificial as well as natural. Interpretation of the position to be taken with respect to these.

Evaluation of the discrepancy in size between artificial and natural automata. Interpretation of this discrepancy in terms of physical factors. Nature of the materials used.

The probability of the presence of other intellectual factors. The role of complication and the theoretical penetration that it requires.

Questions of reliability and errors reconsidered. Probability of individual errors and length of procedure. Typical lengths of procedure for computing machines and for living organisms--that is, for artificial and for natural automata. Upper limits on acceptable probability of error in individual operations. Compensation by checking and self-correcting features.

Differences of principle in the way in which errors are dealt with in artificial and in natural automata. The “single error” principle in artificial automata. Crudeness of our approach in this case, due to the lack of adequate theory. More sophisticated treatment of this problem in natural automata: The role of the autonomy of parts. Connections between this autonomy and evolution.

- 10^10 neurons in brain, 10^4 vacuum tubes in largest computer at time
- machines faster: 5 ms from neuron potential to neuron potential, 10^-3 ms for vacuum tubes
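Plugging those numbers in (a back-of-envelope sketch only):

```python
# von Neumann's comparison, c. 1949: component count vs. switching speed
neurons, tubes = 1e10, 1e4        # brain vs. largest contemporary machine
neuron_ms, tube_ms = 5.0, 1e-3    # time per elementary operation, in ms
print(f"brain: {neurons / tubes:,.0f}x more components")  # 1,000,000x
print(f"machine: {neuron_ms / tube_ms:,.0f}x faster")     # 5,000x
```

So the brain's advantage in parallelism (10^6x) dwarfed the machine's advantage in serial speed (5 x 10^3x), which is the size discrepancy the lecture dwells on.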

https://en.wikipedia.org/wiki/John_von_Neumann#Computing
pdf  article  papers  essay  nibble  math  cs  computation  bio  neuro  neuro-nitgrit  scale  magnitude  comparison  acm  von-neumann  giants  thermo  phys-energy  speed  performance  time  density  frequency  hardware  ems  efficiency  dirty-hands  street-fighting  fermi  estimate  retention  physics  interdisciplinary  multi  wiki  links  people  🔬  atoms  automata  duplication  iteration-recursion  turing  complexity  measure  nature  technology  complex-systems  bits  information-theory  circuits  robust  structure  composition-decomposition  evolution  mutation  axioms  analogy  thinking  input-output  hi-order-bits  coding-theory  flexibility  rigidity 
april 2018 by nhaliday
More arguments against blockchain, most of all about trust - Marginal REVOLUTION
Auditing software is hard! The most-heavily scrutinized smart contract in history had a small bug that nobody noticed — that is, until someone did notice it, and used it to steal fifty million dollars. If cryptocurrency enthusiasts putting together a $150m investment fund can’t properly audit the software, how confident are you in your e-book audit? Perhaps you would rather write your own counteroffer software contract, in case this e-book author has hidden a recursion bug in their version to drain your ethereum wallet of all your life savings?

It’s a complicated way to buy a book! It’s not trustless, you’re trusting in the software (and your ability to defend yourself in a software-driven world), instead of trusting other people.
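For context, the fifty-million-dollar theft alluded to above was the DAO incident, which exploited a reentrancy bug. Here is a toy Python sketch of that pattern (illustrative only, not Solidity; all names are made up):

```python
# the contract pays out before updating its books, so a malicious
# recipient can re-enter withdraw() and get paid repeatedly
class Wallet:
    def __init__(self):
        self.balances = {"attacker": 1, "others": 49}

    def withdraw(self, who, receive):
        amount = self.balances[who]
        if amount > 0:
            receive(amount)            # external call happens first...
            self.balances[who] = 0     # ...state update happens too late

w = Wallet()
stolen = []

def evil_receive(amount):
    stolen.append(amount)
    if len(stolen) < 3:                # re-enter before the balance is zeroed
        w.withdraw("attacker", evil_receive)

w.withdraw("attacker", evil_receive)
print(sum(stolen))  # 3 units paid out against a 1-unit balance
```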
econotariat  marginal-rev  links  commentary  quotes  bitcoin  cryptocurrency  blockchain  crypto  trust  money  monetary-fiscal  technology  software  institutions  government  comparison  cost-benefit  primitivism  eden-heaven 
april 2018 by nhaliday
Christian ethics - Wikipedia
Christian ethics is a branch of Christian theology that defines virtuous behavior and wrong behavior from a Christian perspective. Systematic theological study of Christian ethics is called moral theology, possibly with the name of the respective theological tradition, e.g. Catholic moral theology.

Christian virtues are often divided into four cardinal virtues and three theological virtues. Christian ethics includes questions regarding how the rich should act toward the poor, how women are to be treated, and the morality of war. Christian ethicists, like other ethicists, approach ethics from different frameworks and perspectives. The approach of virtue ethics has also become popular in recent decades, largely due to the work of Alasdair MacIntyre and Stanley Hauerwas.[2]

...

The seven Christian virtues are from two sets of virtues. The four cardinal virtues are Prudence, Justice, Restraint (or Temperance), and Courage (or Fortitude). The cardinal virtues are so called because they are regarded as the basic virtues required for a virtuous life. The three theological virtues, are Faith, Hope, and Love (or Charity).

- Prudence: also described as wisdom, the ability to judge between actions with regard to appropriate actions at a given time
- Justice: also considered as fairness, the most extensive and most important virtue[20]
- Temperance: also known as restraint, the practice of self-control, abstention, and moderation tempering the appetition
- Courage: also termed fortitude, forbearance, strength, endurance, and the ability to confront fear, uncertainty, and intimidation
- Faith: belief in God, and in the truth of His revelation, as well as obedience to Him (cf. Rom 1:5; 16:26)[21][22]
- Hope: expectation of and desire of receiving; refraining from despair and capability of not giving up. The belief that God will be eternally present in every human's life and never giving up on His love.
- Charity: a supernatural virtue that helps us love God and our neighbors, the same way as we love ourselves.

Seven deadly sins: https://en.wikipedia.org/wiki/Seven_deadly_sins
The seven deadly sins, also known as the capital vices or cardinal sins, is a grouping and classification of vices of Christian origin.[1] Behaviours or habits are classified under this category if they directly give birth to other immoralities.[2] According to the standard list, they are pride, greed, lust, envy, gluttony, wrath, and sloth,[2] which are also contrary to the seven virtues. These sins are often thought to be abuses or excessive versions of one's natural faculties or passions (for example, gluttony abuses one's desire to eat).

originally:
1 Gula (gluttony)
2 Luxuria/Fornicatio (lust, fornication)
3 Avaritia (avarice/greed)
4 Superbia (pride, hubris)
5 Tristitia (sorrow/despair/despondency)
6 Ira (wrath)
7 Vanagloria (vainglory)
8 Acedia (sloth)

Golden Rule: https://en.wikipedia.org/wiki/Golden_Rule
The Golden Rule (which can be considered a law of reciprocity in some religions) is the principle of treating others as one would wish to be treated. It is a maxim that is found in many religions and cultures.[1][2] The maxim may appear as _either a positive or negative injunction_ governing conduct:

- One should treat others as one would like others to treat oneself (positive or directive form).[1]
- One should not treat others in ways that one would not like to be treated (negative or prohibitive form).[1]
- What you wish upon others, you wish upon yourself (empathic or responsive form).[1]
The Golden Rule _differs from the maxim of reciprocity captured in do ut des—"I give so that you will give in return"—and is rather a unilateral moral commitment to the well-being of the other without the expectation of anything in return_.[3]

The concept occurs in some form in nearly every religion[4][5] and ethical tradition[6] and is often considered _the central tenet of Christian ethics_[7] [8]. It can also be explained from the perspectives of psychology, philosophy, sociology, human evolution, and economics. Psychologically, it involves a person empathizing with others. Philosophically, it involves a person perceiving their neighbor also as "I" or "self".[9] Sociologically, "love your neighbor as yourself" is applicable between individuals, between groups, and also between individuals and groups. In evolution, "reciprocal altruism" is seen as a distinctive advance in the capacity of human groups to survive and reproduce, as their exceptional brains demanded exceptionally long childhoods and ongoing provision and protection even beyond that of the immediate family.[10] In economics, Richard Swift, referring to ideas from David Graeber, suggests that "without some kind of reciprocity society would no longer be able to exist."[11]

...

hmm, Meta-Golden Rule already stated:
Seneca the Younger (c. 4 BC–65 AD), a practitioner of Stoicism (c. 300 BC–200 AD) expressed the Golden Rule in his essay regarding the treatment of slaves: "Treat your inferior as you would wish your superior to treat you."[23]

...

The "Golden Rule" was given by Jesus of Nazareth, who used it to summarize the Torah: "Do to others what you want them to do to you." and "This is the meaning of the law of Moses and the teaching of the prophets"[33] (Matthew 7:12 NCV, see also Luke 6:31). The common English phrasing is "Do unto others as you would have them do unto you". A similar form of the phrase appeared in a Catholic catechism around 1567 (certainly in the reprint of 1583).[34] The Golden Rule is _stated positively numerous times in the Hebrew Pentateuch_ as well as the Prophets and Writings. Leviticus 19:18 ("Forget about the wrong things people do to you, and do not try to get even. Love your neighbor as you love yourself."; see also Great Commandment) and Leviticus 19:34 ("But treat them just as you treat your own citizens. Love foreigners as you love yourselves, because you were foreigners one time in Egypt. I am the Lord your God.").

The Old Testament Deuterocanonical books of Tobit and Sirach, accepted as part of the Scriptural canon by Catholic Church, Eastern Orthodoxy, and the Non-Chalcedonian Churches, express a _negative form_ of the golden rule:

"Do to no one what you yourself dislike."

— Tobit 4:15
"Recognize that your neighbor feels as you do, and keep in mind your own dislikes."

— Sirach 31:15
Two passages in the New Testament quote Jesus of Nazareth espousing the _positive form_ of the Golden rule:

Matthew 7:12
Do to others what you want them to do to you. This is the meaning of the law of Moses and the teaching of the prophets.

Luke 6:31
Do to others what you would want them to do to you.

...

The passage in the book of Luke then continues with Jesus answering the question, "Who is my neighbor?", by telling the parable of the Good Samaritan, indicating that "your neighbor" is anyone in need.[35] This extends to all, including those who are generally considered hostile.

Jesus' teaching goes beyond the negative formulation of not doing what one would not like done to themselves, to the positive formulation of actively doing good to another that, if the situations were reversed, one would desire that the other would do for them. This formulation, as indicated in the parable of the Good Samaritan, emphasizes the need for positive action that brings benefit to another, not simply restraining oneself from negative activities that hurt another. Taken as a rule of judgment, both formulations of the golden rule, the negative and positive, are equally applicable.[36]

The Golden Rule: Not So Golden Anymore: https://philosophynow.org/issues/74/The_Golden_Rule_Not_So_Golden_Anymore
Pluralism is the most serious problem facing liberal democracies today. We can no longer ignore the fact that cultures around the world are not simply different from one another, but profoundly so; and the most urgent area in which this realization faces us is in the realm of morality. Western democratic systems depend on there being at least a minimal consensus concerning national values, especially in regard to such things as justice, equality and human rights. But global communication, economics and the migration of populations have placed new strains on Western democracies. Suddenly we find we must adjust to peoples whose suppositions about the ultimate values and goals of life are very different from ours. A clear lesson from events such as 9/11 is that disregarding these differences is not an option. Collisions between worldviews and value systems can be cataclysmic. Somehow we must learn to manage this new situation.

For a long time, liberal democratic optimism in the West has been shored up by suppositions about other cultures and their differences from us. The cornerpiece of this optimism has been the assumption that whatever differences exist they cannot be too great. A core of ‘basic humanity’ surely must tie all of the world’s moral systems together – and if only we could locate this core we might be able to forge agreements and alliances among groups that otherwise appear profoundly opposed. We could perhaps then shelve our cultural or ideological differences and get on with the more pleasant and productive business of celebrating our core agreement. One cannot fail to see how this hope is repeated in order buoy optimism about the Middle East peace process, for example.

...

It becomes obvious immediately that no matter how widespread we want the Golden Rule to be, there are some ethical systems that we have to admit do not have it. In fact, there are a few traditions that actually disdain the Rule. In philosophy, the Nietzschean tradition holds that the virtues implicit in the Golden Rule are antithetical to the true virtues of self-assertion and the will-to-power. Among religions, there are a good many that prefer to emphasize the importance of self, cult, clan or tribe rather than of general others; and a good many other religions for whom large populations are simply excluded from goodwill, being labeled as outsiders, heretics or … [more]
article  letters  philosophy  morality  ethics  formal-values  religion  christianity  theos  n-factor  europe  the-great-west-whale  occident  justice  war  peace-violence  janus  virtu  list  sanctity-degradation  class  lens  wealth  gender  sex  sexuality  multi  concept  wiki  reference  theory-of-mind  ideology  cooperate-defect  coordination  psychology  cog-psych  social-psych  emotion  cybernetics  ecology  deep-materialism  new-religion  hsu  scitariat  aphorism  quotes  stories  fiction  gedanken  altruism  parasites-microbiome  food  diet  nutrition  individualism-collectivism  taxes  government  redistribution  analogy  lol  troll  poast  death  long-short-run  axioms  judaism  islam  tribalism  us-them  kinship  interests  self-interest  dignity  civil-liberty  values  homo-hetero  diversity  unintended-consequences  within-without  increase-decrease  signum  ascetic  axelrod  guilt-shame  patho-altruism  history  iron-age  mediterranean  the-classics  robust  egalitarianism-hierarchy  intricacy  hypocrisy  parable  roots  explanans  crux  s 
april 2018 by nhaliday
The Hanson-Yudkowsky AI-Foom Debate - Machine Intelligence Research Institute
How Deviant Recent AI Progress Lumpiness?: http://www.overcomingbias.com/2018/03/how-deviant-recent-ai-progress-lumpiness.html
I seem to disagree with most people working on artificial intelligence (AI) risk. While with them I expect rapid change once AI is powerful enough to replace most all human workers, I expect this change to be spread across the world, not concentrated in one main localized AI system. The efforts of AI risk folks to design AI systems whose values won’t drift might stop global AI value drift if there is just one main AI system. But doing so in a world of many AI systems at similar abilities levels requires strong global governance of AI systems, which is a tall order anytime soon. Their continued focus on preventing single system drift suggests that they expect a single main AI system.

The main reason that I understand to expect relatively local AI progress is if AI progress is unusually lumpy, i.e., arriving in unusually fewer larger packages rather than in the usual many smaller packages. If one AI team finds a big lump, it might jump way ahead of the other teams.

However, we have a vast literature on the lumpiness of research and innovation more generally, which clearly says that usually most of the value in innovation is found in many small innovations. We have also so far seen this in computer science (CS) and AI, even if there have been historical examples where much value was found in particular big innovations, such as nuclear weapons or the origin of humans.

Apparently many people associated with AI risk, including the star machine learning (ML) researchers that they often idolize, find it intuitively plausible that AI and ML progress is exceptionally lumpy. Such researchers often say, “My project is ‘huge’, and will soon do it all!” A decade ago my ex-co-blogger Eliezer Yudkowsky and I argued here on this blog about our differing estimates of AI progress lumpiness. He recently offered Alpha Go Zero as evidence of AI lumpiness:

...

In this post, let me give another example (beyond two big lumps in a row) of what could change my mind. I offer a clear observable indicator, for which data should be available now: deviant citation lumpiness in recent ML research. One standard measure of research impact is citations; bigger, lumpier developments gain more citations than smaller ones. And it turns out that the lumpiness of citations is remarkably constant across research fields! See this March 3 paper in Science:

I Still Don’t Get Foom: http://www.overcomingbias.com/2014/07/30855.html
All of which makes it look like I’m the one with the problem; everyone else gets it. Even so, I’m gonna try to explain my problem again, in the hope that someone can explain where I’m going wrong. Here goes.

“Intelligence” just means an ability to do mental/calculation tasks, averaged over many tasks. I’ve always found it plausible that machines will continue to do more kinds of mental tasks better, and eventually be better at pretty much all of them. But what I’ve found it hard to accept is a “local explosion.” This is where a single machine, built by a single project using only a tiny fraction of world resources, goes in a short time (e.g., weeks) from being so weak that it is usually beat by a single human with the usual tools, to so powerful that it easily takes over the entire world. Yes, smarter machines may greatly increase overall economic growth rates, and yes such growth may be uneven. But this degree of unevenness seems implausibly extreme. Let me explain.

If we count by economic value, humans now do most of the mental tasks worth doing. Evolution has given us a brain chock-full of useful well-honed modules. And the fact that most mental tasks require the use of many modules is enough to explain why some of us are smarter than others. (There’d be a common “g” factor in task performance even with independent module variation.) Our modules aren’t that different from those of other primates, but because ours are different enough to allow lots of cultural transmission of innovation, we’ve out-competed other primates handily.
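That parenthetical is easy to check in simulation. A minimal sketch with made-up numbers: give each person independent abilities on many modules, let each task draw on a random half of them, and the shared modules alone produce a positive manifold across tasks:

```python
import numpy as np

rng = np.random.default_rng(0)
people, modules, tasks = 2000, 100, 12
ability = rng.normal(size=(people, modules))               # independent module skill
uses = (rng.random((tasks, modules)) < 0.5).astype(float)  # each task uses ~50 modules
scores = ability @ uses.T                                  # task score = sum over used modules
corr = np.corrcoef(scores, rowvar=False)
offdiag = corr[~np.eye(tasks, dtype=bool)]
print(round(offdiag.mean(), 2))  # ~0.5: tasks correlate, a common "g" emerges
```

Any two tasks share about 25 of their ~50 modules, so pairwise correlations land near 25/50 = 0.5 despite zero correlation at the module level.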

We’ve had computers for over seventy years, and have slowly build up libraries of software modules for them. Like brains, computers do mental tasks by combining modules. An important mental task is software innovation: improving these modules, adding new ones, and finding new ways to combine them. Ideas for new modules are sometimes inspired by the modules we see in our brains. When an innovation team finds an improvement, they usually sell access to it, which gives them resources for new projects, and lets others take advantage of their innovation.

...

In Bostrom’s graph above, the line for an initially small project and system has a much higher slope, which means that in a short time it becomes vastly better at software innovation, better than the entire rest of the world put together. And my key question is: how could it plausibly do that? Since the rest of the world is already trying the best it can to usefully innovate, and to abstract to promote such innovation, what exactly gives one small project such a huge advantage to let it innovate so much faster?

...

In fact, most software innovation seems to be driven by hardware advances, instead of innovator creativity. Apparently, good ideas are available but must usually wait until hardware is cheap enough to support them.

Yes, sometimes architectural choices have wider impacts. But I was an artificial intelligence researcher for nine years, ending twenty years ago, and I never saw an architecture choice make a huge difference, relative to other reasonable architecture choices. For most big systems, overall architecture matters a lot less than getting lots of detail right. Researchers have long wandered the space of architectures, mostly rediscovering variations on what others found before.

Some hope that a small project could be much better at innovation because it specializes in that topic, and much better understands new theoretical insights into the basic nature of innovation or intelligence. But I don’t think those are actually topics where one can usefully specialize much, or where we’ll find much useful new theory. To be much better at learning, the project would instead have to be much better at hundreds of specific kinds of learning. Which is very hard to do in a small project.

What does Bostrom say? Alas, not much. He distinguishes several advantages of digital over human minds, but all software shares those advantages. Bostrom also distinguishes five paths: better software, brain emulation (i.e., ems), biological enhancement of humans, brain-computer interfaces, and better human organizations. He doesn’t think interfaces would work, and sees organizations and better biology as only playing supporting roles.

...

Similarly, while you might imagine someday standing in awe in front of a super intelligence that embodies all the power of a new age, superintelligence just isn’t the sort of thing that one project could invent. As “intelligence” is just the name we give to being better at many mental tasks by using many good mental modules, there’s no one place to improve it. So I can’t see a plausible way one project could increase its intelligence vastly faster than could the rest of the world.

Takeoff speeds: https://sideways-view.com/2018/02/24/takeoff-speeds/
Futurists have argued for years about whether the development of AGI will look more like a breakthrough within a small group (“fast takeoff”), or a continuous acceleration distributed across the broader economy or a large firm (“slow takeoff”).

I currently think a slow takeoff is significantly more likely. This post explains some of my reasoning and why I think it matters. Mostly the post lists arguments I often hear for a fast takeoff and explains why I don’t find them compelling.

(Note: this is not a post about whether an intelligence explosion will occur. That seems very likely to me. Quantitatively I expect it to go along these lines. So e.g. while I disagree with many of the claims and assumptions in Intelligence Explosion Microeconomics, I don’t disagree with the central thesis or with most of the arguments.)
ratty  lesswrong  subculture  miri-cfar  ai  risk  ai-control  futurism  books  debate  hanson  big-yud  prediction  contrarianism  singularity  local-global  speed  speedometer  time  frontier  distribution  smoothness  shift  pdf  economics  track-record  abstraction  analogy  links  wiki  list  evolution  mutation  selection  optimization  search  iteration-recursion  intelligence  metameta  chart  analysis  number  ems  coordination  cooperate-defect  death  values  formal-values  flux-stasis  philosophy  farmers-and-foragers  malthus  scale  studying  innovation  insight  conceptual-vocab  growth-econ  egalitarianism-hierarchy  inequality  authoritarianism  wealth  near-far  rationality  epistemic  biases  cycles  competition  arms  zero-positive-sum  deterrence  war  peace-violence  winner-take-all  technology  moloch  multi  plots  research  science  publishing  humanity  labor  marginal  urban-rural  structure  composition-decomposition  complex-systems  gregory-clark  decentralized  heavy-industry  magnitude  multiplicative  endogenous-exogenous  models  uncertainty  decision-theory  time-prefer 
april 2018 by nhaliday
Diving into Chinese philosophy – Gene Expression
Back when I was in college one of my roommates was taking a Chinese philosophy class for a general education requirement. A double major in mathematics and economics (he went on to get an economics Ph.D.), he found the lack of formal rigor in the field rather maddening. I thought this was fair, but I suggested to him that the this-worldly and often non-metaphysical orientation of much of Chinese philosophy made it less amenable to formal and logical analysis.

...

IMO the much more problematic thing about premodern Chinese political philosophy from the point of view of the West is its lack of interest in constitutionalism and the rule of law, stemming from a generally less rationalist approach than the Classical Westerners', rather than from any sort of inherent anti-individualism or collectivism or whatever. For someone like Aristotle the constitutional rule of law was the highest moral good in itself and the definition of justice; very much not so for Confucius or for Zhu Xi. They still believed in Justice in the sense of people getting what they deserve, but they didn’t really consider the written rule of law an appropriate way to conceptualize it. OG Confucius leaned more towards the unwritten traditions and rituals passed down from the ancestors, and Neoconfucianism leaned more towards a sort of Universal Reason that could be accessed by the individual’s subjective understanding but which again need not necessarily be written down (although unlike Kant/the Enlightenment it basically implies that such subjective reasoning will naturally lead one to reaffirm the ancient traditions). In left-right political spectrum terms IMO this leads to a well-defined right and left and a big old hole in the center where classical republicanism would be in the West. This resonates pretty well with modern East Asian political history IMO

https://www.radicalphilosophy.com/article/is-logos-a-proper-noun
Is logos a proper noun?
Or, is Aristotelian Logic translatable into Chinese?
gnxp  scitariat  books  recommendations  discussion  reflection  china  asia  sinosphere  philosophy  logic  rigor  rigidity  flexibility  leviathan  law  individualism-collectivism  analytical-holistic  systematic-ad-hoc  the-classics  canon  morality  ethics  formal-values  justice  reason  tradition  government  polisci  left-wing  right-wing  order-disorder  eden-heaven  analogy  similarity  comparison  thinking  summary  top-n  n-factor  universalism-particularism  duality  rationality  absolute-relative  subjective-objective  the-self  apollonian-dionysian  big-peeps  history  iron-age  antidemos  democracy  institutions  darwinian  multi  language  concept  conceptual-vocab  inference  linguistics  foreign-lang  mediterranean  europe  germanic  mostly-modern  gallic  culture 
march 2018 by nhaliday