nhaliday + commentary 989
I don't like notebooks. - Joel Grus (Allen Institute for Artificial Intelligence) - YouTube
2 days ago by nhaliday
https://news.ycombinator.com/item?id=17856700
https://www.reddit.com/r/MachineLearning/comments/9a7usg/d_i_dont_like_notebooks/
https://www.reddit.com/r/Python/comments/9aoi35/i_dont_like_notebooks_joel_grus_jupytercon_2018/
others:
https://www.reddit.com/r/MachineLearning/comments/4q9ev0/is_it_only_me_that_thinks_jupyter_is_horrible/
https://towardsdatascience.com/5-reasons-why-jupyter-notebooks-suck-4dc201e27086
video
presentation
techtariat
slides
programming
engineering
data-science
best-practices
python
frameworks
ecosystem
live-coding
hci
ui
ux
state
sci-comp
contrarianism
rhetoric
critique
rant
worrydream
multi
hn
commentary
reddit
social
org:med
org:popup
acmtariat
move-fast-(and-break-things)
summary
list
top-n
Etsy’s experiment with immutable documentation | Hacker News
hn commentary techtariat org:com technical-writing collaboration best-practices programming engineering documentation communication flux-stasis interface-compatibility synchrony cost-benefit time sequential ends-means software project yak-shaving detail-architecture map-territory state
23 days ago by nhaliday
My Conversation with Eric Schmidt - Marginal REVOLUTION
27 days ago by nhaliday
actually fairly interesting
economics
marginal-rev
interview
org:med
commentary
google
barons
sv
tech
reflection
stories
business
entrepreneurialism
the-founding
init
culture
management
advertising
money
cost-benefit
startups
social
media
frontier
education
higher-ed
signaling
human-capital
paying-rent
arbitrage
blockchain
charity
effective-altruism
capitalism
long-short-run
incentives
internet
world
china
asia
authoritarianism
usa
great-powers
regulation
skunkworks
urban-rural
housing
venture
longevity
malaise
stagnation
growth-econ
compensation
class
winner-take-all
polarization
persuasion
info-foraging
Archiving URLs | Hacker News
28 days ago by nhaliday
https://news.ycombinator.com/item?id=18511760
Pinboard: https://news.ycombinator.com/item?id=1941823
https://web.archive.org/web/20170707135337/http://blog.pinboard.in:80/2010/11/bookmark_archives_that_don_t
https://github.com/pirate/ArchiveBox
https://github.com/pirate/ArchiveBox/wiki/Web-Archiving-Community
https://github.com/iipc/awesome-web-archiving
https://github.com/ArchiveTeam/grab-site
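[ed.: a minimal sketch of the kind of archiving these tools automate, using the Wayback Machine's public "Save Page Now" endpoint (https://web.archive.org/save/<url>); the URL list is a placeholder, not taken from the links above:]
```python
# push a list of bookmarked URLs to the Wayback Machine's "Save Page Now"
# endpoint; URLS below is a placeholder list, not real bookmarks
import time
import requests

URLS = [
    "https://example.com/some/bookmarked/page",
    "https://example.org/another/page",
]

for url in URLS:
    # a plain GET to https://web.archive.org/save/<url> asks the Wayback
    # Machine to take a fresh snapshot of <url>
    resp = requests.get("https://web.archive.org/save/" + url, timeout=120)
    print(f"{url} -> HTTP {resp.status_code}")
    time.sleep(10)  # be gentle; the endpoint throttles rapid-fire requests
```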
hn
commentary
ratty
gwern
internet
backup
pinboard
project
howto
guide
analysis
data
pro-rata
programming
long-short-run
time
sequential
spreading
flux-stasis
nihil
multi
comparison
yak-shaving
repo
tools
diogenes
techtariat
org:com
paste
links
list
top-n
sleuthin
protocol-metadata
Streamlit — the fastest way to build custom ML tools
28 days ago by nhaliday
very cool, "React" + Jupyter
https://news.ycombinator.com/item?id=21158487
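[ed.: a toy sketch of what the Streamlit model looks like (every widget change reruns the whole script, which is the "React"-like part); this sine-wave demo is my own illustration, not from the linked thread:]
```python
# toy Streamlit app -- save as sine_demo.py and run with: streamlit run sine_demo.py
# every widget interaction reruns the whole script top to bottom
import numpy as np
import pandas as pd
import streamlit as st

st.title("Noisy sine explorer")
freq = st.slider("Frequency", 1, 10, 3)          # int slider: min, max, default
noise = st.slider("Noise level", 0.0, 1.0, 0.1)  # float slider

x = np.linspace(0, 2 * np.pi, 200)
y = np.sin(freq * x) + noise * np.random.randn(x.size)
st.line_chart(pd.DataFrame({"sin": y}))
```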
python
libraries
frameworks
worrydream
ui
let-me-see
move-fast-(and-break-things)
dynamic
machine-learning
data-science
multi
hn
commentary
comparison
facebook
javascript
dataviz
yak-shaving
state
web
frontend
functional
caching
sci-comp
In Numbers: Ask HN: Who is hiring? (June 2018)
28 days ago by nhaliday
mainly an analysis of in-demand skills (particular languages/libraries/frameworks, not general skills); a toy version of this kind of keyword count is sketched after the links below
similar: https://www.hiringlab.org/2019/11/19/todays-top-tech-skills/
https://news.ycombinator.com/item?id=21620687
https://jessesw.com/Data-Science-Skills/
https://news.ycombinator.com/item?id=9287491
old (orig in multi here: https://pinboard.in/u:nhaliday/b:a4e6f5b80faf):
https://www.latitude.work/trends/july-2017
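[ed.: a toy version of this kind of keyword count, pulling the comments of a "Who is hiring?" thread from the public HN Algolia API (https://hn.algolia.com/api/v1/); STORY_ID and KEYWORDS below are illustrative placeholders, not values from the linked analyses:]
```python
# tally keyword mentions across the comments of one "Ask HN: Who is hiring?" thread
import re
from collections import Counter
import requests

STORY_ID = 17205865  # hypothetical thread id -- substitute a real "Who is hiring?" post
KEYWORDS = ["python", "javascript", "react", "java", "golang", "rust", "aws", "docker"]

resp = requests.get(
    "https://hn.algolia.com/api/v1/search_by_date",
    params={"tags": f"comment,story_{STORY_ID}", "hitsPerPage": 1000},
    timeout=30,
)
comments = [hit.get("comment_text") or "" for hit in resp.json().get("hits", [])]

counts = Counter()
for text in comments:
    lowered = text.lower()
    for kw in KEYWORDS:
        if re.search(rf"\b{re.escape(kw)}\b", lowered):
            counts[kw] += 1  # count each comment at most once per keyword

for kw, n in counts.most_common():
    print(f"{kw:12s} {n:4d} comments")
```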
techtariat
org:com
data
analysis
visualization
trends
time-series
ubiquity
supply-demand
human-capital
programming
tech
hn
yc
ecosystem
pls
python
javascript
jvm
golang
c(pp)
types
frameworks
libraries
facebook
frontend
web
cloud
tech-infrastructure
devops
dbs
distribution
pro-rata
local-global
let-me-see
amazon
working-stiff
multi
commentary
ranking
list
top-n
saas
software
maps
usa
correlation
crosstab
r-lang
dynamic
calculator
tools
jobs
career
REST is the new SOAP | Hacker News
28 days ago by nhaliday
Nobody Understands REST or HTTP: https://news.ycombinator.com/item?id=2724488
Some REST best practices: https://news.ycombinator.com/item?id=8618243
REST was never about CRUD: https://news.ycombinator.com/item?id=17563851
Post-REST: https://news.ycombinator.com/item?id=18485978
https://stackoverflow.com/questions/16360351/get-http-request-payload
https://stackoverflow.com/questions/978061/http-get-with-request-body
Ask HN: Were you happy moving your API from REST to GraphQL?: https://news.ycombinator.com/item?id=17565508
From REST to GraphQL: https://news.ycombinator.com/item?id=10365555
REST in Peace. Long Live GraphQL: https://news.ycombinator.com/item?id=14839576
GraphQL Didn't Kill REST: https://news.ycombinator.com/item?id=17572154
https://www.freecodecamp.org/news/five-common-problems-in-graphql-apps-and-how-to-fix-them-ac74d37a293c/
hn
commentary
techtariat
org:ngo
programming
engineering
web
client-server
networking
rant
rhetoric
contrarianism
idk
org:med
best-practices
working-stiff
api
models
protocol-metadata
internet
state
structure
chart
multi
q-n-a
discussion
expert-experience
track-record
reflection
cost-benefit
design
system-design
comparison
code-organizing
flux-stasis
interface-compatibility
trends
gotchas
stackex
state-of-art
distributed
concurrency
abstraction
concept
conceptual-vocab
python
ubiquity
list
top-n
duplication
synchrony
performance
caching
Ask HN: What's a promising area to work on? | Hacker News
29 days ago by nhaliday
https://www.reddit.com/r/cscareerquestions/comments/d65upt/how_did_you_know_what_niche_to_go_into/
https://www.reddit.com/r/cscareerquestions/comments/7yb9ol/how_did_you_find_your_nichespecialization/
https://www.reddit.com/r/cscareerquestions/comments/58qr8d/what_specialization_tends_to_have_the_biggest/
https://www.reddit.com/r/cscareerquestions/comments/b4jp0k/what_are_some_worthy_job_specializations_to_get/
https://www.reddit.com/r/cscareerquestions/comments/awabf7/what_are_some_underrated_specializations_in/
We’re in the Middle of a Data Engineering Talent Shortage: https://news.ycombinator.com/item?id=12454901
https://www.reddit.com/r/cscareerquestions/comments/a4rhgu/is_programming_languages_a_good/
https://www.reddit.com/r/cscareerquestions/comments/ahyyib/recommended_areas_to_specialize_in_if_youre_not/
hn
discussion
q-n-a
ideas
impact
trends
the-bones
speedometer
technology
applications
tech
cs
programming
list
top-n
recommendations
lens
machine-learning
deep-learning
security
privacy
crypto
software
hardware
cloud
biotech
CRISPR
bioinformatics
biohacking
blockchain
cryptocurrency
crypto-anarchy
healthcare
graphics
SIGGRAPH
vr
automation
universalism-particularism
expert-experience
reddit
social
arbitrage
supply-demand
ubiquity
cost-benefit
compensation
chart
career
planning
strategy
long-term
advice
sub-super
commentary
rhetoric
org:com
techtariat
human-capital
prioritizing
tech-infrastructure
working-stiff
data-science
Shtetl-Optimized » Blog Archive » What does the NSA think of academic cryptographers? Recently-declassified document provides clues
tcstariat aaronson commentary links quotes lens academia interdisciplinary government intel white-paper crypto rigorous-crypto complexity tcs conference impetus telos-atelos motivation impact proof-systems expert-experience gedanken reduction
29 days ago by nhaliday
The Open Steno Project | Hacker News
4 weeks ago by nhaliday
https://web.archive.org/web/20170315133208/http://www.danieljosephpetersen.com/posts/programming-and-stenography.html
I think at the end of the day, the Plover guys are trying to solve the wrong problem. Stenography is a dying field. I don’t wish anyone to lose their livelihood, but realistically speaking, the job should not exist once speech-to-text technology advances far enough. I’m not claiming that the field will be replaced by it, but I also don’t love the idea of people having to learn such an inane and archaic system.
hn
commentary
keyboard
speed
efficiency
writing
language
maker
homepage
project
multi
techtariat
cost-benefit
critique
expert-experience
programming
backup
contrarianism
The Definitive Guide To Website Authentication | Hacker News
5 weeks ago by nhaliday
Ask HN: What do you use for authentication and authorization?: https://news.ycombinator.com/item?id=18767767
Ask HN: What's the recommended method of adding authentication to a REST API?: https://news.ycombinator.com/item?id=16157002
Authentication Cheat Sheet: https://news.ycombinator.com/item?id=8984266
https://stackoverflow.com/questions/37582444/jwt-vs-cookies-for-token-based-authentication
https://stackoverflow.com/questions/39909419/what-are-the-main-differences-between-jwt-and-oauth-authentication
https://medium.com/@sherryhsu/session-vs-token-based-authentication-11a6c5ac45e4
this is the clearest explanation of session vs. token authentication to me (it seems to hinge on whether anything is stored in the server's database, as opposed to just cryptographically signing the user's permissions so they can't be forged and nothing needs to be checked against the DB); see the sketch after these links: https://security.stackexchange.com/questions/81756/session-authentication-vs-token-authentication
https://stackoverflow.com/questions/40200413/sessions-vs-token-based-authentication
https://stackoverflow.com/questions/17000835/token-authentication-vs-cookies
https://softwareengineering.stackexchange.com/questions/350092/cookie-based-vs-session-vs-token-based-vs-claims-based-authentications
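[ed.: a minimal sketch of the session-vs-token distinction from the answer above: a session id is an opaque key into server-side storage, while a signed token carries its own claims and only needs signature verification; this toy HMAC scheme is for illustration only, not a real JWT implementation:]
```python
# sessions vs. signed tokens, drastically simplified
import hashlib
import hmac
import json
import secrets

SECRET = b"server-side-secret"  # placeholder signing key
SESSIONS = {}                   # server-side session store

# Session-based: keep state on the server, hand the client an opaque id.
def create_session(user):
    sid = secrets.token_hex(16)
    SESSIONS[sid] = {"user": user, "role": "admin"}
    return sid

def check_session(sid):
    return SESSIONS.get(sid)    # requires a storage lookup on every request

# Token-based: sign the claims themselves (JWT-style, toy encoding).
def create_token(user):
    payload = json.dumps({"user": user, "role": "admin"}).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return payload.hex() + "." + sig

def check_token(token):
    payload_hex, sig = token.split(".")
    payload = bytes.fromhex(payload_hex)
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    # no database lookup: the signature alone proves the claims weren't forged
    return json.loads(payload) if hmac.compare_digest(sig, expected) else None

if __name__ == "__main__":
    print(check_session(create_session("alice")))
    print(check_token(create_token("alice")))
```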
hn
commentary
q-n-a
stackex
programming
identification-equivalence
security
web
client-server
crypto
checklists
best-practices
objektbuch
api
multi
cheatsheet
chart
system-design
nitty-gritty
yak-shaving
comparison
explanation
summary
jargon
state
networking
protocol-metadata
time
Build your own X: project-based programming tutorials | Hacker News
5 weeks ago by nhaliday
https://news.ycombinator.com/item?id=21430321
https://www.reddit.com/r/programming/comments/8j0gz3/build_your_own_x/
hn
commentary
repo
paste
programming
minimum-viable
frontier
allodium
list
links
roadmap
accretion
quixotic
🖥
interview-prep
system-design
move-fast-(and-break-things)
graphics
SIGGRAPH
vr
p2p
project
blockchain
cryptocurrency
bitcoin
bots
terminal
dbs
virtualization
frontend
web
javascript
frameworks
libraries
facebook
pls
c(pp)
python
dotnet
jvm
ocaml-sml
haskell
networking
systems
metal-to-virtual
deep-learning
os
physics
mechanics
simulation
automata-languages
compilers
search
internet
huge-data-the-biggest
strings
computer-vision
multi
reddit
social
detail-architecture
The Baseline Costs of JavaScript Frameworks | Hacker News
hn commentary techtariat org:med programming frontend web javascript performance latency-throughput cost-benefit frameworks libraries ecosystem client-server tradeoffs intricacy engineering no-go data benchmarks caching unintended-consequences security comparison
5 weeks ago by nhaliday
Has Australia Really Had a 28-Year Expansion? (Yes!) - Marginal REVOLUTION
7 weeks ago by nhaliday
The bottom line is that however you measure it, Australian performance looks very good. Moreover RER are correct that one of the reasons for strong Australian economic performance is higher population growth rates. It’s not that higher population growth rates are masking poorer performance in real GDP per capita, however, it’s more in my view that higher population growth rates are contributing to strong performance as measured by both real GDP and real GDP per capita.
--
Control+F "China"
0 results.
China gets a 40 year expansion relying heavily on commodities. Australia squeezes 30 years out of it by happily selling to the Chinese.
yeah...
econotariat
marginal-rev
commentary
links
summary
data
analysis
economics
growth-econ
econ-metrics
wealth
china
asia
anglo
anglosphere
trade
population
demographics
increase-decrease
Advantages and disadvantages of building a single page web application - Software Engineering Stack Exchange
7 weeks ago by nhaliday
Advantages
- All data has to be available via some sort of API - this is a big advantage for my use case as I want to have an API to my application anyway. Right now about 60-70% of my calls to get/update data are done through a REST API. Doing a single page application will allow me to better test my REST API since the application itself will use it. It also means that as the application grows, the API itself will grow since that is what the application uses; no need to maintain the API as an add-on to the application.
- More responsive application - since all data loaded after the initial page is kept to a minimum and transmitted in a compact format (like JSON), data requests should generally be faster, and the server will do slightly less processing.
Disadvantages
- Duplication of code - for example, model code. I am going to have to create models both on the server side (PHP in this case) and the client side in Javascript.
- Business logic in Javascript - I can't give any concrete examples of why this would be bad, but it just doesn't feel right to me having business logic in Javascript that anyone can read.
- Javascript memory leaks - since the page never reloads, Javascript memory leaks can happen, and I would not even know where to begin to debug them.
--
Disadvantages I often see with Single Page Web Applications:
- Inability to link to a specific part of the site, there's often only 1 entry point.
- Dysfunctional back and forward buttons.
- The use of tabs is limited or non-existent.
(especially mobile:)
- Take very long to load.
- Don't function at all.
- Can't reload a page, a sudden loss of network takes you back to the start of the site.
This answer is outdated; most single-page application frameworks have a way to deal with the issues above – Luis May 27 '14 at 1:41
@Luis while the technology is there, too often it isn't used. – Pieter B Jun 12 '14 at 6:53
https://softwareengineering.stackexchange.com/questions/201838/building-a-web-application-that-is-almost-completely-rendered-by-javascript-whi
https://softwareengineering.stackexchange.com/questions/143194/what-advantages-are-conferred-by-using-server-side-page-rendering
Server-side HTML rendering:
- Fastest browser rendering
- Page caching is possible as a quick-and-dirty performance boost
- For "standard" apps, many UI features are pre-built
- Sometimes considered more stable because components are usually subject to compile-time validation
- Leans on backend expertise
- Sometimes faster to develop*
*When UI requirements fit the framework well.
Client-side HTML rendering:
- Lower bandwidth usage
- Slower initial page render. May not even be noticeable in modern desktop browsers. If you need to support IE6-7, or many mobile browsers (mobile webkit is not bad) you may encounter bottlenecks.
- Building API-first means the client can just as easily be a proprietary app, thin client, another web service, etc.
- Leans on JS expertise
- Sometimes faster to develop**
**When the UI is largely custom, with more interesting interactions. Also, I find coding in the browser with interpreted code noticeably speedier than waiting for compiles and server restarts.
https://softwareengineering.stackexchange.com/questions/237537/progressive-enhancement-vs-single-page-apps
https://stackoverflow.com/questions/21862054/single-page-application-advantages-and-disadvantages
=== ADVANTAGES ===
1. An SPA is extremely good for very responsive sites.
2. With an SPA we don't need extra queries to the server to download pages.
3. Maybe there are other advantages? I haven't heard of any others.
=== DISADVANTAGES ===
1. Client must enable javascript.
2. Only one entry point to the site.
3. Security.
https://softwareengineering.stackexchange.com/questions/287819/should-you-write-your-back-end-as-an-api
focused on .NET
https://softwareengineering.stackexchange.com/questions/337467/is-it-normal-design-to-completely-decouple-backend-and-frontend-web-applications
A SPA comes with a few issues associated with it. Here are just a few that pop in my mind now:
- it's mostly JavaScript. One error in a section of your application might prevent other sections of the application from working because of that JavaScript error.
- CORS.
- SEO.
- separate front-end application means separate projects, deployment pipelines, extra tooling, etc;
- security is harder to do when all the code is on the client;
- completely interact in the front-end with the user and only load data as needed from the server. So better responsiveness and user experience;
- depending on the application, some processing done on the client means you spare the server of those computations.
- have a better flexibility in evolving the back-end and front-end (you can do it separately);
- if your back-end is essentially an API, you can have other clients in front of it like native Android/iPhone applications;
- the separation might make it easier for front-end developers to do CSS/HTML without needing to have a server application running on their machine.
Create your own dysfunctional single-page app: https://news.ycombinator.com/item?id=18341993
I think there are three broadly assumed user benefits of single-page apps:
1. Improved user experience.
2. Improved perceived performance.
3. It’s still the web.
5 mistakes to create a dysfunctional single-page app
Mistake 1: Under-estimate long-term development and maintenance costs
Mistake 2: Use the single-page app approach unilaterally
Mistake 3: Under-invest in front end capability
Mistake 4: Use naïve dev practices
Mistake 5: Surf the waves of framework hype
The disadvantages of single page applications: https://news.ycombinator.com/item?id=9879685
You probably don't need a single-page app: https://news.ycombinator.com/item?id=19184496
https://news.ycombinator.com/item?id=20384738
MPA advantages:
- Stateless requests
- The browser knows how to deal with a traditional architecture
- Fewer, more mature tools
- SEO for free
When to go for the single page app:
- Core functionality is real-time (e.g Slack)
- Rich UI interactions are core to the product (e.g Trello)
- Lots of state shared between screens (e.g. Spotify)
Hybrid solutions
...
Github uses this hybrid approach.
...
Ask HN: Is it ok to use traditional server-side rendering these days?: https://news.ycombinator.com/item?id=13212465
https://www.reddit.com/r/webdev/comments/cp9vb8/are_people_still_doing_ssr/
https://www.reddit.com/r/webdev/comments/93n60h/best_javascript_modern_approach_to_multi_page/
https://www.reddit.com/r/webdev/comments/aax4k5/do_you_develop_solely_using_spa_these_days/
The SEO issues with SPAs are a persistent concern you hear about a lot, yet nobody ever quantifies them. That is because search engines keep the operation of their crawler bots and indexing secret. I have read into it some, and it seems the problem used to exist, somewhat, but is more or less gone now. Bots can deal with SPAs fine.
--
I try to avoid building a SPA nowadays if possible. Not because of SEO (there are now server-side solutions to help with that), but because a SPA increases the complexity of the code base by an order of magnitude. State management with Redux... Async this and that... URL routing... And don't forget to manage page history.
How about just render pages with templates and be done?
If I need a highly dynamic UI for a particular feature, then I'd probably build an embeddable JS widget for it.
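[ed.: a minimal sketch of the two styles being compared: the same data served once as server-rendered HTML (multi-page style) and once as JSON for a client-side app to render; Flask and the route/data names are my own illustration, not from any of the answers above:]
```python
# server-rendered page vs. JSON API for a client-rendered (SPA) frontend
from flask import Flask, jsonify, render_template_string

app = Flask(__name__)

ITEMS = [{"id": 1, "title": "First post"}, {"id": 2, "title": "Second post"}]

PAGE = """
<h1>Posts</h1>
<ul>{% for item in items %}<li>{{ item.title }}</li>{% endfor %}</ul>
"""

@app.route("/posts")        # multi-page / server-rendered style: ship finished HTML
def posts_page():
    return render_template_string(PAGE, items=ITEMS)

@app.route("/api/posts")    # SPA style: ship JSON, let client-side JS render it
def posts_api():
    return jsonify(posts=ITEMS)

if __name__ == "__main__":
    app.run(debug=True)
```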
q-n-a
stackex
programming
engineering
tradeoffs
system-design
design
web
frontend
javascript
cost-benefit
analysis
security
state
performance
traces
measurement
intricacy
code-organizing
applicability-prereqs
multi
comparison
smoothness
shift
critique
techtariat
chart
ui
coupling-cohesion
interface-compatibility
hn
commentary
best-practices
discussion
trends
client-server
api
composition-decomposition
cycles
frameworks
ecosystem
degrees-of-freedom
dotnet
working-stiff
reddit
social
donnemartin/system-design-primer: Learn how to design large-scale systems. Prep for the system design interview. Includes Anki flashcards.
systems engineering guide recruiting tech career jobs pragmatic system-design 🖥 techtariat minimum-viable working-stiff transitions progression interview-prep move-fast-(and-break-things) repo hn commentary retention puzzles examples client-server detail-architecture cheatsheet accretion
7 weeks ago by nhaliday
Cross-Platform GUI Toolkit Trainwreck (2016) | Hacker News
7 weeks ago by nhaliday
https://news.ycombinator.com/item?id=13952007
Revery (Reason/OCaml): https://news.ycombinator.com/item?id=18994837
hn
commentary
programming
frameworks
libraries
comparison
desktop
interface-compatibility
web
cocoa
osx
linux
unix
microsoft
multi
techtariat
ui
universalism-particularism
worse-is-better/the-right-thing
ubiquity
software
flexibility
ocaml-sml
ecosystem
Removing User Interface Complexity, or Why React is Awesome | Hacker News
7 weeks ago by nhaliday
You’re Missing the Point of React: https://medium.com/@dan_abramov/youre-missing-the-point-of-react-a20e34a51e1a
- Dan Abramov
https://reactjs.org/blog/2013/06/05/why-react.html
https://blog.gyrosco.pe/facebook-just-taught-us-all-how-to-build-websites-51f1e7e996f2
hn
commentary
techtariat
intricacy
parsimony
worrydream
ui
frontend
web
javascript
libraries
frameworks
ecosystem
impetus
cost-benefit
explanation
state
functional
time
direction
checking
facebook
DSL
tutorial
dynamic
examples
abstraction
multi
org:med
expert-experience
summary
tradeoffs
composition-decomposition
arrows
models
thinking
top-n
lisp
minimum-viable
allodium
frontier
move-fast-(and-break-things)
JavaScript: The Modern Parts | Hacker News
8 weeks ago by nhaliday
https://medium.com/the-node-js-collection/modern-javascript-explained-for-dinosaurs-f695e9747b70
https://news.ycombinator.com/item?id=16139791
https://www.reddit.com/r/javascript/comments/a32a3a/modern_javascript_explained_for_dinosaurs/
https://stackoverflow.com/questions/35062852/npm-vs-bower-vs-browserify-vs-gulp-vs-grunt-vs-webpack
hn
commentary
techtariat
reflection
trends
javascript
programming
pls
web
frontend
state-of-art
summary
ecosystem
build-packaging
devtools
debugging
engineering
intricacy
flux-stasis
best-practices
code-organizing
multi
org:med
reddit
social
q-n-a
stackex
comparison
applicability-prereqs
tools
software
degrees-of-freedom
client-server
chart
compilers
58 Bytes of CSS to look great nearly everywhere | Hacker News
8 weeks ago by nhaliday
Author mentions this took a long time to arrive at.
I recommend "Web Design in 4 Minutes" from the CSS guru behind Bulma:
https://jgthms.com/web-design-in-4-minutes/
[ed.: lottsa sensible criticism of the above in the comments]
https://news.ycombinator.com/item?id=12166687
hn
commentary
techtariat
design
form-design
howto
web
frontend
minimum-viable
efficiency
minimalism
parsimony
move-fast-(and-break-things)
tutorial
multi
mobile
init
advice
Ask HN: Favorite note-taking software? | Hacker News
8 weeks ago by nhaliday
Ask HN: What is your ideal note-taking software and/or hardware?: https://news.ycombinator.com/item?id=13221158
my wishlist as of 2019:
- web + desktop macOS + mobile iOS (at least viewing on the last but ideally also editing)
- sync across all those
- open-source data format that's easy to manipulate for scripting purposes
- flexible organization: mostly tree hierarchical (subsuming linear/unorganized) but with the option for directed (acyclic) graph (possibly a second layer of structure/linking)
- can store plain text, LaTeX, diagrams, and raster/vector images (video prob not necessary except as links to elsewhere)
- full-text search
- somehow digest/import data from Pinboard, Workflowy, Papers 3/Bookends, and Skim, ideally absorbing most of their functionality
- so, eg, track notes/annotations side-by-side w/ original PDF/DjVu/ePub documents (to replace Papers3/Bookends/Skim), and maybe web pages too (to replace Pinboard)
- OCR of handwritten notes (how to handle equations/diagrams?)
- various forms of NLP analysis of everything (topic models, clustering, etc)
- maybe version control (less important than export)
candidates?:
- Evernote prob ruled out due to heavy use of proprietary data formats (unless I can find some way to export with tolerably clean output)
- Workflowy/Dynalist are good but only cover a subset of functionality I want
- org-mode doesn't interact w/ mobile well (and I haven't evaluated it in detail otherwise)
- TiddlyWiki/Zim are in the running, but not sure about mobile
- idk about vimwiki but I'm not that wedded to vim and it seems less widely used than org-mode/TiddlyWiki/Zim so prob pass on that
- Quiver/Joplin/Inkdrop look similar and cover a lot of bases, TODO: evaluate more
- Trilium looks especially promising, though mobile is read-only, and for the macOS desktop app see this: https://github.com/zadam/trilium/issues/511
- RocketBook is an interesting scanning/OCR solution but prob not sufficient due to its proprietary data format
- TODO: many more candidates, eg, TreeSheets, Gingko, OneNote (macOS?...), Notion (proprietary data format...), Zotero, Nodebook (https://nodebook.io/landing), Polar (https://getpolarized.io), Roam (looks very promising)
Ask HN: What do you use for your personal note taking activity?: https://news.ycombinator.com/item?id=15736102
Ask HN: What are your note-taking techniques?: https://news.ycombinator.com/item?id=9976751
Ask HN: How do you take notes (useful note-taking strategies)?: https://news.ycombinator.com/item?id=13064215
Ask HN: How to get better at taking notes?: https://news.ycombinator.com/item?id=21419478
Ask HN: How did you build up your personal knowledge base?: https://news.ycombinator.com/item?id=21332957
nice comment from math guy on structure and difference between math and CS: https://news.ycombinator.com/item?id=21338628
useful comment collating related discussions: https://news.ycombinator.com/item?id=21333383
highlights:
Designing a Personal Knowledge base: https://news.ycombinator.com/item?id=8270759
Ask HN: How to organize personal knowledge?: https://news.ycombinator.com/item?id=17892731
Do you use a personal 'knowledge base'?: https://news.ycombinator.com/item?id=21108527
Ask HN: How do you share/organize knowledge at work and life?: https://news.ycombinator.com/item?id=21310030
other stuff:
plain text: https://news.ycombinator.com/item?id=21685660
https://www.getdnote.com/blog/how-i-built-personal-knowledge-base-for-myself/
Tiago Forte: https://www.buildingasecondbrain.com
hn search: https://hn.algolia.com/?query=notetaking&type=story
Slant comparison commentary: https://news.ycombinator.com/item?id=7011281
good comparison of options here in comments here (and Trilium itself looks good): https://news.ycombinator.com/item?id=18840990
https://en.wikipedia.org/wiki/Comparison_of_note-taking_software
wikis:
https://www.slant.co/versus/5116/8768/~tiddlywiki_vs_zim
https://www.wikimatrix.org/compare/tiddlywiki+zim
http://tiddlymap.org/
https://www.zim-wiki.org/manual/Plugins/BackLinks_Pane.html
https://zim-wiki.org/manual/Plugins/Link_Map.html
apps:
Roam: https://news.ycombinator.com/item?id=21440289
intriguing but probably not appropriate for my needs: https://www.sophya.ai/
Inkdrop: https://news.ycombinator.com/item?id=20103589
Joplin: https://news.ycombinator.com/item?id=15815040
https://news.ycombinator.com/item?id=21555238
https://wreeto.com/
Leo Editor (combines tree outlining w/ literate programming/scripting, I think?): https://news.ycombinator.com/item?id=17769892
Frame: https://news.ycombinator.com/item?id=18760079
https://www.reddit.com/r/TheMotte/comments/cb18sy/anyone_use_a_personal_wiki_software_to_catalog/
https://archive.is/xViTY
Notion: https://news.ycombinator.com/item?id=18904648
https://www.reddit.com/r/slatestarcodex/comments/ap437v/modified_cornell_method_the_optimal_notetaking/
https://archive.is/e9oHu
https://www.reddit.com/r/slatestarcodex/comments/bt8a1r/im_about_to_start_a_one_month_journaling_test/
https://www.reddit.com/r/slatestarcodex/comments/9cot3m/question_how_do_you_guys_learn_things/
https://archive.is/HUH8V
https://www.reddit.com/r/slatestarcodex/comments/d7bvcp/how_to_read_a_book_for_understanding/
https://archive.is/VL2mi
Anki:
https://www.reddit.com/r/Anki/comments/as8i4t/use_anki_for_technical_books/
https://www.freecodecamp.org/news/how-anki-saved-my-engineering-career-293a90f70a73/
https://www.reddit.com/r/slatestarcodex/comments/ch24q9/anki_is_it_inferior_to_the_3x5_index_card_an/
https://archive.is/OaGc5
maybe not the best source for a review/advice
interesting comment(s) about tree outliners and spreadsheets: https://news.ycombinator.com/item?id=21170434
tablet:
https://www.inkandswitch.com/muse-studio-for-ideas.html
https://www.inkandswitch.com/capstone-manuscript.html
https://news.ycombinator.com/item?id=20255457
hn
discussion
recommendations
software
tools
desktop
app
notetaking
exocortex
wkfly
wiki
productivity
multi
comparison
crosstab
properties
applicability-prereqs
nlp
info-foraging
chart
webapp
reference
q-n-a
retention
workflow
reddit
social
ratty
ssc
learning
studying
commentary
structure
thinking
network-structure
things
collaboration
ocr
trees
graphs
LaTeX
search
todo
project
money-for-time
synchrony
pinboard
state
duplication
worrydream
simplification-normalization
links
minimalism
design
neurons
ai-control
openai
miri-cfar
parsimony
intricacy
Software Testing Anti-patterns | Hacker News
8 weeks ago by nhaliday
I haven't read this, but both the article and the commentary/discussion look interesting at a glance
hmm: https://news.ycombinator.com/item?id=16896390
In small companies where there is no time to "waste" on tests, my view is that 80% of the problems can be caught with 20% of the work by writing integration tests that cover large areas of the application. Writing unit tests would be ideal, but time-consuming. For a web project, that would involve testing all pages for HTTP 200 (< 1 hour bash script that will catch most major bugs), automatically testing most interfaces to see if filling data and clicking "save" works. Of course, for very important/dangerous/complex algorithms in the code, unit tests are useful, but generally, that represents a very low fraction of a web application's code.
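[ed.: the quoted "test all pages for HTTP 200" idea as a short Python smoke test instead of a bash script; BASE_URL and PATHS are placeholders for whatever the app actually serves:]
```python
# minimal smoke test: every listed page should respond with HTTP 200
import sys
import requests

BASE_URL = "http://localhost:8000"
PATHS = ["/", "/login", "/dashboard", "/settings", "/admin"]

failures = []
for path in PATHS:
    try:
        status = requests.get(BASE_URL + path, timeout=10).status_code
    except requests.RequestException as exc:
        status = f"error: {exc}"
    ok = status == 200
    print(("OK  " if ok else "FAIL"), path, status)
    if not ok:
        failures.append(path)

sys.exit(1 if failures else 0)  # non-zero exit so CI can flag the failure
```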
hn
commentary
techtariat
discussion
programming
engineering
methodology
best-practices
checklists
thinking
correctness
api
interface-compatibility
jargon
list
metabuch
objektbuch
workflow
documentation
debugging
span-cover
checking
metrics
abstraction
within-without
characterization
error
move-fast-(and-break-things)
minimum-viable
efficiency
multi
poast
pareto
coarse-fine
Zettelkästen? | Hacker News
8 weeks ago by nhaliday
Here’s a LessWrong post that describes it (including the insight “I honestly didn’t think Zettelkasten sounded like a good idea before I tried it” which I also felt).
yeah doesn't sound like a good idea to me either. idk
hn
commentary
techtariat
germanic
productivity
workflow
notetaking
exocortex
gtd
explore-exploit
business
comparison
academia
tech
ratty
lesswrong
idk
thinking
neurons
network-structure
software
tools
app
metabuch
writing
trees
graphs
skeleton
meta:reading
wkfly
worrydream
The Future of Mathematics? [video] | Hacker News
9 weeks ago by nhaliday
https://news.ycombinator.com/item?id=20909404
Kevin Buzzard (the Lean guy)
- general reflection on proof assistants/theorem provers
- Thomas Hales' Formal Abstracts project, etc.
- thinks that, of the available theorem provers, Lean is "[the only one currently available that may be capable of formalizing all of mathematics eventually]" (goes into more detail right at the end, e.g., quotient types)
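[ed.: for a sense of what a formalized statement looks like, a trivial Lean 3 example of my own; nothing to do with Buzzard's actual formalization work:]
```lean
-- a toy Lean 3 statement and proof, just to show the flavor of the language
theorem add_comm_example (a b : ℕ) : a + b = b + a :=
nat.add_comm a b
```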
hn
commentary
discussion
video
talks
presentation
math
formal-methods
expert-experience
msr
frontier
state-of-art
proofs
rigor
education
higher-ed
optimism
prediction
lens
search
meta:research
speculation
exocortex
skunkworks
automation
research
math.NT
big-surf
software
parsimony
cost-benefit
intricacy
correctness
programming
pls
python
functional
haskell
heavyweights
research-program
review
reflection
multi
pdf
slides
oly
experiment
span-cover
git
vcs
teaching
impetus
academia
composition-decomposition
coupling-cohesion
database
trust
types
plt
lifts-projections
induction
critique
beauty
truth
elegance
aesthetics
The State of Machine Learning Frameworks [ed.: prev: PyTorch dominates research, Tensorflow dominates industry] | Hacker News
9 weeks ago by nhaliday
thegradient.pub looks interesting
hn
commentary
techtariat
acmtariat
org:popup
nibble
org:bleg
comparison
deep-learning
libraries
machine-learning
python
software
trends
data-science
sci-comp
tools
google
facebook
tech
working-stiff
best-practices
ecosystem
academia
theory-practice
pragmatic
wire-guided
static-dynamic
state
parsimony
api
flux-stasis
ubiquity
performance
cloud
saas
tech-infrastructure
business
incentives
prediction
frameworks
My Conversation with Paul Romer - Marginal REVOLUTION
econotariat marginal-rev org:med interview commentary economics growth-econ developing-world paul-romer cultural-dynamics culture history age-of-discovery conquest-empire expansionism usa pennsylvania the-south northeast anglo language stagnation innovation cjones-like discovery microfoundations religion institutions leviathan government speedometer education higher-ed science academia writing meta:reading cost-benefit grokkability-clarity communication china asia sinosphere technology complex-systems meta:prediction flux-stasis foreign-lang simplification-normalization
9 weeks ago by nhaliday
Measuring actual learning versus feeling of learning in response to being actively engaged in the classroom | PNAS
9 weeks ago by nhaliday
This article addresses the long-standing question of why students and faculty remain resistant to active learning. Comparing passive lectures with active learning using a randomized experimental approach and identical course materials, we find that students in the active classroom learn more, but they feel like they learn less. We show that this negative correlation is caused in part by the increased cognitive effort required during active learning.
https://news.ycombinator.com/item?id=21164005
study
org:nat
psychology
cog-psych
education
learning
studying
teaching
productivity
higher-ed
cost-benefit
aversion
🦉
growth
stamina
multi
hn
commentary
sentiment
thinking
neurons
wire-guided
emotion
subjective-objective
self-report
objective-measure
Google AI Blog: Introducing a New Framework for Flexible and Reproducible Reinforcement Learning Research
9 weeks ago by nhaliday
nice resources for learning RL in HN comments: https://news.ycombinator.com/item?id=19170294
techtariat
org:com
google
acmtariat
deepgoog
org:bleg
nibble
machine-learning
deep-learning
libraries
project
reinforcement
replication
benchmarks
move-fast-(and-break-things)
research
multi
hn
commentary
links
recommendations
init
video
lectures
books
2019 Growth Theory Conference - May 11-12 | Economics Department at Brown University
10 weeks ago by nhaliday
Guillaume Blanc (Brown) and Romain Wacziarg (UCLA and NBER), "Change and Persistence in the Age of Modernization: Saint-Germain-d’Anxure, 1730-1895"
Figure 4.1.1.1 – Fertility
Figure 4.2.1.1 – Mortality
Figure 5.1.0.1 – Literacy
https://twitter.com/GarettJones/status/1127999888359346177
https://archive.is/1EnZg
Short pre-modern lives weren't overwhelmingly about infant mortality:
From this weekend's excellent Deep Roots conference at @Brown_Economics, new evidence from a small French town, an ancestral home of coauthor Romain Wacziarg:
--
European Carpe Diem poems made a lot more sense when 20-year-olds were halfway done with life:
...
--
...
N.B. that's not a correction at all, it's telling the same story as the above figure:
Conditioned on surviving childhood, usually living to less than 50 years total in 1750s France and in medieval times.
study
economics
broad-econ
cliometrics
demographics
history
early-modern
europe
gallic
fertility
longevity
mobility
human-capital
garett-jones
writing
class
data
time-series
demographic-transition
regularizer
lived-experience
gender
gender-diff
pro-rata
trivia
cocktail
econotariat
twitter
social
backup
commentary
poetry
medieval
modernity
alien-character
10 weeks ago by nhaliday
T. Greer on Twitter: "Huang Qifan gave a speech on the trade war a few days ago . It is eye opening. The ideas in it aren't really new, but they are expressed with such frankness (+with so little Communist cant) that I triple checked this guy is who I tho
11 weeks ago by nhaliday
https://archive.is/Cn2H7
https://archive.is/DRUAN
https://twitter.com/Scholars_Stage/status/1176333938299691009
And here is Marco Rubio in the US Senate.... giving a direct response to Huang Qifan's speech
https://www.youtube.com/watch?v=8628xhN-r34
https://archive.is/8SHqT
twitter
social
discussion
unaffiliated
wonkish
broad-econ
backup
current-events
china
asia
trade
nationalism-globalism
politics
ideology
world
comparison
japan
track-record
government
flux-stasis
social-choice
leadership
statesmen
communism
authoritarianism
commentary
summary
usa
foreign-policy
realpolitik
great-powers
video
thucydides
self-interest
cooperate-defect
power
11 weeks ago by nhaliday
Measures of cultural distance - Marginal REVOLUTION
12 weeks ago by nhaliday
A new paper with many authors — most prominently Joseph Henrich — tries to measure the cultural gaps between different countries. I am reproducing a few of their results (see pp.36-37 for more), noting that higher numbers represent higher gaps:
...
Overall the numbers show much greater cultural distance of other nations from China than from the United States, a significant and under-discussed problem for China. For instance, the United States is about as culturally close to Hong Kong as China is.
[ed.: Japan is closer to the US than China. Interesting. I'd like to see some data based on something other than self-reported values though.]
the study:
Beyond WEIRD Psychology: Measuring and Mapping Scales of Cultural and Psychological Distance: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3259613
We present a new tool that provides a means to measure the psychological and cultural distance between two societies and create a distance scale with any population as the point of comparison. Since psychological data is dominated by samples drawn from the United States or other WEIRD nations, this tool provides a “WEIRD scale” to assist researchers in systematically extending the existing database of psychological phenomena to more diverse and globally representative samples. As the extreme WEIRDness of the literature begins to dissolve, the tool will become more useful for designing, planning, and justifying a wide range of comparative psychological projects. We have made our code available and developed an online application for creating other scales (including the “Sino scale” also presented in this paper). We discuss regional diversity within nations showing the relative homogeneity of the United States. Finally, we use these scales to predict various psychological outcomes.
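[ed.: a toy sketch of the general idea only — this is not the paper's actual CFst measure, and the numbers below are invented for illustration: treat each society as a set of answer distributions over shared survey questions and average a per-question distance.]
    # Toy cultural-distance sketch (hypothetical data, simplified metric):
    # average total-variation distance between two societies' answer
    # distributions on shared survey questions.
    # Inputs: {question: {answer: proportion}}.
    def cultural_distance(a, b):
        shared = a.keys() & b.keys()
        total = 0.0
        for q in shared:
            answers = a[q].keys() | b[q].keys()
            tv = 0.5 * sum(abs(a[q].get(x, 0.0) - b[q].get(x, 0.0)) for x in answers)
            total += tv
        return total / len(shared)

    usa   = {"trust_most_people": {"yes": 0.38, "no": 0.62}}  # made-up numbers
    china = {"trust_most_people": {"yes": 0.60, "no": 0.40}}  # made-up numbers
    print(cultural_distance(usa, china))  # 0.22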
econotariat
marginal-rev
henrich
commentary
study
summary
list
data
measure
metrics
similarity
culture
cultural-dynamics
sociology
things
world
usa
anglo
anglosphere
china
asia
japan
sinosphere
russia
developing-world
canada
latin-america
MENA
europe
eastern-europe
germanic
comparison
great-powers
thucydides
foreign-policy
the-great-west-whale
generalization
anthropology
within-group
homo-hetero
moments
exploratory
phalanges
the-bones
🎩
🌞
broad-econ
cocktail
n-factor
measurement
expectancy
distribution
self-report
values
expression-survival
uniqueness
12 weeks ago by nhaliday
Mars Direct | West Hunter
september 2019 by nhaliday
Send Mr Bezos. He even looks like a Martian.
--
Throw in Zuckerberg and it’s a deal…
--
We could send twice as many people half-way to Mars.
--
I don’t think that the space station has been worth anything at all.
As for a lunar base, many of the issues are difficult and one ( effects of low-gee) is probably impossible to solve.
I don’t think that there are real mysteries about what is needed for a kind-of self-sufficient base – it’s just too hard and there’s not much prospect of a payoff.
That said, there may be other ways of going about this that are more promising.
--
Venus is worth terraforming: no gravity problems. Doable.
--
It’s not impossible that Mars might harbor microbial life – with some luck, life with a different chemical basis. That might be very valuable: there are endless industrial processes that depend upon some kind of fermentation.
Why, without acetone fermentation, there might not be a state of Israel.
--
If we used a reasonable approach, like Orion, I think that people would usefully supplement those robots.
https://westhunt.wordpress.com/2019/01/11/the-great-divorce/
Jeff Bezos isn’t my favorite guy, but he has ability and has built something useful. And an ugly, contested divorce would be harsh and unfair to the children, who have done nothing wrong.
But I don’t care. The thought of tens of billions of dollars being spent on lawyers and PIs offers the possibility of a spectacle that will live forever, far wilder than the antics of Nero or Caligula. It could make Suetonius look like Pilgrim’s Progress.
Have you ever wondered whether tens of thousands of divorce lawyers should be organized into legions or phalanxes? This is our chance to finally find out.
west-hunter
scitariat
commentary
current-events
trump
politics
troll
space
expansionism
frontier
cost-benefit
ideas
speculation
roots
deep-materialism
definite-planning
geoengineering
wild-ideas
gravity
barons
amazon
facebook
sv
tech
government
debate
critique
physics
mechanics
robotics
multi
lol
law
responsibility
drama
beginning-middle-end
direct-indirect
september 2019 by nhaliday
Pin Dancing: The answer to "Will you mentor me?" is
august 2019 by nhaliday
https://news.ycombinator.com/item?id=20715136
https://jakeseliger.com/2010/10/02/how-to-get-your-professors’-attention-or-how-to-get-the-coaching-and-mentorship-you-need/
techtariat
learning
growth
discipline
reflection
critique
:/
the-monster
ai
robotics
india
asia
working-stiff
communication
transitions
progression
advice
hn
commentary
multi
academia
success
humility
writing
literature
letters
🦉
august 2019 by nhaliday
Fixing the computer guy posture [pdf] | Hacker News
august 2019 by nhaliday
also some discussion of RSI in the comments
hn
commentary
health
embodied
human-bean
ergo
get-fit
working-stiff
todo
fitsci
evidence-based
august 2019 by nhaliday
Organizing complexity is the most important skill in software development | Hacker News
july 2019 by nhaliday
- John D. Cook
https://news.ycombinator.com/item?id=9758063
Organization is the hardest part for me personally in getting better as a developer. How to build a structure that is easy to change and extend. Any tips where to find good books or online sources?
hn
commentary
techtariat
reflection
lens
engineering
programming
software
intricacy
parsimony
structure
coupling-cohesion
composition-decomposition
multi
poast
books
recommendations
abstraction
complex-systems
system-design
design
code-organizing
human-capital
july 2019 by nhaliday
Inventor CEOs - Marginal REVOLUTION
july 2019 by nhaliday
One in five U.S. high-technology firms are led by CEOs with hands-on innovation experience as inventors. Firms led by “Inventor CEOs” are associated with higher quality innovation, especially when the CEO is a high-impact inventor. During an Inventor CEO’s tenure, firms file a greater number of patents and more valuable patents in technology classes where the CEO’s hands-on experience lies. Utilizing plausibly exogenous CEO turnovers to address the matching of CEOs to firms suggests these effects are causal. The results can be explained by an Inventor CEO’s superior ability to evaluate, select, and execute innovative investment projects related to their own hands-on experience.
econotariat
marginal-rev
commentary
study
summary
economics
industrial-org
management
leadership
the-world-is-just-atoms
realness
nitty-gritty
innovation
novelty
business
growth-econ
ability-competence
intellectual-property
july 2019 by nhaliday
The Scholar's Stage: Book Notes—Strategy: A History
july 2019 by nhaliday
https://twitter.com/Scholars_Stage/status/1151681120787816448
https://archive.is/Bp5eu
Freedman's book is something of a shadow history of Western intellectual thought between 1850 and 2010. Marx, Tolstoy, Foucault, game theorists, economists, business law--it is all in there.
Thus the thoughts prompted by this book have surprisingly little to do with war.
Instead I am left with questions about the long-term trajectory of Western thought. Specifically:
*Has America really dominated Western intellectual life in the post 45 world as much as English speakers seem to think it has?
*Has the professionalization/credential-ization of Western intellectual life helped or harmed our ability to understand society?
*Will we ever recover from the 1960s?
wonkish
unaffiliated
broad-econ
books
review
reflection
summary
strategy
war
higher-ed
academia
social-science
letters
organizing
nascent-state
counter-revolution
rot
westminster
culture-war
left-wing
anglosphere
usa
history
mostly-modern
coordination
lens
local-global
europe
gallic
philosophy
cultural-dynamics
anthropology
game-theory
industrial-org
schelling
flux-stasis
trends
culture
iraq-syria
MENA
military
frontier
info-dynamics
big-peeps
politics
multi
twitter
social
commentary
backup
defense
july 2019 by nhaliday
Panel: Systems Programming in 2014 and Beyond | Lang.NEXT 2014 | Channel 9
july 2019 by nhaliday
- Bjarne Stroustrup, Niko Matsakis, Andrei Alexandrescu, Rob Pike
- 2014 so pretty outdated but rare to find a discussion with people like this together
- pretty sure Jonathan Blow asked a couple questions
- Rob Pike compliments Rust at one point. Also kinda softly rags on dynamic typing at one point ("unit testing is what they have instead of static types").
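[ed.: a tiny, hedged illustration of the Pike quip using Python type hints — the function and test names are made up: a static checker such as mypy rejects the bad call before anything runs, while untyped code needs a test (a runtime failure) to catch the same mistake.]
    from typing import List

    def total_cents(prices: List[float]) -> int:
        return int(round(sum(prices) * 100))

    # A static checker (e.g. mypy) flags the call below at check time;
    # without annotations, the mistake only surfaces when a test runs it.
    def test_rejects_string_prices():
        try:
            total_cents(["1.50", "2.25"])  # strings, not floats
        except TypeError:
            pass
        else:
            raise AssertionError("expected TypeError for string inputs")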
related:
What is Systems Programming, Really?: http://willcrichton.net/notes/systems-programming/
https://news.ycombinator.com/item?id=17948265
https://news.ycombinator.com/item?id=21731878
video
presentation
debate
programming
pls
c(pp)
systems
os
rust
d-lang
golang
computer-memory
legacy
devtools
formal-methods
concurrency
compilers
syntax
parsimony
google
intricacy
thinking
cost-benefit
degrees-of-freedom
facebook
performance
people
rsc
cracker-prog
critique
types
checking
api
flux-stasis
engineering
time
wire-guided
worse-is-better/the-right-thing
static-dynamic
latency-throughput
techtariat
multi
plt
hn
commentary
metal-to-virtual
functional
abstraction
contrarianism
jargon
definition
characterization
reflection
july 2019 by nhaliday
Home is a small, engineless sailboat (2018) | Hacker News
july 2019 by nhaliday
Her deck looked disorderly; metal pipes lying on either side of the cabin, what might have been a bed sheet or sail cover (or one in the same) bunched between oxidized turnbuckles and portlights. A purple hula hoop. A green bucket. Several small, carefully potted plants. At the stern, a weathered tree limb lashed to a metal cradle – the arm of a sculling oar. There was no motor. The transom was partially obscured by a wind vane and Alexandra’s years of exposure to the elements were on full display.
...
Sean is a programmer, a fervent believer in free open source code – software programs available to the public to use and/or modify free of charge. His only computer is the Raspberry Pi he uses to code and control his autopilot, which he calls pypilot. Sean is also a programmer for and regular contributor to OpenCPN Chart Plotter Navigation, free open source software for cruisers. “I mostly write the graphics or the way it draws the chart, but a lot more than that, like how it draws the weather patterns and how it can calculate routes, like you should sail this way.”
from the comments:
Have also read both; they're fascinating in different ways. Paul Lutus has a boat full of technology (diesel engine, laptop, radio, navigation tools, and more) but his book is an intensely - almost uncomfortably - personal voyage through his psyche, while he happens to be sailing around the world. A diary of reflections on life, struggles with people, views on science, observations on the stars and sky and waves, poignant writing on how being at sea affect people, while he happens to be sailing around the world. It's better for that, more relatable as a geek, sadder and more emotional; I consider it a good read, and I reflect on it a lot.
Captain Slocum's voyage of 1896(?) is so different; he took an old clock, and not much else, he lashes the tiller and goes down below for hours at a time to read or sleep without worrying about crashing into other boats, he tells stories of mouldy cheese induced nightmares during rough seas or chasing natives away from robbing him, or finding remote islands with communities of slightly odd people. Much of his writing is about the people he meets - they often know in advance he's making a historic voyage, so when he arrives anywhere, there's a big fuss, he's invited to dine with local dignitaries or captains of large ships, gifted interesting foods and boat parts, there's a lot of interesting things about the world of 1896. (There's also quite a bit of tedious place names and locations and passages where nothing much happens, I'm not that interested in the geography of it).
hn
commentary
oceans
books
reflection
stories
track-record
world
minimum-viable
dirty-hands
links
frontier
allodium
prepping
navigation
oss
hacker
july 2019 by nhaliday
Integrated vs type based shrinking - Hypothesis
july 2019 by nhaliday
The big difference is whether shrinking is integrated into generation.
In Haskell’s QuickCheck, shrinking is defined based on types: Any value of a given type shrinks the same way, regardless of how it is generated. In Hypothesis, test.check, etc. instead shrinking is part of the generation, and the generator controls how the values it produces shrinks (this works differently in Hypothesis and test.check, and probably differently again in EQC, but the user visible result is largely the same)
This is not a trivial distinction. Integrating shrinking into generation has two large benefits:
- Shrinking composes nicely, and you can shrink anything you can generate regardless of whether there is a defined shrinker for the type produced.
- You can _guarantee that shrinking satisfies the same invariants as generation_.
The first is mostly important from a convenience point of view: Although there are some things it lets you do that you can’t do in the type based approach, they’re mostly of secondary importance. It largely just saves you from the effort of having to write your own shrinkers.
But the second is really important, because the lack of it makes your test failures potentially extremely confusing.
...
[example: even_numbers = integers().map(lambda x: x * 2)]
...
In this example the problem was relatively obvious and so easy to work around, but as your invariants get more implicit and subtle it becomes really problematic: In Hypothesis it’s easy and convenient to generate quite complex data, and trying to recreate the invariants that are automatically satisfied with that in your tests and/or your custom shrinkers would quickly become a nightmare.
I don’t think it’s an accident that the main systems to get this right are in dynamic languages. It’s certainly not essential - the original proposal that led to the implementation for test.check was for Haskell, and Jack is an alternative property based system for Haskell that does this - but you feel the pain much more quickly in dynamic languages because the typical workaround for this problem in Haskell is to define a newtype, which lets you turn off the default shrinking for your types and possibly define your own.
But that’s a workaround for a problem that shouldn’t be there in the first place, and using it will still result in your having to encode the invariants into your shrinkers, which is more work and more brittle than just having it work automatically.
So although (as far as I know) none of the currently popular property based testing systems for statically typed languages implement this behaviour correctly, they absolutely can and they absolutely should. It will improve users’ lives significantly.
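[ed.: a minimal runnable version of the even_numbers example above, using Hypothesis's actual strategies API (st.integers().map); the deliberately failing property and the claim about what it shrinks to are my own addition for illustration. Run with pytest.]
    from hypothesis import given, strategies as st

    # Shrinking is integrated into generation: Hypothesis shrinks the
    # underlying integer and re-applies the map, so shrunk values stay even.
    even_numbers = st.integers().map(lambda x: x * 2)

    @given(even_numbers)
    def test_stays_small(n):
        # Deliberately failing property: the reported counterexample is a
        # shrunk value that is still even (typically n == 100), never an
        # odd number that the generator could not have produced.
        assert n < 100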
https://hypothesis.works/articles/compositional-shrinking/
In my last article about shrinking, I discussed the problems with basing shrinking on the type of the values to be shrunk.
In writing it though I forgot that there was a halfway house which is also somewhat bad (but significantly less so) that you see in a couple of implementations.
This is when the shrinking is not type based, but still follows the classic shrinking API that takes a value and returns a lazy list of shrinks of that value. Examples of libraries that do this are theft and QuickTheories.
This works reasonably well and solves the major problems with type directed shrinking, but it’s still somewhat fragile and importantly does not compose nearly as well as the approaches that Hypothesis or test.check take.
Ideally, as well as not being based on the types of the values being generated, shrinking should not be based on the actual values generated at all.
This may seem counter-intuitive, but it actually works pretty well.
...
We took a strategy and composed it with a function mapping over the values that that strategy produced to get a new strategy.
Suppose the Hypothesis strategy implementation looked something like the following:
...
i.e. we can generate a value and we can shrink a value that we’ve previously generated. By default we don’t know how to generate values (subclasses have to implement that) and we can’t shrink anything, which subclasses are able to fix if they want or leave as is if they’re fine with that.
(This is in fact how a very early implementation of it looked)
This is essentially the approach taken by theft or QuickTheories, and the problem with it is that under this implementation the ‘map’ function we used above is impossible to define in a way that preserves shrinking: In order to shrink a generated value, you need some way to invert the function you’re composing with (which is in general impossible even if your language somehow exposed the facilities to do it, which it almost certainly doesn’t) so you could take the generated value, map it back to the value that produced it, shrink that and then compose with the mapping function.
...
The key idea for fixing this is as follows: In order to shrink outputs it almost always suffices to shrink inputs. Although in theory you can get functions where simpler input leads to more complicated output, in practice this seems to be rare enough that it’s OK to just shrug and accept more complicated test output in those cases.
Given that, the _way to shrink the output of a mapped strategy is to just shrink the value generated from the first strategy and feed it to the mapping function_.
Which means that you need an API that can support that sort of shrinking.
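[ed.: a deliberately simplified Python sketch of the contrast being described — the Strategy/MappedStrategy names are made up, and real Hypothesis shrinks an underlying choice sequence rather than remembering the last input; the point is only that shrinking f(x) by shrinking x and re-applying f needs no inverse of f.]
    import random

    class Strategy:
        # Classic value-based shape (theft / QuickTheories style):
        # generate a value, and shrink a previously generated value.
        def generate(self, rnd): raise NotImplementedError
        def shrink(self, value): return iter(())  # default: cannot shrink

    class Integers(Strategy):
        def generate(self, rnd): return rnd.randint(-1000, 1000)
        def shrink(self, value):
            # shrink toward zero
            while value != 0:
                value = int(value / 2)
                yield value

    class MappedStrategy(Strategy):
        # The fix described above: to shrink f(x), shrink the remembered
        # input x and re-apply f -- no need to invert f on the output.
        def __init__(self, base, f):
            self.base, self.f = base, f
        def generate(self, rnd):
            self.last_input = self.base.generate(rnd)
            return self.f(self.last_input)
        def shrink(self, value):
            for smaller in self.base.shrink(self.last_input):
                yield self.f(smaller)

    even = MappedStrategy(Integers(), lambda x: x * 2)
    v = even.generate(random.Random(0))
    print(v, list(even.shrink(v)))  # every shrunk candidate is still even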
https://hypothesis.works/articles/types-and-properties/
This happens a lot: Frequently there are properties that only hold in some restricted domain, and so you want more specific tests for that domain to complement your other tests for the larger range of data.
When this happens you need tools to generate something more specific, and those requirements don’t map naturally to types.
[ed.: Some examples of how this idea can be useful:
Have a type but want to test different distributions on it for different purposes. Eg, comparing worst-case and average-case guarantees for benchmarking time/memory complexity. Comparing a slow and fast implementation on small input sizes, then running some sanity checks for the fast implementation on large input sizes beyond what the slow implementation can handle.]
...
In Haskell, traditionally we would fix this with a newtype declaration which wraps the type. We could find a newtype NonEmptyList and a newtype FiniteFloat and then say that we actually wanted a NonEmptyList[FiniteFloat] there.
...
But why should we bother? Especially if we’re only using these in one test, we’re not actually interested in these types at all, and it just adds a whole bunch of syntactic noise when you could just pass the data generators directly. Defining new types for the data you want to generate is purely a workaround for a limitation of the API.
If you were working in a dependently typed language where you could already naturally express this in the type system it might be OK (I don’t have any direct experience of working in type systems that strong), but I’m sceptical of being able to make it work well - you’re unlikely to be able to automatically derive data generators in the general case, because the needs of data generation “go in the opposite direction” from types (a type is effectively a predicate which consumes a value, where a data generator is a function that produces a value, so in order to produce a generator for a type automatically you need to basically invert the predicate). I suspect most approaches here will leave you with a bunch of sharp edges, but I would be interested to see experiments in this direction.
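[ed.: what "just pass the data generators directly" looks like in current Hypothesis — st.lists(..., min_size=1) over st.floats(allow_nan=False, allow_infinity=False) plays the role of NonEmptyList[FiniteFloat] with no wrapper types; the small property below is my own example.]
    from hypothesis import given, strategies as st

    # The restricted domain is expressed by composing generators,
    # not by defining new types.
    finite_floats = st.floats(allow_nan=False, allow_infinity=False)
    non_empty_finite_lists = st.lists(finite_floats, min_size=1)

    @given(non_empty_finite_lists)
    def test_min_not_greater_than_max(xs):
        # Holds because the list is non-empty (min/max are defined) and
        # NaN is excluded (comparisons are meaningful).
        assert min(xs) <= max(xs)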
https://www.reddit.com/r/haskell/comments/646k3d/ann_hedgehog_property_testing/dg1485c/
techtariat
rhetoric
rant
programming
libraries
pls
types
functional
haskell
python
random
checking
design
critique
multi
composition-decomposition
api
reddit
social
commentary
system-design
arrows
lifts-projections
DSL
static-dynamic
july 2019 by nhaliday
Cleaner, more elegant, and harder to recognize | The Old New Thing
july 2019 by nhaliday
Really easy:
- Writing bad error-code-based code
- Writing bad exception-based code
Hard:
- Writing good error-code-based code
Really hard:
- Writing good exception-based code
--
Really easy:
- Recognizing that error-code-based code is badly-written
- Recognizing the difference between bad error-code-based code and not-bad error-code-based code
Hard:
- Recognizing that error-code-based code is not badly-written
Really hard:
- Recognizing that exception-based code is badly-written
- Recognizing that exception-based code is not badly-written
- Recognizing the difference between bad exception-based code and not-bad exception-based code
https://ra3s.com/wordpress/dysfunctional-programming/2009/07/15/return-code-vs-exception-handling/
https://nedbatchelder.com/blog/200501/more_exception_handling_debate.html
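[ed.: Chen's post is about Win32-style C code; here is a hedged Python analogue of the same point, with parse() as a made-up stand-in: the exception-based version that reads most cleanly leaks its temp file whenever parse() throws, and nothing in its appearance betrays that.]
    import os, tempfile

    def parse(path):
        # Hypothetical parser stub: raises ValueError on bad input.
        with open(path) as f:
            text = f.read()
        if "=" not in text:
            raise ValueError("not a key=value file")
        return dict(line.split("=", 1) for line in text.splitlines() if line)

    def load_config_bad(text):
        # "Cleaner, more elegant" -- and wrong: if parse() raises, the temp
        # file is never deleted, and nothing on the page shows that.
        fd, path = tempfile.mkstemp()
        with os.fdopen(fd, "w") as f:
            f.write(text)
        config = parse(path)
        os.unlink(path)
        return config

    def load_config_ok(text):
        # Good exception-based code has to account for every exit path.
        fd, path = tempfile.mkstemp()
        try:
            with os.fdopen(fd, "w") as f:
                f.write(text)
            return parse(path)
        finally:
            os.unlink(path)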
techtariat
org:com
microsoft
working-stiff
pragmatic
carmack
error
error-handling
programming
rhetoric
debate
critique
pls
search
structure
cost-benefit
comparison
summary
intricacy
certificates-recognition
commentary
multi
contrarianism
correctness
quality
code-dive
cracker-prog
july 2019 by nhaliday
How to work with GIT/SVN — good practices - Jakub Kułak - Medium
june 2019 by nhaliday
best part of this is the links to other guides
Commit Often, Perfect Later, Publish Once: https://sethrobertson.github.io/GitBestPractices/
My Favourite Git Commit: https://news.ycombinator.com/item?id=21289827
I use the following convention to start the subject of a commit (posted by someone in a similar HN thread):
...
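[ed.: the convention itself is elided above; purely for illustration, one widely used pattern ("Conventional Commits", not necessarily what the poster meant) starts the subject with a type and optional scope:]
    feat(api): add pagination to the /users endpoint
    fix(parser): handle empty input without crashing
    docs(readme): clarify build prerequisites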
org:med
techtariat
tutorial
faq
guide
howto
workflow
devtools
best-practices
vcs
git
engineering
programming
multi
reference
org:junk
writing
technical-writing
hn
commentary
jargon
list
objektbuch
examples
analysis
june 2019 by nhaliday
Less is exponentially more
june 2019 by nhaliday
https://news.ycombinator.com/item?id=16548684
https://news.ycombinator.com/item?id=6417319
https://news.ycombinator.com/item?id=4158865
https://aras-p.info/blog/2018/12/28/Modern-C-Lamentations/
https://thephd.github.io/perspective-standardization-in-2018
https://sean-parent.stlab.cc/2018/12/30/cpp-ruminations.html
http://ericniebler.com/2018/12/05/standard-ranges/
techtariat
rsc
worse-is-better/the-right-thing
blowhards
diogenes
reflection
rhetoric
c(pp)
systems
programming
pls
plt
types
thinking
engineering
nitty-gritty
stories
stock-flow
network-structure
arrows
composition-decomposition
comparison
jvm
golang
degrees-of-freedom
roots
performance
hn
commentary
multi
ideology
intricacy
parsimony
minimalism
tradeoffs
impetus
design
google
python
cracker-prog
aphorism
science
critique
classification
characterization
examples
subculture
culture
grokkability
incentives
interests
latency-throughput
grokkability-clarity
june 2019 by nhaliday
Machine Learning: The High Interest Credit Card of Technical Debt (2014) | Hacker News
june 2019 by nhaliday
I have this in Papers3. Really should read it sometime.
hn
commentary
papers
google
machine-learning
data-science
engineering
thinking
metabuch
intricacy
nitty-gritty
aversion
reflection
debt
analogy
cost-benefit
time-preference
discipline
analysis
roots
things
tradeoffs
investing
long-short-run
system-design
big-picture
quality
best-practices
methodology
june 2019 by nhaliday
A Taxonomy of Technical Debt | Hacker News
techtariat hn commentary reflection programming engineering nitty-gritty aversion debt analogy cost-benefit time-preference discipline characterization analysis composition-decomposition things classification tech tradeoffs investing long-short-run games thinking metrics spreading metabuch time impact prioritizing models local-global stories examples legacy code-dive system-design big-picture quality
june 2019 by nhaliday
C++ Core Guidelines
june 2019 by nhaliday
This document is a set of guidelines for using C++ well. The aim of this document is to help people to use modern C++ effectively. By “modern C++” we mean effective use of the ISO C++ standard (currently C++17, but almost all of our recommendations also apply to C++14 and C++11). In other words, what would you like your code to look like in 5 years’ time, given that you can start now? In 10 years’ time?
https://isocpp.github.io/CppCoreGuidelines/
“Within C++ is a smaller, simpler, safer language struggling to get out.” – Bjarne Stroustrup
...
The guidelines are focused on relatively higher-level issues, such as interfaces, resource management, memory management, and concurrency. Such rules affect application architecture and library design. Following the rules will lead to code that is statically type safe, has no resource leaks, and catches many more programming logic errors than is common in code today. And it will run fast - you can afford to do things right.
We are less concerned with low-level issues, such as naming conventions and indentation style. However, no topic that can help a programmer is out of bounds.
Our initial set of rules emphasize safety (of various forms) and simplicity. They may very well be too strict. We expect to have to introduce more exceptions to better accommodate real-world needs. We also need more rules.
...
The rules are designed to be supported by an analysis tool. Violations of rules will be flagged with references (or links) to the relevant rule. We do not expect you to memorize all the rules before trying to write code.
contrary:
https://aras-p.info/blog/2018/12/28/Modern-C-Lamentations/
This will be a long wall of text, and kinda random! My main points are:
1. C++ compile times are important,
2. Non-optimized build performance is important,
3. Cognitive load is important. I don’t expand much on this here, but if a programming language or a library makes me feel stupid, then I’m less likely to use it or like it. C++ does that a lot :)
programming
engineering
pls
best-practices
systems
c(pp)
guide
metabuch
objektbuch
reference
cheatsheet
elegance
frontier
libraries
intricacy
advanced
advice
recommendations
big-picture
novelty
lens
philosophy
state
error
types
concurrency
memory-management
performance
abstraction
plt
compilers
expert-experience
multi
checking
devtools
flux-stasis
safety
system-design
techtariat
time
measure
dotnet
comparison
examples
build-packaging
thinking
worse-is-better/the-right-thing
cost-benefit
tradeoffs
essay
commentary
oop
correctness
computer-memory
error-handling
resources-effects
latency-throughput
june 2019 by nhaliday
Boring languages
june 2019 by nhaliday
Choose Boring Technology: http://boringtechnology.club/
https://news.ycombinator.com/item?id=20323246
techtariat
dan-luu
list
links
examples
programming
engineering
pls
contrarianism
worse-is-better/the-right-thing
regularizer
hardware
c(pp)
os
dbs
caching
editors
desktop
terminal
git
vcs
yak-shaving
huge-data-the-biggest
debate
critique
jvm
rust
ocaml-sml
dotnet
top-n
tradeoffs
cost-benefit
pragmatic
ubiquity
multi
hn
commentary
slides
nitty-gritty
carmack
shipping
working-stiff
tech
frontier
uncertainty
debugging
correctness
measure
comparison
best-practices
software
intricacy
degrees-of-freedom
minimalism
graphs
analogy
optimization
models
thinking
prioritizing
ecosystem
attention
bounded-cognition
tech-infrastructure
cynicism-idealism
june 2019 by nhaliday
An Efficiency Comparison of Document Preparation Systems Used in Academic Research and Development
june 2019 by nhaliday
The choice of an efficient document preparation system is an important decision for any academic researcher. To assist the research community, we report a software usability study in which 40 researchers across different disciplines prepared scholarly texts with either Microsoft Word or LaTeX. The probe texts included simple continuous text, text with tables and subheadings, and complex text with several mathematical equations. We show that LaTeX users were slower than Word users, wrote less text in the same amount of time, and produced more typesetting, orthographical, grammatical, and formatting errors. On most measures, expert LaTeX users performed even worse than novice Word users. LaTeX users, however, more often report enjoying using their respective software. We conclude that even experienced LaTeX users may suffer a loss in productivity when LaTeX is used, relative to other document preparation systems. Individuals, institutions, and journals should carefully consider the ramifications of this finding when choosing document preparation strategies, or requiring them of authors.
...
However, our study suggests that LaTeX should be used as a document preparation system only in cases in which a document is heavily loaded with mathematical equations. For all other types of documents, our results suggest that LaTeX reduces the user’s productivity and results in more orthographical, grammatical, and formatting errors, more typos, and less written text than Microsoft Word over the same duration of time. LaTeX users may argue that the overall quality of the text that is created with LaTeX is better than the text that is created with Microsoft Word. Although this argument may be true, the differences between text produced in more recent editions of Microsoft Word and text produced in LaTeX may be less obvious than it was in the past. Moreover, we believe that the appearance of text matters less than the scientific content and impact to the field. In particular, LaTeX is also used frequently for text that does not contain a significant amount of mathematical symbols and formula. We believe that the use of LaTeX under these circumstances is highly problematic and that researchers should reflect on the criteria that drive their preferences to use LaTeX over Microsoft Word for text that does not require significant mathematical representations.
...
A second decision criterion that factors into the choice to use a particular software system is reflection about what drives certain preferences. A striking result of our study is that LaTeX users are highly satisfied with their system despite reduced usability and productivity. From a psychological perspective, this finding may be related to motivational factors, i.e., the driving forces that compel or reinforce individuals to act in a certain way to achieve a desired goal. A vital motivational factor is the tendency to reduce cognitive dissonance. According to the theory of cognitive dissonance, each individual has a motivational drive to seek consonance between their beliefs and their actual actions. If a belief set does not concur with the individual’s actual behavior, then it is usually easier to change the belief rather than the behavior [6]. The results from many psychological studies in which people have been asked to choose between one of two items (e.g., products, objects, gifts, etc.) and then asked to rate the desirability, value, attractiveness, or usefulness of their choice, report that participants often reduce unpleasant feelings of cognitive dissonance by rationalizing the chosen alternative as more desirable than the unchosen alternative [6, 7]. This bias is usually unconscious and becomes stronger as the effort to reject the chosen alternative increases, which is similar in nature to the case of learning and using LaTeX.
...
Given these numbers it remains an open question to determine the amount of taxpayer money that is spent worldwide for researchers to use LaTeX over a more efficient document preparation system, which would free up their time to advance their respective field. Some publishers may save a significant amount of money by requesting or allowing LaTeX submissions because a well-formed LaTeX document complying with a well-designed class file (template) is much easier to bring into their publication workflow. However, this is at the expense of the researchers’ labor time and effort. We therefore suggest that leading scientific journals should consider accepting submissions in LaTeX only if this is justified by the level of mathematics presented in the paper. In all other cases, we think that scholarly journals should request authors to submit their documents in Word or PDF format. We believe that this would be a good policy for two reasons. First, we think that the appearance of the text is secondary to the scientific merit of an article and its impact to the field. And, second, preventing researchers from producing documents in LaTeX would save time and money to maximize the benefit of research and development for both the research team and the public.
[ed.: I sense some salt.
And basically no description of how "# errors" was calculated.]
https://news.ycombinator.com/item?id=8797002
I question the validity of their methodology.
At no point in the paper is exactly what is meant by a "formatting error" or a "typesetting error" defined. From what I gather, the participants in the study were required to reproduce the formatting and layout of the sample text. In theory, a LaTeX file should strictly be a semantic representation of the content of the document; while TeX may have been a raw typesetting language, this is most definitely not the intended use case of LaTeX and is overall a very poor test of its relative advantages and capabilities.
The separation of the semantic definition of the content from the rendering of the document is, in my opinion, the most important feature of LaTeX. Like CSS, this allows the actual formatting to be abstracted away, allowing plain (marked-up) content to be written without worrying about typesetting.
Word has some similar capabilities with styles, and can be used in a similar manner, though few Word users actually use the software properly. This may sound like a relatively insignificant point, but in practice, almost every Word document I have seen has some form of inconsistent formatting. If Word disallowed local formatting changes (including things such as relative spacing of nested bullet points), forcing all formatting changes to be done in document-global styles, it would be a far better typesetting system. Also, the users would be very unhappy.
Yes, LaTeX can undeniably be a pain in the arse, especially when it comes to trying to get figures in the right place; however the combination of a simple, semantic plain-text representation with a flexible and professional typesetting and rendering engine are undeniable and completely unaddressed by this study.
--
It seems that the test was heavily biased in favor of WYSIWYG.
Of course that approach makes it very simple to reproduce something, as has been tested here. Even simpler would be to scan the document and run OCR. The massive problem with both approaches (WYSIWYG and scanning) is that you can't generalize any of it. You're doomed repeating it forever.
(I'll also note the other significant issue with this study: when the ratings provided by participants came out opposite of their test results, they attributed it to irrational bias.)
https://www.nature.com/articles/d41586-019-01796-1
Over the past few years however, the line between the tools has blurred. In 2017, Microsoft made it possible to use LaTeX’s equation-writing syntax directly in Word, and last year it scrapped Word’s own equation editor. Other text editors also support elements of LaTeX, allowing newcomers to use as much or as little of the language as they like.
https://news.ycombinator.com/item?id=20191348
study
hmm
academia
writing
publishing
yak-shaving
technical-writing
software
tools
comparison
latex
scholar
regularizer
idk
microsoft
evidence-based
science
desktop
time
efficiency
multi
hn
commentary
critique
news
org:sci
flux-stasis
duplication
metrics
biases
...
However, our study suggests that LaTeX should be used as a document preparation system only in cases in which a document is heavily loaded with mathematical equations. For all other types of documents, our results suggest that LaTeX reduces the user’s productivity and results in more orthographical, grammatical, and formatting errors, more typos, and less written text than Microsoft Word over the same duration of time. LaTeX users may argue that the overall quality of the text that is created with LaTeX is better than the text that is created with Microsoft Word. Although this argument may be true, the differences between text produced in more recent editions of Microsoft Word and text produced in LaTeX may be less obvious than it was in the past. Moreover, we believe that the appearance of text matters less than the scientific content and impact to the field. In particular, LaTeX is also used frequently for text that does not contain a significant amount of mathematical symbols and formula. We believe that the use of LaTeX under these circumstances is highly problematic and that researchers should reflect on the criteria that drive their preferences to use LaTeX over Microsoft Word for text that does not require significant mathematical representations.
...
A second decision criterion that factors into the choice to use a particular software system is reflection about what drives certain preferences. A striking result of our study is that LaTeX users are highly satisfied with their system despite reduced usability and productivity. From a psychological perspective, this finding may be related to motivational factors, i.e., the driving forces that compel or reinforce individuals to act in a certain way to achieve a desired goal. A vital motivational factor is the tendency to reduce cognitive dissonance. According to the theory of cognitive dissonance, each individual has a motivational drive to seek consonance between their beliefs and their actual actions. If a belief set does not concur with the individual’s actual behavior, then it is usually easier to change the belief rather than the behavior [6]. The results from many psychological studies in which people have been asked to choose between one of two items (e.g., products, objects, gifts, etc.) and then asked to rate the desirability, value, attractiveness, or usefulness of their choice, report that participants often reduce unpleasant feelings of cognitive dissonance by rationalizing the chosen alternative as more desirable than the unchosen alternative [6, 7]. This bias is usually unconscious and becomes stronger as the effort to reject the chosen alternative increases, which is similar in nature to the case of learning and using LaTeX.
...
Given these numbers it remains an open question to determine the amount of taxpayer money that is spent worldwide for researchers to use LaTeX over a more efficient document preparation system, which would free up their time to advance their respective field. Some publishers may save a significant amount of money by requesting or allowing LaTeX submissions because a well-formed LaTeX document complying with a well-designed class file (template) is much easier to bring into their publication workflow. However, this is at the expense of the researchers’ labor time and effort. We therefore suggest that leading scientific journals should consider accepting submissions in LaTeX only if this is justified by the level of mathematics presented in the paper. In all other cases, we think that scholarly journals should request authors to submit their documents in Word or PDF format. We believe that this would be a good policy for two reasons. First, we think that the appearance of the text is secondary to the scientific merit of an article and its impact to the field. And, second, preventing researchers from producing documents in LaTeX would save time and money to maximize the benefit of research and development for both the research team and the public.
[ed.: I sense some salt.
And basically no description of how "# errors" was calculated.]
https://news.ycombinator.com/item?id=8797002
I question the validity of their methodology.
At no point in the paper is exactly what is meant by a "formatting error" or a "typesetting error" defined. From what I gather, the participants in the study were required to reproduce the formatting and layout of the sample text. In theory, a LaTeX file should strictly be a semantic representation of the content of the document; while TeX may have been a raw typesetting language, this is most definitely not the intended use case of LaTeX and is overall a very poor test of its relative advantages and capabilities.
june 2019 by nhaliday
Quality of Primary Care in Low-Income Countries: Facts and Economics | Annual Review of Economics
study article economics roots explanans quality healthcare world developing-world comparison wealth garett-jones multi twitter social commentary backup summary human-capital hive-mind measurement survey econotariat wealth-of-nations
june 2019 by nhaliday
Interview with Donald Knuth | Interview with Donald Knuth | InformIT
june 2019 by nhaliday
Andrew Binstock and Donald Knuth converse on the success of open source, the problem with multicore architecture, the disappointing lack of interest in literate programming, the menace of reusable code, and that urban legend about winning a programming contest with a single compilation.
Reusable vs. re-editable code: https://hal.archives-ouvertes.fr/hal-01966146/document
- Konrad Hinsen
https://www.johndcook.com/blog/2008/05/03/reusable-code-vs-re-editable-code/
I think whether code should be editable or in “an untouchable black box” depends on the number of developers involved, as well as their talent and motivation. Knuth is a highly motivated genius working in isolation. Most software is developed by large teams of programmers with varying degrees of motivation and talent. I think the further you move away from Knuth along these three axes the more important black boxes become.
nibble
interview
giants
expert-experience
programming
cs
software
contrarianism
carmack
oss
prediction
trends
linux
concurrency
desktop
comparison
checking
debugging
stories
engineering
hmm
idk
algorithms
books
debate
flux-stasis
duplication
parsimony
best-practices
writing
documentation
latex
intricacy
structure
hardware
caching
workflow
editors
composition-decomposition
coupling-cohesion
exposition
technical-writing
thinking
cracker-prog
code-organizing
grokkability
multi
techtariat
commentary
pdf
reflection
essay
examples
python
data-science
libraries
grokkability-clarity
june 2019 by nhaliday
One week of bugs
may 2019 by nhaliday
If I had to guess, I'd say I probably work around hundreds of bugs in an average week, and thousands in a bad week. It's not unusual for me to run into a hundred new bugs in a single week. But I often get skepticism when I mention that I run into multiple new (to me) bugs per day, and that this is inevitable if we don't change how we write tests. Well, here's a log of one week of bugs, limited to bugs that were new to me that week. After a brief description of the bugs, I'll talk about what we can do to improve the situation. The obvious answer is to spend more effort on testing, but everyone already knows we should do that and no one does it. That doesn't mean it's hopeless, though.
...
Here's where I'm supposed to write an appeal to take testing more seriously and put real effort into it. But we all know that's not going to work. It would take 90k LOC of tests to get Julia to be as well tested as a poorly tested prototype (falsely assuming linear complexity in size). That's two person-years of work, not even including time to debug and fix bugs (which probably brings it closer to four or five years). Who's going to do that? No one. Writing tests is like writing documentation. Everyone already knows you should do it. Telling people they should do it adds zero information.
Given that people aren't going to put any effort into testing, what's the best way to do it?
Property-based testing. Generative testing. Random testing. Concolic Testing (which was done long before the term was coined). Static analysis. Fuzzing. Statistical bug finding. There are lots of options. Some of them are actually the same thing because the terminology we use is inconsistent and buggy. I'm going to arbitrarily pick one to talk about, but they're all worth looking into.
...
There are a lot of great resources out there, but if you're just getting started, I found this description of types of fuzzers to be one of the most helpful (and simplest) things I've read.
John Regehr has a Udacity course on software testing. I haven't worked through it yet (Pablo Torres just pointed to it), but given the quality of Dr. Regehr's writing, I expect the course to be good.
For more on my perspective on testing, there's this.
Everything's broken and nobody's upset: https://www.hanselman.com/blog/EverythingsBrokenAndNobodysUpset.aspx
https://news.ycombinator.com/item?id=4531549
https://hypothesis.works/articles/the-purpose-of-hypothesis/
From the perspective of a user, the purpose of Hypothesis is to make it easier for you to write better tests.
From my perspective as the primary author, that is of course also a purpose of Hypothesis. I write a lot of code, it needs testing, and the idea of trying to do that without Hypothesis has become nearly unthinkable.
But, on a large scale, the true purpose of Hypothesis is to drag the world kicking and screaming into a new and terrifying age of high quality software.
Software is everywhere. We have built a civilization on it, and it’s only getting more prevalent as more services move online and embedded and “internet of things” devices become cheaper and more common.
Software is also terrible. It’s buggy, it’s insecure, and it’s rarely well thought out.
This combination is clearly a recipe for disaster.
The state of software testing is even worse. It’s uncontroversial at this point that you should be testing your code, but it’s a rare codebase whose authors could honestly claim that they feel its testing is sufficient.
Much of the problem here is that it’s too hard to write good tests. Tests take up a vast quantity of development time, but they mostly just laboriously encode exactly the same assumptions and fallacies that the authors had when they wrote the code, so they miss exactly the same bugs that the authors missed when they wrote the code.
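[ed.: a minimal sketch of the property-based style Hypothesis enables; the run-length encoder/decoder are made-up example functions, not from the article. Hypothesis generates the inputs and shrinks any failing case to a small counterexample.]
from hypothesis import given, strategies as st

def run_length_encode(s):
    # collapse runs of identical characters into (char, count) pairs
    out = []
    for ch in s:
        if out and out[-1][0] == ch:
            out[-1][1] += 1
        else:
            out.append([ch, 1])
    return out

def run_length_decode(pairs):
    return "".join(ch * n for ch, n in pairs)

# the property: decoding an encoding returns the original string, for *any*
# string Hypothesis generates, not just the handful of cases a human would write
@given(st.text())
def test_roundtrip(s):
    assert run_length_decode(run_length_encode(s)) == s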
Preventing the Collapse of Civilization [video]: https://news.ycombinator.com/item?id=19945452
- Jonathan Blow
NB: DevGAMM is a game industry conference
- loss of technological knowledge (Antikythera mechanism, aqueducts, etc.)
- hardware driving most gains, not software
- software's actually less robust, often poorly designed and overengineered these days
- *list of bugs he's encountered recently*:
https://youtu.be/pW-SOdj4Kkk?t=1387
- knowledge of trivia comes to matter more than general, deep knowledge
- does at least acknowledge value of DRY, reusing code, abstraction saving dev time
techtariat
dan-luu
tech
software
error
list
debugging
linux
github
robust
checking
oss
troll
lol
aphorism
webapp
email
google
facebook
games
julia
pls
compilers
communication
mooc
browser
rust
programming
engineering
random
jargon
formal-methods
expert-experience
prof
c(pp)
course
correctness
hn
commentary
video
presentation
carmack
pragmatic
contrarianism
pessimism
sv
unix
rhetoric
critique
worrydream
hardware
performance
trends
multiplicative
roots
impact
comparison
history
iron-age
the-classics
mediterranean
conquest-empire
gibbon
technology
the-world-is-just-atoms
flux-stasis
increase-decrease
graphics
hmm
idk
systems
os
abstraction
intricacy
worse-is-better/the-right-thing
build-packaging
microsoft
osx
apple
reflection
assembly
things
knowledge
detail-architecture
thick-thin
trivia
info-dynamics
caching
frameworks
generalization
systematic-ad-hoc
universalism-particularism
analytical-holistic
structure
tainter
libraries
tradeoffs
prepping
threat-modeling
network-structure
writing
risk
local-glob
may 2019 by nhaliday
Rust Creator Graydon Hoare Recounts the History of Compilers - The New Stack
techtariat presentation links commentary summary slides pdf programming pls plt compilers reflection history comparison cost-benefit c(pp) performance lisp functional ocaml-sml haskell formal-methods llvm tradeoffs measurement intricacy troll aphorism software hardware roots impact expert-experience jvm constraint-satisfaction pareto rust gnu
may 2019 by nhaliday
Intelligence predicts cooperativeness better than conscientiousness does - Marginal REVOLUTION
may 2019 by nhaliday
Intelligence has a large and positive long-run effect on cooperative behavior. The effect is strong when at the equilibrium of the repeated game there is a trade-off between short-run gains and long-run losses. Conscientiousness and Agreeableness have a natural, significant but transitory effect on cooperation rates
--
Note that agreeable people do cooperate more at first, but they don’t have the strategic ability and consistency of the higher IQ individuals in these games. Conscientiousness has multiple features, one of which is caution, and that deters cooperation, since the cautious are afraid of being taken advantage of. So, at least in these settings, high IQ really is the better predictor of cooperativeness, especially over longer-term horizons.
I think Garett Jones commented on this on Twitter or in a podcast?
http://www.unz.com/jthompson/prisoners-of-intelligence/
The researchers then deliberately paired up an above average intelligence player with one who was below average to see what happened. The overall return to the participants fell, because lower ability players tended to defect so as to obtain an immediate advantage, at great cost to the other player. How should the bright player respond? Simply continuing to try to cooperate does not work, because the duller player is then rewarded for his lack of cooperation. Instead, the “tit for tat” punishment strategy is required. Start by cooperating, and on the next round do whatever the other person did: if they cooperated, you cooperate; if they defected, you defect. The researchers call this “tough love”.
Four applications of retaliation were, on average, required to teach the lesson that lack of cooperation would be punished with reciprocal lack of cooperation. Eventually cooperation is established between bright and dull, but at an initial cost. Lower intelligence players learn to cooperate, because higher intelligence players punish them if they don’t. In societies where cooperation is already low, lenient and forgiving strategies become less frequent. There is very probably a level at which trust can be assumed, but below that punishment will be the norm. Where is the social tipping point below which cooperation is too costly a strategy? At what point do civil societies collapse and turn into uncivil bands?
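[ed.: a toy sketch (mine, not the study's code) of the "tough love" rule described above: cooperate on the first round, then mirror the other player's previous move.]
def tit_for_tat(my_history, their_history):
    if not their_history:        # first round: cooperate
        return "C"
    return their_history[-1]     # afterwards: copy their last move

def always_defect(my_history, their_history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    hist_a, hist_b = [], []
    for _ in range(rounds):
        a = strategy_a(hist_a, hist_b)
        b = strategy_b(hist_b, hist_a)
        hist_a.append(a)
        hist_b.append(b)
    return hist_a, hist_b

# tit-for-tat punishes a persistent defector after one round instead of being exploited forever:
print(play(tit_for_tat, always_defect, rounds=5))
# (['C', 'D', 'D', 'D', 'D'], ['D', 'D', 'D', 'D', 'D'])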
econotariat
marginal-rev
commentary
study
economics
behavioral-gen
psychology
cog-psych
microfoundations
hive-mind
cooperate-defect
iq
psychometrics
personality
discipline
long-short-run
patience
time-preference
equilibrium
multi
albion
scitariat
garett-jones
GT-101
coordination
alignment
homo-hetero
models
correlation
cost-benefit
rindermann-thompson
wealth-of-nations
may 2019 by nhaliday
Comparing within- and between-family polygenic score prediction | bioRxiv
april 2019 by nhaliday
https://twitter.com/StuartJRitchie/status/1116074740475736066
https://archive.is/bQnjM
See this thread for our new study on polygenic scores within fraternal twin pairs! Main point: take extra care with polygenic scores for traits like IQ & education, because they're confounded by (what seem to be) socioeconomic status effects. Not so for traits like height & BMI.
The idea is that the parenting is caused by the parental genotype, so it gets (mis)classified as a genetic effect on the children. It's really another way of looking at "genetic nurture" - see the papers from last year.
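[ed.: a toy simulation (mine, not the paper's; effect sizes are made up) of the confounding described in the thread: the parental genotype shapes both the transmitted alleles and the rearing environment, so the between-family slope overstates the direct genetic effect while the within-family (sibling-difference) slope does not.]
import numpy as np

rng = np.random.default_rng(0)
n_families = 20000
direct, nurture = 0.3, 0.3           # illustrative effect sizes, not estimates

parent_g = rng.normal(size=n_families)
# two siblings per family: same parental genotype, independent transmission noise
sib1_g = 0.7 * parent_g + rng.normal(scale=0.7, size=n_families)
sib2_g = 0.7 * parent_g + rng.normal(scale=0.7, size=n_families)
family_env = nurture * parent_g      # "genetic nurture": environment tracks parental genotype

def outcome(g):
    return direct * g + family_env + rng.normal(size=n_families)

y1, y2 = outcome(sib1_g), outcome(sib2_g)

between = np.polyfit(sib1_g, y1, 1)[0]               # between-family slope (confounded)
within = np.polyfit(sib1_g - sib2_g, y1 - y2, 1)[0]  # sibling differences cancel family_env
print(between, within)   # between-family estimate lands well above the direct effect of 0.3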
study
bio
preprint
biodet
behavioral-gen
genetics
sib-study
GWAS
class
s-factor
iq
education
attention
disease
psychiatry
embodied
health
environmental-effects
parenting
regularizer
spearhead
multi
twitter
social
commentary
backup
april 2019 by nhaliday
Language Log » English or Mandarin as the World Language?
february 2019 by nhaliday
- writing system frequently mentioned as barrier
- also imprecision of Chinese might hurt its use for technical writing
- most predict it won't (though English might be replaced by the absence of any lingua franca, per Nicholas Ostler)
linguistics
language
foreign-lang
china
asia
anglo
world
trends
prediction
speculation
expert-experience
analytical-holistic
writing
network-structure
science
discussion
commentary
flux-stasis
nationalism-globalism
comparison
org:edu
february 2019 by nhaliday
Information Processing: PanOpticon in my Pocket: 0.35GB/month of surveillance, no charge!
hsu scitariat commentary links data cocktail intel privacy opsec google time density spatial mobile finance tech network-structure anonymity identity advertising huge-data-the-biggest security threat-modeling labor speculation examples inference open-closed
september 2018 by nhaliday
Re-identification of genomic data using long range familial searches | bioRxiv
august 2018 by nhaliday
https://twitter.com/EricTopol/status/1013094074159517697
What happens when you combine #AI facial recognition and #genomics to identify a person in the US?
Half of American adults in facial recognition databases https://www.theguardian.com/world/2016/oct/18/police-facial-recognition-database-surveillance-profiling …
Long-range familial searches
https://www.biorxiv.org/content/early/2018/06/18/350231 … @erlichya
study
bio
preprint
genetics
genomics
measurement
identity
spreading
criminology
criminal-justice
methodology
kinship
trees
usa
scale
huge-data-the-biggest
gnxp
scitariat
privacy
intel
leviathan
government
whole-partial-many
population
demographics
crypto
ethical-algorithms
unintended-consequences
dataset
state-of-art
differential-privacy
the-watchers
multi
twitter
social
commentary
technology
biotech
computer-vision
matching
volo-avolo
civil-liberty
degrees-of-freedom
legibility
managerial-state
security
proposal
regulation
risk
graphs
protocol-metadata
august 2018 by nhaliday
Jordan Peterson is Wrong About the Case for the Left
july 2018 by nhaliday
I suggest that the tension of which he speaks is fully formed and self-contained completely within conservatism. Balancing those two forces is, in fact, what conservatism is all about. Thomas Sowell, in A Conflict of Visions: Ideological Origins of Political Struggles describes the conservative outlook as (paraphrasing): “There are no solutions, only tradeoffs.”
The real tension is between balance on the right and imbalance on the left.
In Towards a Cognitive Theory of Politics in the online magazine Quillette I make the case that left and right are best understood as psychological profiles consisting of 1) cognitive style, and 2) moral matrix.
There are two predominant cognitive styles and two predominant moral matrices.
The two cognitive styles are described by Arthur Herman in his book The Cave and the Light: Plato Versus Aristotle, and the Struggle for the Soul of Western Civilization, in which Plato and Aristotle serve as metaphors for them. These two quotes from the book summarize the two styles:
Despite their differences, Plato and Aristotle agreed on many things. They both stressed the importance of reason as our guide for understanding and shaping the world. Both believed that our physical world is shaped by certain eternal forms that are more real than matter. The difference was that Plato’s forms existed outside matter, whereas Aristotle’s forms were unrealizable without it. (p. 61)
The twentieth century’s greatest ideological conflicts do mark the violent unfolding of a Platonist versus Aristotelian view of what it means to be free and how reason and knowledge ultimately fit into our lives (p.539-540)
The Platonic cognitive style amounts to pure abstract reason, “unconstrained” by reality. It has no limiting principle. It is imbalanced. Aristotelian thinking also relies on reason, but it is “constrained” by empirical reality. It has a limiting principle. It is balanced.
The two moral matrices are described by Jonathan Haidt in his book The Righteous Mind: Why Good People Are Divided by Politics and Religion. Moral matrices are collections of moral foundations, which are psychological adaptations of social cognition created in us by hundreds of millions of years of natural selection as we evolved into the social animal. There are six moral foundations. They are:
Care/Harm
Fairness/Cheating
Liberty/Oppression
Loyalty/Betrayal
Authority/Subversion
Sanctity/Degradation
The first three moral foundations are called the “individualizing” foundations because they’re focused on the autonomy and well being of the individual person. The second three foundations are called the “binding” foundations because they’re focused on helping individuals form into cooperative groups.
One of the two predominant moral matrices relies almost entirely on the individualizing foundations, and of those mostly just care. It is all individualizing all the time. No balance. The other moral matrix relies on all of the moral foundations relatively equally; individualizing and binding in tension. Balanced.
The leftist psychological profile is made from the imbalanced Platonic cognitive style in combination with the first, imbalanced, moral matrix.
The conservative psychological profile is made from the balanced Aristotelian cognitive style in combination with the balanced moral matrix.
It is not true that the tension between left and right is a balance between the defense of the dispossessed and the defense of hierarchies.
It is true that the tension between left and right is between an imbalanced worldview unconstrained by empirical reality and a balanced worldview constrained by it.
A Venn Diagram of the two psychological profiles looks like this:
commentary
albion
canada
journos-pundits
philosophy
politics
polisci
ideology
coalitions
left-wing
right-wing
things
phalanges
reason
darwinian
tradition
empirical
the-classics
big-peeps
canon
comparison
thinking
metabuch
skeleton
lens
psychology
social-psych
morality
justice
civil-liberty
authoritarianism
love-hate
duty
tribalism
us-them
sanctity-degradation
revolution
individualism-collectivism
n-factor
europe
the-great-west-whale
pragmatic
prudence
universalism-particularism
analytical-holistic
nationalism-globalism
social-capital
whole-partial-many
pic
intersection-connectedness
links
news
org:mag
letters
rhetoric
contrarianism
intricacy
haidt
scitariat
critique
debate
forms-instances
reduction
infographic
apollonian-dionysian
being-becoming
essence-existence
july 2018 by nhaliday
Overcoming Bias : Beware Covert War Morality Tales
ratty hanson fiction reflection thinking rationality truth religion theos hidden-motives social-norms coordination cooperate-defect signaling morality realness cynicism-idealism good-evil tribalism us-them peace-violence war justice telos-atelos farmers-and-foragers trends history early-modern study summary anthropology sapiens culture extra-introversion personality survey order-disorder open-closed stress psych-architecture discipline self-control self-interest curiosity evolution EEA evopsych epistemic alignment shift wealth modernity ends-means nietzschean iron-age mediterranean the-classics canon virtu nationalism-globalism roots duty values diversity yvain ssc links commentary quotes universalism-particularism absolute-relative cultural-dynamics culture-war myth film intel identity-politics subculture authoritarianism government revolution politics coalitions ideology polarization regression-to-mean X-not-about-Y the-devil god-man-beast-victim duality janus
june 2018 by nhaliday
John Dee - Wikipedia
april 2018 by nhaliday
John Dee (13 July 1527 – 1608 or 1609) was an English mathematician, astronomer, astrologer, occult philosopher,[5] and advisor to Queen Elizabeth I. He devoted much of his life to the study of alchemy, divination, and Hermetic philosophy. He was also an advocate of England's imperial expansion into a "British Empire", a term he is generally credited with coining.[6]
Dee straddled the worlds of modern science and magic just as the former was emerging. One of the most learned men of his age, he had been invited to lecture on the geometry of Euclid at the University of Paris while still in his early twenties. Dee was an ardent promoter of mathematics and a respected astronomer, as well as a leading expert in navigation, having trained many of those who would conduct England's voyages of discovery.
Simultaneously with these efforts, Dee immersed himself in the worlds of magic, astrology and Hermetic philosophy. He devoted much time and effort in the last thirty years or so of his life to attempting to commune with angels in order to learn the universal language of creation and bring about the pre-apocalyptic unity of mankind. However, Robert Hooke suggested in the chapter Of Dr. Dee's Book of Spirits, that John Dee made use of Trithemian steganography, to conceal his communication with Elizabeth I.[7] A student of the Renaissance Neo-Platonism of Marsilio Ficino, Dee did not draw distinctions between his mathematical research and his investigations into Hermetic magic, angel summoning and divination. Instead he considered all of his activities to constitute different facets of the same quest: the search for a transcendent understanding of the divine forms which underlie the visible world, which Dee called "pure verities".
In his lifetime, Dee amassed one of the largest libraries in England. His high status as a scholar also allowed him to play a role in Elizabethan politics. He served as an occasional advisor and tutor to Elizabeth I and nurtured relationships with her ministers Francis Walsingham and William Cecil. Dee also tutored and enjoyed patronage relationships with Sir Philip Sidney, his uncle Robert Dudley, 1st Earl of Leicester, and Edward Dyer. He also enjoyed patronage from Sir Christopher Hatton.
https://twitter.com/Logo_Daedalus/status/985203144044040192
https://archive.is/h7ibQ
mind meld
Leave Me Alone! Misanthropic Writings from the Anti-Social Edge
people
big-peeps
old-anglo
wiki
history
early-modern
britain
anglosphere
optimate
philosophy
mystic
deep-materialism
science
aristos
math
geometry
conquest-empire
nietzschean
religion
christianity
theos
innovation
the-devil
forms-instances
god-man-beast-victim
gnosis-logos
expansionism
age-of-discovery
oceans
frontier
multi
twitter
social
commentary
backup
pic
memes(ew)
gnon
🐸
books
literature
april 2018 by nhaliday
More arguments against blockchain, most of all about trust - Marginal REVOLUTION
april 2018 by nhaliday
Auditing software is hard! The most-heavily scrutinized smart contract in history had a small bug that nobody noticed — that is, until someone did notice it, and used it to steal fifty million dollars. If cryptocurrency enthusiasts putting together a $150m investment fund can’t properly audit the software, how confident are you in your e-book audit? Perhaps you would rather write your own counteroffer software contract, in case this e-book author has hidden a recursion bug in their version to drain your ethereum wallet of all your life savings?
It’s a complicated way to buy a book! It’s not trustless, you’re trusting in the software (and your ability to defend yourself in a software-driven world), instead of trusting other people.
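[ed.: a toy sketch (mine, not from the post) of the kind of "recursion bug" being alluded to: a wallet that pays out before updating its ledger can be drained by a callback that re-enters withdraw().]
class NaiveWallet:
    def __init__(self, balances):
        self.balances = dict(balances)        # user -> deposited amount
        self.vault = sum(balances.values())   # total funds actually held

    def withdraw(self, user, receive_callback):
        amount = self.balances[user]
        if amount > 0 and self.vault >= amount:
            self.vault -= amount
            receive_callback(amount)          # external call happens first...
            self.balances[user] = 0           # ...the ledger is updated too late

wallet = NaiveWallet({"attacker": 10, "victim": 90})
stolen = []

def malicious_receive(amount):
    stolen.append(amount)
    if wallet.vault >= wallet.balances["attacker"]:
        wallet.withdraw("attacker", malicious_receive)   # re-enter before the reset

wallet.withdraw("attacker", malicious_receive)
print(sum(stolen), wallet.vault)   # the attacker pulls the whole vault, not just their 10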
econotariat
marginal-rev
links
commentary
quotes
bitcoin
cryptocurrency
blockchain
crypto
trust
money
monetary-fiscal
technology
software
institutions
government
comparison
cost-benefit
primitivism
eden-heaven
april 2018 by nhaliday
Theories of humor - Wikipedia
april 2018 by nhaliday
There are many theories of humor which attempt to explain what humor is, what social functions it serves, and what would be considered humorous. Among the prevailing types of theories that attempt to account for the existence of humor, there are psychological theories, the vast majority of which consider humor to be very healthy behavior; there are spiritual theories, which consider humor to be an inexplicable mystery, very much like a mystical experience.[1] Although various classical theories of humor and laughter may be found, in contemporary academic literature, three theories of humor appear repeatedly: relief theory, superiority theory, and incongruity theory.[2] Among current humor researchers, there is no consensus about which of these three theories of humor is most viable.[2] Proponents of each one originally claimed their theory to be capable of explaining all cases of humor.[2][3] However, they now acknowledge that although each theory generally covers its own area of focus, many instances of humor can be explained by more than one theory.[2][3][4][5] Incongruity and superiority theories, for instance, seem to describe complementary mechanisms which together create humor.[6]
...
Relief theory
Relief theory maintains that laughter is a homeostatic mechanism by which psychological tension is reduced.[2][3][7] Humor may thus for example serve to facilitate relief of the tension caused by one's fears.[8] Laughter and mirth, according to relief theory, result from this release of nervous energy.[2] Humor, according to relief theory, is used mainly to overcome sociocultural inhibitions and reveal suppressed desires. It is believed that this is the reason we laugh whilst being tickled, due to a buildup of tension as the tickler "strikes".[2][9] According to Herbert Spencer, laughter is an "economical phenomenon" whose function is to release "psychic energy" that had been wrongly mobilized by incorrect or false expectations. The latter point of view was supported also by Sigmund Freud.
Superiority theory
The superiority theory of humor traces back to Plato and Aristotle, and Thomas Hobbes' Leviathan. The general idea is that a person laughs about misfortunes of others (so called schadenfreude), because these misfortunes assert the person's superiority on the background of shortcomings of others.[10] Socrates was reported by Plato as saying that the ridiculous was characterized by a display of self-ignorance.[11] For Aristotle, we laugh at inferior or ugly individuals, because we feel a joy at feeling superior to them.[12]
Incongruous juxtaposition theory
The incongruity theory states that humor is perceived at the moment of realization of incongruity between a concept involved in a certain situation and the real objects thought to be in some relation to the concept.[10]
Since the main point of the theory is not the incongruity per se, but its realization and resolution (i.e., putting the objects in question into the real relation), it is often called the incongruity-resolution theory.[10]
...
Detection of mistaken reasoning
In 2011, three researchers, Hurley, Dennett and Adams, published a book that reviews previous theories of humor and many specific jokes. They propose the theory that humor evolved because it strengthens the ability of the brain to find mistakes in active belief structures, that is, to detect mistaken reasoning.[46] This is somewhat consistent with the sexual selection theory, because, as stated above, humor would be a reliable indicator of an important survival trait: the ability to detect mistaken reasoning. However, the three researchers argue that humor is fundamentally important because it is the very mechanism that allows the human brain to excel at practical problem solving. Thus, according to them, humor did have survival value even for early humans, because it enhanced the neural circuitry needed to survive.
Misattribution theory
Misattribution is one theory of humor that describes an audience's inability to identify exactly why they find a joke to be funny. The formal theory is attributed to Zillmann & Bryant (1980) in their article, "Misattribution Theory of Tendentious Humor", published in Journal of Experimental Social Psychology. They derived the critical concepts of the theory from Sigmund Freud's Wit and Its Relation to the Unconscious (note: from a Freudian perspective, wit is separate from humor), originally published in 1905.
Benign violation theory
The benign violation theory (BVT) is developed by researchers A. Peter McGraw and Caleb Warren.[47] The BVT integrates seemingly disparate theories of humor to predict that humor occurs when three conditions are satisfied: 1) something threatens one's sense of how the world "ought to be", 2) the threatening situation seems benign, and 3) a person sees both interpretations at the same time.
From an evolutionary perspective, humorous violations likely originated as apparent physical threats, like those present in play fighting and tickling. As humans evolved, the situations that elicit humor likely expanded from physical threats to other violations, including violations of personal dignity (e.g., slapstick, teasing), linguistic norms (e.g., puns, malapropisms), social norms (e.g., strange behaviors, risqué jokes), and even moral norms (e.g., disrespectful behaviors). The BVT suggests that anything that threatens one's sense of how the world "ought to be" will be humorous, so long as the threatening situation also seems benign.
...
Sense of humor, sense of seriousness
One must have a sense of humor and a sense of seriousness to distinguish what is supposed to be taken literally or not. An even more keen sense is needed when humor is used to make a serious point.[48][49] Psychologists have studied how humor is intended to be taken as having seriousness, as when court jesters used humor to convey serious information. Conversely, when humor is not intended to be taken seriously, bad taste in humor may cross a line after which it is taken seriously, though not intended.[50]
Philosophy of humor bleg: http://marginalrevolution.com/marginalrevolution/2017/03/philosophy-humor-bleg.html
Inside Jokes: https://mitpress.mit.edu/books/inside-jokes
humor as reward for discovering inconsistency in inferential chain
https://twitter.com/search?q=comedy%20OR%20humor%20OR%20humour%20from%3Asarahdoingthing&src=typd
https://twitter.com/sarahdoingthing/status/500000435529195520
https://twitter.com/sarahdoingthing/status/568346955811663872
https://twitter.com/sarahdoingthing/status/600792582453465088
https://twitter.com/sarahdoingthing/status/603215362033778688
https://twitter.com/sarahdoingthing/status/605051508472713216
https://twitter.com/sarahdoingthing/status/606197597699604481
https://twitter.com/sarahdoingthing/status/753514548787683328
https://en.wikipedia.org/wiki/Humour
People of all ages and cultures respond to humour. Most people are able to experience humour—be amused, smile or laugh at something funny—and thus are considered to have a sense of humour. The hypothetical person lacking a sense of humour would likely find the behaviour inducing it to be inexplicable, strange, or even irrational.
...
Ancient Greece
Western humour theory begins with Plato, who attributed to Socrates (as a semi-historical dialogue character) in the Philebus (p. 49b) the view that the essence of the ridiculous is an ignorance in the weak, who are thus unable to retaliate when ridiculed. Later, in Greek philosophy, Aristotle, in the Poetics (1449a, pp. 34–35), suggested that an ugliness that does not disgust is fundamental to humour.
...
China
Confucianist Neo-Confucian orthodoxy, with its emphasis on ritual and propriety, has traditionally looked down upon humour as subversive or unseemly. The Confucian "Analects" itself, however, depicts the Master as fond of humorous self-deprecation, once comparing his wanderings to the existence of a homeless dog.[10] Early Daoist philosophical texts such as "Zhuangzi" pointedly make fun of Confucian seriousness and make Confucius himself a slow-witted figure of fun.[11] Joke books containing a mix of wordplay, puns, situational humor, and play with taboo subjects like sex and scatology, remained popular over the centuries. Local performing arts, storytelling, vernacular fiction, and poetry offer a wide variety of humorous styles and sensibilities.
...
Physical attractiveness
90% of men and 81% of women, all college students, report having a sense of humour is a crucial characteristic looked for in a romantic partner.[21] Humour and honesty were ranked as the two most important attributes in a significant other.[22] It has since been recorded that humour becomes more evident and significantly more important as the level of commitment in a romantic relationship increases.[23] Recent research suggests expressions of humour in relation to physical attractiveness are two major factors in the desire for future interaction.[19] Women regard physical attractiveness less highly compared to men when it came to dating, a serious relationship, and sexual intercourse.[19] However, women rate humorous men more desirable than nonhumorous individuals for a serious relationship or marriage, but only when these men were physically attractive.[19]
Furthermore, humorous people are perceived by others to be more cheerful but less intellectual than nonhumorous people. Self-deprecating humour has been found to increase the desirability of physically attractive others for committed relationships.[19] The results of a study conducted by McMaster University suggest humour can positively affect one’s desirability for a specific relationship partner, but this effect is only most likely to occur when men use humour and are evaluated by women.[24] No evidence was found to suggest men prefer women with a sense of humour as partners, nor women preferring other women with a sense of humour as potential partners.[24] When women were given the forced-choice design in the study, they chose funny men as potential … [more]
article
list
wiki
reference
psychology
cog-psych
social-psych
emotion
things
phalanges
concept
neurons
instinct
👽
comedy
models
theory-of-mind
explanans
roots
evopsych
signaling
humanity
logic
sex
sexuality
cost-benefit
iq
intelligence
contradiction
homo-hetero
egalitarianism-hierarchy
humility
reinforcement
EEA
eden
play
telos-atelos
impetus
theos
mystic
philosophy
big-peeps
the-classics
literature
inequality
illusion
within-without
dennett
dignity
social-norms
paradox
parallax
analytical-holistic
multi
econotariat
marginal-rev
discussion
speculation
books
impro
carcinisation
postrat
cool
twitter
social
quotes
commentary
search
farmers-and-foragers
🦀
evolution
sapiens
metameta
insight
novelty
wire-guided
realness
chart
beauty
nietzschean
class
pop-diff
culture
alien-character
confucian
order-disorder
sociality
🐝
integrity
properties
gender
gender-diff
china
asia
sinosphere
long-short-run
trust
religion
ideology
elegance
psycho-atoms
...
Relief theory
Relief theory maintains that laughter is a homeostatic mechanism by which psychological tension is reduced.[2][3][7] Humor may thus for example serve to facilitate relief of the tension caused by one's fears.[8] Laughter and mirth, according to relief theory, result from this release of nervous energy.[2] Humor, according to relief theory, is used mainly to overcome sociocultural inhibitions and reveal suppressed desires. It is believed that this is the reason we laugh whilst being tickled, due to a buildup of tension as the tickler "strikes".[2][9] According to Herbert Spencer, laughter is an "economical phenomenon" whose function is to release "psychic energy" that had been wrongly mobilized by incorrect or false expectations. The latter point of view was supported also by Sigmund Freud.
Superiority theory
The superiority theory of humor traces back to Plato and Aristotle, and Thomas Hobbes' Leviathan. The general idea is that a person laughs about misfortunes of others (so called schadenfreude), because these misfortunes assert the person's superiority on the background of shortcomings of others.[10] Socrates was reported by Plato as saying that the ridiculous was characterized by a display of self-ignorance.[11] For Aristotle, we laugh at inferior or ugly individuals, because we feel a joy at feeling superior to them.[12]
Incongruous juxtaposition theory
The incongruity theory states that humor is perceived at the moment of realization of incongruity between a concept involved in a certain situation and the real objects thought to be in some relation to the concept.[10]
Since the main point of the theory is not the incongruity per se, but its realization and resolution (i.e., putting the objects in question into the real relation), it is often called the incongruity-resolution theory.[10]
...
Detection of mistaken reasoning
In 2011, three researchers, Hurley, Dennett and Adams, published a book that reviews previous theories of humor and many specific jokes. They propose the theory that humor evolved because it strengthens the ability of the brain to find mistakes in active belief structures, that is, to detect mistaken reasoning.[46] This is somewhat consistent with the sexual selection theory, because, as stated above, humor would be a reliable indicator of an important survival trait: the ability to detect mistaken reasoning. However, the three researchers argue that humor is fundamentally important because it is the very mechanism that allows the human brain to excel at practical problem solving. Thus, according to them, humor did have survival value even for early humans, because it enhanced the neural circuitry needed to survive.
Misattribution theory
Misattribution is one theory of humor that describes an audience's inability to identify exactly why they find a joke funny. The formal theory is attributed to Zillmann & Bryant (1980) in their article "Misattribution Theory of Tendentious Humor", published in the Journal of Experimental Social Psychology. They derived the critical concepts of the theory from Sigmund Freud's Wit and Its Relation to the Unconscious (note: from a Freudian perspective, wit is separate from humor), originally published in 1905.
Benign violation theory
The benign violation theory (BVT) was developed by researchers A. Peter McGraw and Caleb Warren.[47] The BVT integrates seemingly disparate theories of humor to predict that humor occurs when three conditions are satisfied: 1) something threatens one's sense of how the world "ought to be", 2) the threatening situation seems benign, and 3) a person sees both interpretations at the same time.
From an evolutionary perspective, humorous violations likely originated as apparent physical threats, like those present in play fighting and tickling. As humans evolved, the situations that elicit humor likely expanded from physical threats to other violations, including violations of personal dignity (e.g., slapstick, teasing), linguistic norms (e.g., puns, malapropisms), social norms (e.g., strange behaviors, risqué jokes), and even moral norms (e.g., disrespectful behaviors). The BVT suggests that anything that threatens one's sense of how the world "ought to be" will be humorous, so long as the threatening situation also seems benign.
...
Sense of humor, sense of seriousness
One must have a sense of humor and a sense of seriousness to distinguish what is supposed to be taken literally from what is not. An even keener sense is needed when humor is used to make a serious point.[48][49] Psychologists have studied how humor can be intended to carry serious weight, as when court jesters used humor to convey serious information. Conversely, when humor is not intended to be taken seriously, bad taste in humor may cross a line after which it is taken seriously, even though that was not the intent.[50]
Philosophy of humor bleg: http://marginalrevolution.com/marginalrevolution/2017/03/philosophy-humor-bleg.html
Inside Jokes: https://mitpress.mit.edu/books/inside-jokes
humor as reward for discovering inconsistency in inferential chain
https://twitter.com/search?q=comedy%20OR%20humor%20OR%20humour%20from%3Asarahdoingthing&src=typd
https://twitter.com/sarahdoingthing/status/500000435529195520
https://twitter.com/sarahdoingthing/status/568346955811663872
https://twitter.com/sarahdoingthing/status/600792582453465088
https://twitter.com/sarahdoingthing/status/603215362033778688
https://twitter.com/sarahdoingthing/status/605051508472713216
https://twitter.com/sarahdoingthing/status/606197597699604481
https://twitter.com/sarahdoingthing/status/753514548787683328
https://en.wikipedia.org/wiki/Humour
People of all ages and cultures respond to humour. Most people are able to experience humour—be amused, smile or laugh at something funny—and thus are considered to have a sense of humour. The hypothetical person lacking a sense of humour would likely find the behaviour inducing it to be inexplicable, strange, or even irrational.
...
Ancient Greece
Western humour theory begins with Plato, who attributed to Socrates (as a semi-historical dialogue character) in the Philebus (p. 49b) the view that the essence of the ridiculous is an ignorance in the weak, who are thus unable to retaliate when ridiculed. Later, in Greek philosophy, Aristotle, in the Poetics (1449a, pp. 34–35), suggested that an ugliness that does not disgust is fundamental to humour.
...
China
Neo-Confucian orthodoxy, with its emphasis on ritual and propriety, has traditionally looked down upon humour as subversive or unseemly. The Confucian "Analects" itself, however, depicts the Master as fond of humorous self-deprecation, once comparing his wanderings to the existence of a homeless dog.[10] Early Daoist philosophical texts such as the "Zhuangzi" pointedly make fun of Confucian seriousness and make Confucius himself a slow-witted figure of fun.[11] Joke books containing a mix of wordplay, puns, situational humor, and play with taboo subjects like sex and scatology remained popular over the centuries. Local performing arts, storytelling, vernacular fiction, and poetry offer a wide variety of humorous styles and sensibilities.
...
Physical attractiveness
Among college students, 90% of men and 81% of women report that a sense of humour is a crucial characteristic they look for in a romantic partner.[21] Humour and honesty were ranked as the two most important attributes in a significant other.[22] It has since been recorded that humour becomes more evident and significantly more important as the level of commitment in a romantic relationship increases.[23] Recent research suggests that expressions of humour and physical attractiveness are two major factors in the desire for future interaction.[19] Women regard physical attractiveness less highly than men do when it comes to dating, a serious relationship, and sexual intercourse.[19] However, women rate humorous men as more desirable than nonhumorous men for a serious relationship or marriage, but only when these men are physically attractive.[19]
Furthermore, humorous people are perceived by others to be more cheerful but less intellectual than nonhumorous people. Self-deprecating humour has been found to increase the desirability of physically attractive others for committed relationships.[19] The results of a study conducted at McMaster University suggest that humour can positively affect one's desirability as a relationship partner, but this effect is most likely to occur when men use humour and are evaluated by women.[24] No evidence was found that men prefer women with a sense of humour as partners, or that women prefer other women with a sense of humour as potential partners.[24] When women were given the forced-choice design in the study, they chose funny men as potential … [more]
april 2018 by nhaliday
Max Tani on Twitter: "Bannon says the three pillars of his new ideology are nationalism, cryptocurrencies, and digital sovereignty."
twitter social commentary quotes trump nascent-state politics ideology current-events nationalism-globalism cryptocurrency bitcoin internet privacy axioms allodium leviathan
march 2018 by nhaliday
The origin of the Ashkenazi Jews in early medieval Europe – Gene Expression
gnxp scitariat commentary study summary bio genetics population-genetics judaism gene-flow history mostly-modern time religion christianity gender egalitarianism-hierarchy class feudal decentralized europe the-great-west-whale occident heterodox israel medieval leviathan government
march 2018 by nhaliday