nhaliday + minimalism 45
About This Website - Gwern.net
ratty gwern people summary workflow exocortex long-short-run software oss vcs internet web flux-stasis time sequential spreading longform discipline writing vulgar subculture scifi-fantasy fiction meta:reading tools priors-posteriors meta:prediction lesswrong planning info-foraging r-lang feynman giants heavyweights learning mindful retention notetaking pdf backup profile confidence epistemic rationality yak-shaving checking wire-guided hn forum aggregator quotes aphorism time-series data frontend minimalism form-design
7 weeks ago by nhaliday
58 Bytes of CSS to look great nearly everywhere | Hacker News
7 weeks ago by nhaliday
Author mentions this took a long time to arrive at.
I recommend "Web Design in 4 Minutes" from the CSS guru behind Bulma:
https://jgthms.com/web-design-in-4-minutes/
[ed.: lottsa sensible criticism of the above in the comments]
https://news.ycombinator.com/item?id=12166687
hn
commentary
techtariat
design
form-design
howto
web
frontend
minimum-viable
efficiency
minimalism
parsimony
move-fast-(and-break-things)
tutorial
multi
mobile
init
advice
I recommend "Web Design in 4 Minutes" from the CSS guru behind Bulma:
https://jgthms.com/web-design-in-4-minutes/
[ed.: lottsa sensible criticism of the above in the comments]
https://news.ycombinator.com/item?id=12166687
7 weeks ago by nhaliday
Ask HN: Favorite note-taking software? | Hacker News
7 weeks ago by nhaliday
Ask HN: What is your ideal note-taking software and/or hardware?: https://news.ycombinator.com/item?id=13221158
my wishlist as of 2019:
- web + desktop macOS + mobile iOS (at least viewing on the last but ideally also editing)
- sync across all those
- open-source data format that's easy to manipulate for scripting purposes
- flexible organization: mostly tree hierarchical (subsuming linear/unorganized) but with the option for directed (acyclic) graph (possibly a second layer of structure/linking)
- can store plain text, LaTeX, diagrams, and raster/vector images (video prob not necessary except as links to elsewhere)
- full-text search
- somehow digest/import data from Pinboard, Workflowy, Papers 3/Bookends, and Skim, ideally absorbing most of their functionality
- so, eg, track notes/annotations side-by-side w/ original PDF/DjVu/ePub documents (to replace Papers3/Bookends/Skim), and maybe web pages too (to replace Pinboard)
- OCR of handwritten notes (how to handle equations/diagrams?)
- various forms of NLP analysis of everything (topic models, clustering, etc)
- maybe version control (less important than export)
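[ed.: a throwaway sketch of why the open-format and full-text-search bullets above go together: with notes as plain text files, search is a dozen lines of stdlib Python. The *.md layout is my assumption, not any particular app's format.]
import pathlib
import re
import sys

def search_notes(root, pattern):
    """Yield (path, line_number, line) for every line matching pattern."""
    rx = re.compile(pattern, re.IGNORECASE)
    for path in sorted(pathlib.Path(root).rglob("*.md")):
        for i, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            if rx.search(line):
                yield path, i, line.strip()

if __name__ == "__main__":
    # usage: python search_notes.py ~/notes "spaced repetition"
    for path, lineno, line in search_notes(sys.argv[1], sys.argv[2]):
        print(f"{path}:{lineno}: {line}")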
candidates?:
- Evernote prob ruled out due to heavy use of proprietary data formats (unless I can find some way to export with tolerably clean output)
- Workflowy/Dynalist are good but only cover a subset of functionality I want
- org-mode doesn't interact w/ mobile well (and I haven't evaluated it in detail otherwise)
- TiddlyWiki/Zim are in the running, but not sure about mobile
- idk about vimwiki but I'm not that wedded to vim and it seems less widely used than org-mode/TiddlyWiki/Zim so prob pass on that
- Quiver/Joplin/Inkdrop look similar and cover a lot of bases, TODO: evaluate more
- Trilium looks especially promising, tho mobile is read-only; for macOS desktop, see this: https://github.com/zadam/trilium/issues/511
- RocketBook is an interesting scanning/OCR solution but prob not sufficient due to its proprietary data format
- TODO: many more candidates, eg, TreeSheets, Gingko, OneNote (macOS?...), Notion (proprietary data format...), Zotero, Nodebook (https://nodebook.io/landing), Polar (https://getpolarized.io), Roam (looks very promising)
Ask HN: What do you use for you personal note taking activity?: https://news.ycombinator.com/item?id=15736102
Ask HN: What are your note-taking techniques?: https://news.ycombinator.com/item?id=9976751
Ask HN: How do you take notes (useful note-taking strategies)?: https://news.ycombinator.com/item?id=13064215
Ask HN: How to get better at taking notes?: https://news.ycombinator.com/item?id=21419478
Ask HN: How did you build up your personal knowledge base?: https://news.ycombinator.com/item?id=21332957
nice comment from math guy on structure and difference between math and CS: https://news.ycombinator.com/item?id=21338628
useful comment collating related discussions: https://news.ycombinator.com/item?id=21333383
highlights:
Designing a Personal Knowledge base: https://news.ycombinator.com/item?id=8270759
Ask HN: How to organize personal knowledge?: https://news.ycombinator.com/item?id=17892731
Do you use a personal 'knowledge base'?: https://news.ycombinator.com/item?id=21108527
Ask HN: How do you share/organize knowledge at work and life?: https://news.ycombinator.com/item?id=21310030
other stuff:
plain text: https://news.ycombinator.com/item?id=21685660
https://www.getdnote.com/blog/how-i-built-personal-knowledge-base-for-myself/
Tiago Forte: https://www.buildingasecondbrain.com
hn search: https://hn.algolia.com/?query=notetaking&type=story
Slant comparison commentary: https://news.ycombinator.com/item?id=7011281
good comparison of options here in comments here (and Trilium itself looks good): https://news.ycombinator.com/item?id=18840990
https://en.wikipedia.org/wiki/Comparison_of_note-taking_software
wikis:
https://www.slant.co/versus/5116/8768/~tiddlywiki_vs_zim
https://www.wikimatrix.org/compare/tiddlywiki+zim
http://tiddlymap.org/
https://www.zim-wiki.org/manual/Plugins/BackLinks_Pane.html
https://zim-wiki.org/manual/Plugins/Link_Map.html
apps:
Roam: https://news.ycombinator.com/item?id=21440289
intriguing but probably not appropriate for my needs: https://www.sophya.ai/
Inkdrop: https://news.ycombinator.com/item?id=20103589
Joplin: https://news.ycombinator.com/item?id=15815040
https://news.ycombinator.com/item?id=21555238
https://wreeto.com/
Leo Editor (combines tree outlining w/ literate programming/scripting, I think?): https://news.ycombinator.com/item?id=17769892
Frame: https://news.ycombinator.com/item?id=18760079
https://www.reddit.com/r/TheMotte/comments/cb18sy/anyone_use_a_personal_wiki_software_to_catalog/
https://archive.is/xViTY
Notion: https://news.ycombinator.com/item?id=18904648
https://www.reddit.com/r/slatestarcodex/comments/ap437v/modified_cornell_method_the_optimal_notetaking/
https://archive.is/e9oHu
https://www.reddit.com/r/slatestarcodex/comments/bt8a1r/im_about_to_start_a_one_month_journaling_test/
https://www.reddit.com/r/slatestarcodex/comments/9cot3m/question_how_do_you_guys_learn_things/
https://archive.is/HUH8V
https://www.reddit.com/r/slatestarcodex/comments/d7bvcp/how_to_read_a_book_for_understanding/
https://archive.is/VL2mi
Anki:
https://www.reddit.com/r/Anki/comments/as8i4t/use_anki_for_technical_books/
https://www.freecodecamp.org/news/how-anki-saved-my-engineering-career-293a90f70a73/
https://www.reddit.com/r/slatestarcodex/comments/ch24q9/anki_is_it_inferior_to_the_3x5_index_card_an/
https://archive.is/OaGc5
maybe not the best source for a review/advice
interesting comment(s) about tree outliners and spreadsheets: https://news.ycombinator.com/item?id=21170434
tablet:
https://www.inkandswitch.com/muse-studio-for-ideas.html
https://www.inkandswitch.com/capstone-manuscript.html
https://news.ycombinator.com/item?id=20255457
hn
discussion
recommendations
software
tools
desktop
app
notetaking
exocortex
wkfly
wiki
productivity
multi
comparison
crosstab
properties
applicability-prereqs
nlp
info-foraging
chart
webapp
reference
q-n-a
retention
workflow
reddit
social
ratty
ssc
learning
studying
commentary
structure
thinking
network-structure
things
collaboration
ocr
trees
graphs
LaTeX
search
todo
project
money-for-time
synchrony
pinboard
state
duplication
worrydream
simplification-normalization
links
minimalism
design
neurons
ai-control
openai
miri-cfar
parsimony
intricacy
When is proof by contradiction necessary? | Gowers's Weblog
nibble org:bleg gowers mathtariat math proofs contradiction volo-avolo structure math.CA math.NT algebra parsimony elegance minimalism efficiency technical-writing necessity-sufficiency degrees-of-freedom simplification-normalization
8 weeks ago by nhaliday
Don't ask if a monorepo is good for you – ask if you're good enough for a monorepo
techtariat git vcs best-practices programming engineering collaboration contrarianism debate google facebook quality code-organizing dan-luu minimalism working-stiff shipping cost-benefit workflow coupling-cohesion direction worse-is-better/the-right-thing whole-partial-many number
11 weeks ago by nhaliday
Unix philosophy - Wikipedia
august 2019 by nhaliday
1. Make each program do one thing well. To do a new job, build afresh rather than complicate old programs by adding new "features".
2. Expect the output of every program to become the input to another, as yet unknown, program. Don't clutter output with extraneous information. Avoid stringently columnar or binary input formats. Don't insist on interactive input.
3. Design and build software, even operating systems, to be tried early, ideally within weeks. Don't hesitate to throw away the clumsy parts and rebuild them.
4. Use tools in preference to unskilled help to lighten a programming task, even if you have to detour to build the tools and expect to throw some of them out after you've finished using them.
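[ed.: a minimal sketch of rules 1 and 2: one small program that does one thing, emitting plain-text output that any as-yet-unknown program can consume.]
#!/usr/bin/env python3
# freq.py: do one thing well (rule 1): count word frequencies on stdin.
# Output is bare "count word" lines (rule 2): no headers, no decoration,
# so it composes with unknown downstream programs, e.g.:
#   cat *.txt | ./freq.py | head
import sys
from collections import Counter

counts = Counter(word for line in sys.stdin for word in line.split())
for word, n in counts.most_common():
    print(n, word)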
wiki
concept
philosophy
lens
ideas
design
system-design
programming
engineering
systems
unix
subculture
composition-decomposition
coupling-cohesion
metabuch
skeleton
hi-order-bits
summary
list
top-n
quotes
aphorism
minimalism
minimum-viable
best-practices
intricacy
parsimony
protocol-metadata
Three best practices for building successful data pipelines - O'Reilly Media
august 2019 by nhaliday
Drawn from their experiences and my own, I’ve identified three key areas that are often overlooked in data pipelines, and those are making your analysis:
1. Reproducible
2. Consistent
3. Productionizable
...
Science that cannot be reproduced by an external third party is just not science — and this does apply to data science. One of the benefits of working in data science is the ability to apply the existing tools from software engineering. These tools let you isolate all the dependencies of your analyses and make them reproducible.
Dependencies fall into three categories:
1. Analysis code ...
2. Data sources ...
3. Algorithmic randomness ...
...
Establishing consistency in data
...
There are generally two ways of establishing the consistency of data sources. The first is by checking in all code and data into a single revision control repository. The second method is to reserve source control for code and build a pipeline that explicitly depends on external data being in a stable, consistent format and location.
Checking data into version control is generally considered verboten for production software engineers, but it has a place in data analysis. For one thing, it makes your analysis very portable by isolating all dependencies into source control. Here are some conditions under which it makes sense to have both code and data in source control:
Small data sets ...
Regular analytics ...
Fixed source ...
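[ed.: a hedged sketch of the second method: code lives in source control while external data is pinned to a known state by checksum. The manifest file and its JSON shape are invented for illustration.]
# Pin external data dependencies by content hash so the pipeline fails
# loudly if a supposedly stable upstream file silently changes.
# manifest.json (hypothetical): {"events.csv": "<sha256 hex digest>", ...}
import hashlib
import json
import pathlib

def verify_inputs(manifest_path, data_dir):
    manifest = json.loads(pathlib.Path(manifest_path).read_text())
    for name, expected in manifest.items():
        actual = hashlib.sha256((pathlib.Path(data_dir) / name).read_bytes()).hexdigest()
        if actual != expected:
            raise RuntimeError(f"{name} changed: expected {expected[:12]}..., got {actual[:12]}...")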
Productionizability: Developing a common ETL
...
1. Common data format ...
2. Isolating library dependencies ...
https://blog.koresoftware.com/blog/etl-principles
Rigorously enforce the idempotency constraint
For efficiency, seek to load data incrementally
Always ensure that you can efficiently process historic data
Partition ingested data at the destination
Rest data between tasks
Pool resources for efficiency
Store all metadata together in one place
Manage login details in one place
Specify configuration details once
Parameterize sub flows and dynamically run tasks where possible
Execute conditionally
Develop your own workflow framework and reuse workflow components
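[ed.: a toy sketch of the first few principles above (idempotency, incremental loads, partitioned destination); the events table and its schema are made up.]
# Idempotent, incremental load: each run replaces exactly one day's
# partition inside a single transaction, so re-running a failed or
# duplicated job can never double-count rows.
import sqlite3

def load_day(conn, day, rows):
    with conn:  # one transaction: the delete and insert commit together or not at all
        conn.execute("DELETE FROM events WHERE day = ?", (day,))
        conn.executemany("INSERT INTO events (day, user, value) VALUES (?, ?, ?)", rows)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (day TEXT, user TEXT, value REAL)")
rows = [("2019-11-01", "alice", 1.0), ("2019-11-01", "bob", 2.0)]
load_day(conn, "2019-11-01", rows)
load_day(conn, "2019-11-01", rows)  # safe to re-run: same result as running once
assert conn.execute("SELECT COUNT(*) FROM events").fetchone()[0] == 2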
more focused on details of specific technologies:
https://medium.com/@rchang/a-beginners-guide-to-data-engineering-part-i-4227c5c457d7
https://www.cloudera.com/documentation/director/cloud/topics/cloud_de_best_practices.html
techtariat
org:com
best-practices
engineering
code-organizing
machine-learning
data-science
yak-shaving
nitty-gritty
workflow
config
vcs
replication
homo-hetero
multi
org:med
design
system-design
links
shipping
minimalism
volo-avolo
causation
random
invariance
structure
arrows
protocol-metadata
interface-compatibility
Less is exponentially more
june 2019 by nhaliday
https://news.ycombinator.com/item?id=16548684
https://news.ycombinator.com/item?id=6417319
https://news.ycombinator.com/item?id=4158865
https://aras-p.info/blog/2018/12/28/Modern-C-Lamentations/
https://thephd.github.io/perspective-standardization-in-2018
https://sean-parent.stlab.cc/2018/12/30/cpp-ruminations.html
http://ericniebler.com/2018/12/05/standard-ranges/
techtariat
rsc
worse-is-better/the-right-thing
blowhards
diogenes
reflection
rhetoric
c(pp)
systems
programming
pls
plt
types
thinking
engineering
nitty-gritty
stories
stock-flow
network-structure
arrows
composition-decomposition
comparison
jvm
golang
degrees-of-freedom
roots
performance
hn
commentary
multi
ideology
intricacy
parsimony
minimalism
tradeoffs
impetus
design
google
python
cracker-prog
aphorism
science
critique
classification
characterization
examples
subculture
culture
grokkability
incentives
interests
latency-throughput
grokkability-clarity
Boring languages
june 2019 by nhaliday
Choose Boring Technology: http://boringtechnology.club/
https://news.ycombinator.com/item?id=20323246
techtariat
dan-luu
list
links
examples
programming
engineering
pls
contrarianism
worse-is-better/the-right-thing
regularizer
hardware
c(pp)
os
dbs
caching
editors
desktop
terminal
git
vcs
yak-shaving
huge-data-the-biggest
debate
critique
jvm
rust
ocaml-sml
dotnet
top-n
tradeoffs
cost-benefit
pragmatic
ubiquity
multi
hn
commentary
slides
nitty-gritty
carmack
shipping
working-stiff
tech
frontier
uncertainty
debugging
correctness
measure
comparison
best-practices
software
intricacy
degrees-of-freedom
minimalism
graphs
analogy
optimization
models
thinking
prioritizing
ecosystem
attention
bounded-cognition
tech-infrastructure
cynicism-idealism
I Want a New Drug | West Hunter
february 2017 by nhaliday
Big pharma has taken a new course over the past few years. In the past, most useful drugs originated in some kind of living organism – penicillin, quinine, insulin, etc etc. Nowadays, big pharmaceutical companies use combinatorial chemistry and computer modeling. Merck has sold off its biological-products research arm. This new approach, combined with doubled spending on drug R&D, has been a resounding failure. The rate of development of fundamentally new drugs – ‘new molecular entities’ – is running about 40% of that seen in the 1970s. Since big pharma makes its money from drugs that are still on patent, this slowed innovation is a real threat to their bottom line.
...
I think that this is an instance of a more general trend: often a modern, advanced approach shows up, and it persists long after it’s been shown to be a miserable failure. You can see some of the reasons why: the people trained in the new technique would lose out if it were abandoned. Hard to imagine combinatorial chemists rooting around in a garbage can looking for moldy fruit.
west-hunter
discussion
pharma
medicine
FDA
randy-ayndy
regularizer
critique
minimalism
bio
nature
low-hanging
drugs
error
scitariat
info-dynamics
innovation
ideas
discovery
meta:medicine
stagnation
parasites-microbiome
the-trenches
alt-inst
dirty-hands
regulation
civil-liberty
proposal
corporation
fashun
prioritizing
Kevin Simler on Twitter: ""Spirituality of science" open-ended tweetstorm, 0/???"
postrat simler twitter discussion social art list video science nature new-religion meaningness aesthetics minimalism frisson virtu :) sanctity-degradation deep-materialism theos beauty elegance
december 2016 by nhaliday
A Bad Carver
november 2016 by nhaliday
decondensation and its malcontents
- condensation provided plausible deniability (so some people benefited from it)
- some recondensation happening, partially facilitated by technology
good take: https://twitter.com/lumenphosphor/status/794554089728249857
Personal image for the dangers of decondensation is "hidden micronutrients" - what if Soylent doesn't *really* contain everything you need?
postrat
carcinisation
essay
culture
society
trends
insight
things
aesthetics
vgr
meaningness
history
antiquity
🦀
water
unintended-consequences
2016
minimalism
multi
twitter
social
commentary
Beauty is Fit | Carcinisation
november 2016 by nhaliday
Cage’s music is an example of the tendency for high-status human domains to ignore fit with human nervous systems in favor of fit with increasingly rarified abstract cultural systems. Human nervous systems are limited. Representation of existing forms, and generating pleasure and poignancy in human minds, are often disdained as solved problems. Domains unhinged from the desires and particularities of human nervous systems and bodies become inhuman; human flourishing, certainly, is not a solved problem. However, human nervous systems themselves create and seek out “fit” of the more abstract sort; the domain of abstract systems is part of the natural human environment, and the forms that exist there interact with humans as symbiotes. Theorems and novels and money and cathedrals rely on humans for reproduction, like parasites, but offer many benefits to humans in exchange. Humans require an environment that fits their nervous systems, but part of the definition of “fit” in this case is the need for humans to feel that they are involved in something greater (and perhaps more abstract) than this “animal” kind of fit.
essay
reflection
carcinisation
🦀
aesthetics
postrat
thinking
culture
insight
operational
minimalism
parsimony
beauty
elegance
Ra | Otium
october 2016 by nhaliday
Ra = smooth, blank, prestigeful (or maybe just statusful) authority
--
Vagueness, mental fog, “underconfidence”, avoidance, evasion, blanking out, etc. are hallmarks of Ra. If cornered, a person embodying Ra will abruptly switch from blurry vagueness to anger and nihilism.
Ra is involved in the sense of “everyone but me is in on the joke, there is a Thing that I don’t understand myself but is the most important Thing, and I must approximate or imitate or cargo-cult the Thing, and anybody who doesn’t is bad.”
Ra causes persistent brain fog or confusion, especially around economic thinking or cost-benefit analysis or quantitative estimates.
Ra causes a disinclination to express oneself. An impression that a person who is unknown or mysterious is more attractive or favorably received than a person who is an “open book.”
Ra is fake Horus.
things
thinking
mystic
postrat
status
signaling
essay
civilization
society
power
insight
hmm
metabuch
🦀
hidden-motives
leviathan
models
2016
core-rats
minimalism
frisson
ratty
vague
cost-benefit
schelling
order-disorder
emotion
info-dynamics
elegance
judgement
Advantages of monolithic version control
programming engineering pragmatic vcs dan-luu summary rhetoric analysis techtariat minimalism shipping working-stiff code-organizing contrarianism google facebook best-practices cost-benefit workflow collaboration coupling-cohesion worse-is-better/the-right-thing whole-partial-many number
october 2016 by nhaliday
In praise of choicelessness | Meaningness
culture anthropology society chapman meaningness water embedded-cognition walls minimalism individualism-collectivism analytical-holistic context homo-hetero info-dynamics social-capital theos religion buddhism asia developing-world psycho-atoms
october 2016 by nhaliday
Epistemic learned helplessness - Jackdaws love my big sphinx of quartz
october 2016 by nhaliday
I don’t think I’m overselling myself too much to expect that I could argue circles around the average uneducated person. Like I mean that on most topics, I could demolish their position and make them look like an idiot. Reduce them to some form of “Look, everything you say fits together and I can’t explain why you’re wrong, I just know you are!” Or, more plausibly, “Shut up I don’t want to talk about this!”
And there are people who can argue circles around me. Maybe not on every topic, but on topics where they are experts and have spent their whole lives honing their arguments. When I was young I used to read pseudohistory books; Immanuel Velikovsky’s Ages in Chaos is a good example of the best this genre has to offer. I read it and it seemed so obviously correct, so perfect, that I could barely bring myself to bother to search out rebuttals.
And then I read the rebuttals, and they were so obviously correct, so devastating, that I couldn’t believe I had ever been so dumb as to believe Velikovsky.
And then I read the rebuttals to the rebuttals, and they were so obviously correct that I felt silly for ever doubting.
And so on for several more iterations, until the labyrinth of doubt seemed inescapable. What finally broke me out wasn’t so much the lucidity of the consensus view so much as starting to sample different crackpots. Some were almost as bright and rhetorically gifted as Velikovsky, all presented insurmountable evidence for their theories, and all had mutually exclusive ideas. After all, Noah’s Flood couldn’t have been a cultural memory both of the fall of Atlantis and of a change in the Earth’s orbit, let alone of a lost Ice Age civilization or of megatsunamis from a meteor strike. So given that at least some of those arguments are wrong and all seemed practically proven, I am obviously just gullible in the field of ancient history. Given a total lack of independent intellectual steering power and no desire to spend thirty years building an independent knowledge base of Near Eastern history, I choose to just accept the ideas of the prestigious people with professorships in Archaeology, rather than those of the universally reviled crackpots who write books about Venus being a comet.
You could consider this a form of epistemic learned helplessness, where I know any attempt to evaluate the arguments is just going to be a bad idea so I don’t even try. If you have a good argument that the Early Bronze Age worked completely differently from the way mainstream historians believe, I just don’t want to hear about it. If you insist on telling me anyway, I will nod, say that your argument makes complete sense, and then totally refuse to change my mind or admit even the slightest possibility that you might be right.
(This is the correct Bayesian action: if I know that a false argument sounds just as convincing as a true argument, argument convincingness provides no evidence either way. I should ignore it and stick with my prior.)
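[ed.: the Bayes step, spelled out. With hypothesis H and evidence C = "the argument sounds convincing", posterior odds are prior odds times the likelihood ratio:

\frac{P(H \mid C)}{P(\neg H \mid C)} = \frac{P(C \mid H)}{P(C \mid \neg H)} \cdot \frac{P(H)}{P(\neg H)}

If false arguments sound exactly as convincing as true ones, then P(C \mid H) = P(C \mid \neg H), the likelihood ratio is 1, and posterior odds equal prior odds: stick with the prior.]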
...
Even the smartest people I know have a commendable tendency not to take certain ideas seriously. Bostrom’s simulation argument, the anthropic doomsday argument, Pascal’s Mugging – I’ve never heard anyone give a coherent argument against any of these, but I’ve also never met anyone who fully accepts them and lives life according to their implications.
A friend tells me of a guy who once accepted fundamentalist religion because of Pascal’s Wager. I will provisionally admit that this person “takes ideas seriously”. Everyone else gets partial credit, at best.
...
Responsible doctors are at the other end of the spectrum from terrorists here. I once heard someone rail against how doctors totally ignored all the latest and most exciting medical studies. The same person, practically in the same breath, then railed against how 50% to 90% of medical studies are wrong. These two observations are not unrelated. Not only are there so many terrible studies, but pseudomedicine (not the stupid homeopathy type, but the type that links everything to some obscure chemical on an out-of-the-way metabolic pathway) has, for me, proven much like pseudohistory – unless I am an expert in that particular subsubfield of medicine, it can sound very convincing even when it’s very wrong.
The medical establishment offers a shiny tempting solution. First, a total unwillingness to trust anything, no matter how plausible it sounds, until it’s gone through an endless cycle of studies and meta-analyses. Second, a bunch of Institutes and Collaborations dedicated to filtering through all these studies and analyses and telling you what lessons you should draw from them.
I’m glad that some people never develop epistemic learned helplessness, or develop only a limited amount of it, or only in certain domains. It seems to me that although these people are more likely to become terrorists or Velikovskians or homeopaths, they’re also the only people who can figure out if something basic and unquestionable is wrong, and make this possibility well-known enough that normal people start becoming willing to consider it.
But I’m also glad epistemic learned helplessness exists. It seems like a pretty useful social safety valve most of the time.
yvain
essay
thinking
rationality
philosophy
reflection
ratty
ssc
epistemic
🤖
2013
minimalism
intricacy
p:null
info-dynamics
truth
reason
s:**
contrarianism
subculture
inference
bayesian
priors-posteriors
debate
rhetoric
pessimism
nihil
spreading
flux-stasis
robust
parsimony
dark-arts
illusion
What's worked in computer science
reflection engineering dan-luu list analysis history programming formal-methods carmack contrarianism pragmatic comparison len:short 🖥 2015 techtariat evidence-based data empirical shipping security distributed concurrency internet web pls plt functional working-stiff best-practices correctness worse-is-better/the-right-thing quotes aphorism critique essay papers ubiquity cost-benefit s:** types scala ocaml-sml haskell dotnet minimalism intricacy static-dynamic protocol-metadata big-picture methodology
october 2016 by nhaliday
Overcoming Bias : All Is Simple Parts Interacting Simply
physics thinking synthesis hanson idk len:long essay philosophy neuro dennett new-religion map-territory models occam minimalism big-picture analytical-holistic parsimony metameta ratty structure complex-systems reduction detail-architecture cybernetics lens emergent composition-decomposition elegance coupling-cohesion
september 2016 by nhaliday
My Beautiful Bubble, Bryan Caplan | EconLog | Library of Economics and Liberty
september 2016 by nhaliday
funny rejoinder from Sailer:
Of course, if there were a big war, it would be nice to be defended by all those dreary Americans you despise.
And, the irony is, they'd do it, too, just because you are an American.
reflection
society
community
economics
contrarianism
lifestyle
philosophy
len:short
econotariat
cracker-econ
org:econlib
network-structure
minimalism
-_-
polarization
individualism-collectivism
isteveish
subculture
nationalism-globalism
On Refusing to Read - The Chronicle of Higher Education
september 2016 by nhaliday
The activity of nonreading is something that scholars rarely discuss. When they — or others whose identities are bound up with books — do so, the discussions tend to have a shamefaced quality. Blame "cultural capital" — the sense of superiority associated with laying claim to books that mark one’s high social status. More entertainingly, blame Humiliation, the delicious game that a diabolical English professor invents in David Lodge’s 1975 academic satire, Changing Places. In a game of Humiliation, players win points for not having read canonical books that everyone else in the game has read. One hapless junior faculty member in the novel wins a departmental round but loses his tenure case. In real life, the game has been most happily played by the tenured professor secure in his reputation. Changing Places had apparently inspired my adviser’s confession to someone at some point, and the information then wound through the gossip mill to reach me, standing around in the mid-1990s with a beer, trying to hide my own growing list of unread books.
Consider, however, the fact that, as Matthew Wilkens points out, in 2011 more than 50,000 new novels were published in the United States alone. "The problem of abundance" is a problem for every person who has an internet connection, and it is a professional problem in every corner of literary study. Nonreading, seen in this light, is not a badge of shame, but the way of the future. Franco Moretti has been making this point for years about the literary production of the 18th and 19th centuries, inspiring a few labs-worth of scholars to turn to machine reading — for example, using algorithms to find patterns in a particular era’s literary works. This is a form of not reading that holds tight to the dream that our literary scholarship should be based on the activity of reading as much as humanly or inhumanly possible.
academia
literature
learning
attention
contrarianism
essay
rhetoric
len:long
org:mag
org:edu
minimalism
news
signal-noise
serene
culture
time-use
inhibition
info-foraging
prioritizing
explore-exploit
The Awe Delusion
june 2016 by nhaliday
Art is a technology. If you did a Casablanca / Law & Order double feature you might notice that although Casablanca perhaps has more ‘artistic value’ (that horribly vague phrase), Law & Order tells its stories with a mind-boggling efficiency that vastly outstrips the former. Some time after 1960 filmmakers learned how to tell more story with less.
thinking
art
philosophy
vgr
postrat
literature
insight
mystic
essay
hmm
aesthetics
len:short
🦀
minimalism
beauty
parsimony
elegance
Why Google Stores Billions of Lines of Code in a Single Repository | July 2016 | Communications of the ACM
engineering google vcs collaboration best-practices pragmatic techtariat scaling-tech carmack contrarianism code-organizing shipping working-stiff cost-benefit minimalism programming workflow coupling-cohesion worse-is-better/the-right-thing whole-partial-many number
june 2016 by nhaliday
Rebooting Life (Part 3)
april 2016 by nhaliday
A Dark Room: The Best-Selling Game That No One Can Explain: http://www.newyorker.com/tech/elements/a-dark-room-the-best-selling-game-that-no-one-can-explain
https://www.reddit.com/r/startups/comments/4f74dv/quit_my_full_time_corporate_job_built_an_ios_game/
reflection
microbiz
productivity
stories
explanation
techtariat
backup
games
indie
org:mag
news
minimalism
mobile
apple
business
entrepreneurialism
reddit
social
discussion
Rob Pike: Notes on Programming in C
august 2014 by nhaliday
Issues of typography
...
Sometimes they care too much: pretty printers mechanically produce pretty output that accentuates irrelevant detail in the program, which is as sensible as putting all the prepositions in English text in bold font. Although many people think programs should look like the Algol-68 report (and some systems even require you to edit programs in that style), a clear program is not made any clearer by such presentation, and a bad program is only made laughable.
Typographic conventions consistently held are important to clear presentation, of course - indentation is probably the best known and most useful example - but when the ink obscures the intent, typography has taken over.
...
Naming
...
Finally, I prefer minimum-length but maximum-information names, and then let the context fill in the rest. Globals, for instance, typically have little context when they are used, so their names need to be relatively evocative. Thus I say maxphysaddr (not MaximumPhysicalAddress) for a global variable, but np not NodePointer for a pointer locally defined and used. This is largely a matter of taste, but taste is relevant to clarity.
...
Pointers
C is unusual in that it allows pointers to point to anything. Pointers are sharp tools, and like any such tool, used well they can be delightfully productive, but used badly they can do great damage (I sunk a wood chisel into my thumb a few days before writing this). Pointers have a bad reputation in academia, because they are considered too dangerous, dirty somehow. But I think they are powerful notation, which means they can help us express ourselves clearly.
Consider: When you have a pointer to an object, it is a name for exactly that object and no other.
...
Comments
A delicate matter, requiring taste and judgement. I tend to err on the side of eliminating comments, for several reasons. First, if the code is clear, and uses good type names and variable names, it should explain itself. Second, comments aren't checked by the compiler, so there is no guarantee they're right, especially after the code is modified. A misleading comment can be very confusing. Third, the issue of typography: comments clutter code.
But I do comment sometimes. Almost exclusively, I use them as an introduction to what follows.
...
Complexity
Most programs are too complicated - that is, more complex than they need to be to solve their problems efficiently. Why? Mostly it's because of bad design, but I will skip that issue here because it's a big one. But programs are often complicated at the microscopic level, and that is something I can address here.
Rule 1. You can't tell where a program is going to spend its time. Bottlenecks occur in surprising places, so don't try to second guess and put in a speed hack until you've proven that's where the bottleneck is.
Rule 2. Measure. Don't tune for speed until you've measured, and even then don't unless one part of the code overwhelms the rest.
Rule 3. Fancy algorithms are slow when n is small, and n is usually small. Fancy algorithms have big constants. Until you know that n is frequently going to be big, don't get fancy. (Even if n does get big, use Rule 2 first.) For example, binary trees are always faster than splay trees for workaday problems.
Rule 4. Fancy algorithms are buggier than simple ones, and they're much harder to implement. Use simple algorithms as well as simple data structures.
The following data structures are a complete list for almost all practical programs:
array
linked list
hash table
binary tree
Of course, you must also be prepared to collect these into compound data structures. For instance, a symbol table might be implemented as a hash table containing linked lists of arrays of characters.
Rule 5. Data dominates. If you've chosen the right data structures and organized things well, the algorithms will almost always be self-evident. Data structures, not algorithms, are central to programming. (See The Mythical Man-Month: Essays on Software Engineering by F. P. Brooks, page 102.)
Rule 6. There is no Rule 6.
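[ed.: the symbol-table example above, transposed into Python so the compound structure is visible: an explicit bucket array stands in for the hash table, and per-bucket lists stand in for the linked lists of character arrays.]
# A compound structure per Pike's example: hash table -> list -> string.
# Python's dict does all of this implicitly; spelling it out shows the shape.
NBUCKETS = 64

def make_table():
    return [[] for _ in range(NBUCKETS)]  # fixed array of empty chains

def insert(table, name):
    chain = table[hash(name) % NBUCKETS]
    if name not in chain:
        chain.append(name)

def lookup(table, name):
    return name in table[hash(name) % NBUCKETS]

symbols = make_table()
insert(symbols, "maxphysaddr")
assert lookup(symbols, "maxphysaddr") and not lookup(symbols, "np")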
Programming with data.
...
One of the reasons data-driven programs are not common, at least among beginners, is the tyranny of Pascal. Pascal, like its creator, believes firmly in the separation of code and data. It therefore (at least in its original form) has no ability to create initialized data. This flies in the face of the theories of Turing and von Neumann, which define the basic principles of the stored-program computer. Code and data are the same, or at least they can be. How else can you explain how a compiler works? (Functional languages have a similar problem with I/O.)
Function pointers
Another result of the tyranny of Pascal is that beginners don't use function pointers. (You can't have function-valued variables in Pascal.) Using function pointers to encode complexity has some interesting properties.
Some of the complexity is passed to the routine pointed to. The routine must obey some standard protocol - it's one of a set of routines invoked identically - but beyond that, what it does is its business alone. The complexity is distributed.
There is this idea of a protocol, in that all functions used similarly must behave similarly. This makes for easy documentation, testing, growth and even making the program run distributed over a network - the protocol can be encoded as remote procedure calls.
I argue that clear use of function pointers is the heart of object-oriented programming. Given a set of operations you want to perform on data, and a set of data types you want to respond to those operations, the easiest way to put the program together is with a group of function pointers for each type. This, in a nutshell, defines class and method. The O-O languages give you more of course - prettier syntax, derived types and so on - but conceptually they provide little extra.
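[ed.: Pike's claim in miniature: a "class" as nothing but a group of function pointers per type, with dispatch as one table lookup. The shapes example is mine, not his.]
# Function-pointer dispatch: for each data type, a table of operations.
# This is class-and-method with no language support beyond first-class functions.
import math

def circle_area(shape):
    return math.pi * shape["r"] ** 2

def square_area(shape):
    return shape["side"] ** 2

AREA = {"circle": circle_area, "square": square_area}  # the "method table"

def area(shape):
    return AREA[shape["kind"]](shape)  # dispatch is one table lookup

print(area({"kind": "circle", "r": 2.0}))   # 12.566...
print(area({"kind": "square", "side": 3}))  # 9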
...
Include files
Simple rule: include files should never include include files. If instead they state (in comments or implicitly) what files they need to have included first, the problem of deciding which files to include is pushed to the user (programmer) but in a way that's easy to handle and that, by construction, avoids multiple inclusions. Multiple inclusions are a bane of systems programming. It's not rare to have files included five or more times to compile a single C source file. The Unix /usr/include/sys stuff is terrible this way.
There's a little dance involving #ifdef's that can prevent a file being read twice, but it's usually done wrong in practice - the #ifdef's are in the file itself, not the file that includes it. The result is often thousands of needless lines of code passing through the lexical analyzer, which is (in good compilers) the most expensive phase.
Just follow the simple rule.
cf https://stackoverflow.com/questions/1101267/where-does-the-compiler-spend-most-of-its-time-during-parsing
First, I don't think it actually is true: in many compilers, most time is not spent lexing source code. For example, in C++ compilers (e.g. g++), most time is spent in semantic analysis, in particular in overload resolution (trying to figure out which implicit template instantiations to perform). Also, in C and C++, most time is often spent in optimization (building graph representations of individual functions or of the whole translation unit, and then running long algorithms on these graphs).
When comparing lexical and syntactic analysis, it may indeed be the case that lexical analysis is more expensive. Both are driven by state machines, i.e. there is a fixed number of actions per element, but the number of elements is much larger in lexical analysis (characters) than in syntactic analysis (tokens).
https://news.ycombinator.com/item?id=7728207
programming
systems
philosophy
c(pp)
summer-2014
intricacy
engineering
rhetoric
contrarianism
diogenes
parsimony
worse-is-better/the-right-thing
data-structures
list
algorithms
stylized-facts
essay
ideas
performance
functional
state
pls
oop
gotchas
blowhards
duplication
compilers
syntax
lexical
checklists
metabuch
lens
notation
thinking
neurons
guide
pareto
heuristic
time
cost-benefit
multi
q-n-a
stackex
plt
hn
commentary
minimalism
techtariat
rsc
writing
technical-writing
cracker-prog
code-organizing
grokkability
protocol-metadata
direct-indirect
grokkability-clarity
latency-throughput
august 2014 by nhaliday