nhaliday + recommendations   264

Ask HN: What's a promising area to work on? | Hacker News
hn  discussion  q-n-a  ideas  impact  trends  the-bones  speedometer  technology  applications  tech  cs  programming  list  top-n  recommendations  lens  machine-learning  deep-learning  security  privacy  crypto  software  hardware  cloud  biotech  CRISPR  bioinformatics  biohacking  blockchain  cryptocurrency  crypto-anarchy  healthcare  graphics  SIGGRAPH  vr  automation  universalism-particularism  expert-experience  reddit  social  arbitrage  supply-demand  ubiquity  cost-benefit  compensation  chart  career  planning  strategy  long-term  advice  sub-super  commentary  rhetoric  org:com  techtariat  human-capital  prioritizing  tech-infrastructure  working-stiff  data-science 
22 hours ago by nhaliday
How I Choose What To Read — David Perell
unaffiliated  advice  reflection  checklists  metabuch  learning  studying  info-foraging  skeleton  books  heuristic  contrarianism  ubiquity  time  track-record  thinking  blowhards  bret-victor  worrydream  list  top-n  recommendations  arbitrage  trust  aphorism 
yesterday by nhaliday
Ask HN: How do you manage your one-man project? | Hacker News
The main thing is to not fall into the "productivity porn" trap of trying to find the best tool instead of actually getting stuff done - when something simple is more than enough.
hn  discussion  productivity  workflow  exocortex  management  prioritizing  parsimony  recommendations  software  desktop  app  webapp  notetaking  discipline  q-n-a 
27 days ago by nhaliday
Ask HN: Learning modern web design and CSS | Hacker News
Ask HN: Best way to learn HTML and CSS for web design?: https://news.ycombinator.com/item?id=11048409
Ask HN: How to learn design as a hacker?: https://news.ycombinator.com/item?id=8182084

Ask HN: How to learn front-end beyond the basics?: https://news.ycombinator.com/item?id=19468043
Ask HN: What is the best JavaScript stack for a beginner to learn?: https://news.ycombinator.com/item?id=8780385
Free resources for learning full-stack web development: https://news.ycombinator.com/item?id=13890114

Ask HN: What is essential reading for learning modern web development?: https://news.ycombinator.com/item?id=14888251
Ask HN: A Syllabus for Modern Web Development?: https://news.ycombinator.com/item?id=2184645

Ask HN: Modern day web development for someone who last did it 15 years ago: https://news.ycombinator.com/item?id=20656411
hn  discussion  design  form-design  frontend  web  tutorial  links  recommendations  init  pareto  efficiency  minimum-viable  move-fast-(and-break-things)  advice  roadmap  multi  hacker  games  puzzles  learning  guide  dynamic  retention  DSL  working-stiff  q-n-a  javascript  frameworks  ecosystem  libraries  client-server  hci  ux  books  chart 
28 days ago by nhaliday
Ask HN: Favorite note-taking software? | Hacker News
Ask HN: What is your ideal note-taking software and/or hardware?: https://news.ycombinator.com/item?id=13221158

my wishlist as of 2019:
- web + desktop macOS + mobile iOS (at least viewing on the last but ideally also editing)
- sync across all those
- open-source data format that's easy to manipulate for scripting purposes
- flexible organization: mostly tree hierarchical (subsuming linear/unorganized) but with the option for directed (acyclic) graph (possibly a second layer of structure/linking)
- can store plain text, LaTeX, diagrams, and raster/vector images (video prob not necessary except as links to elsewhere)
- full-text search
- somehow digest/import data from Pinboard, Workflowy, Papers 3/Bookends, and Skim, ideally absorbing most of their functionality
- so, eg, track notes/annotations side-by-side w/ original PDF/DjVu/ePub documents (to replace Papers3/Bookends/Skim), and maybe web pages too (to replace Pinboard)
- OCR of handwritten notes (how to handle equations/diagrams?)
- various forms of NLP analysis of everything (topic models, clustering, etc)
- maybe version control (less important than export)

- Evernote prob ruled out due to heavy use of proprietary data formats (unless I can find some way to export with tolerably clean output)
- Workflowy/Dynalist are good but only cover a subset of functionality I want
- org-mode doesn't interact w/ mobile well (and I haven't evaluated it in detail otherwise)
- TiddlyWiki/Zim are in the running, but not sure about mobile
- idk about vimwiki but I'm not that wedded to vim and it seems less widely used than org-mode/TiddlyWiki/Zim so prob pass on that
- Quiver/Joplin/Inkdrop look similar and cover a lot of bases, TODO: evaluate more
- Trilium looks especially promising, tho mobile is read-only; for macOS desktop see this: https://github.com/zadam/trilium/issues/511
- RocketBook is an interesting scanning/OCR solution but prob not sufficient due to proprietary data format
- TODO: many more candidates, eg, TreeSheets, Gingko, OneNote (macOS?...), Notion (proprietary data format...), Zotero, Nodebook (https://nodebook.io/landing), Polar (https://getpolarized.io), Roam (looks very promising)

Ask HN: What do you use for your personal note taking activity?: https://news.ycombinator.com/item?id=15736102

Ask HN: What are your note-taking techniques?: https://news.ycombinator.com/item?id=9976751

Ask HN: How do you take notes (useful note-taking strategies)?: https://news.ycombinator.com/item?id=13064215

Ask HN: How to get better at taking notes?: https://news.ycombinator.com/item?id=21419478

Ask HN: How did you build up your personal knowledge base?: https://news.ycombinator.com/item?id=21332957
nice comment from math guy on structure and difference between math and CS: https://news.ycombinator.com/item?id=21338628
useful comment collating related discussions: https://news.ycombinator.com/item?id=21333383
Designing a Personal Knowledge base: https://news.ycombinator.com/item?id=8270759
Ask HN: How to organize personal knowledge?: https://news.ycombinator.com/item?id=17892731
Do you use a personal 'knowledge base'?: https://news.ycombinator.com/item?id=21108527
Ask HN: How do you share/organize knowledge at work and life?: https://news.ycombinator.com/item?id=21310030

other stuff:
Tiago Forte: https://www.buildingasecondbrain.com

hn search: https://hn.algolia.com/?query=notetaking&type=story

Slant comparison commentary: https://news.ycombinator.com/item?id=7011281

good comparison of options in the comments here (and Trilium itself looks good): https://news.ycombinator.com/item?id=18840990



Roam: https://news.ycombinator.com/item?id=21440289

Inkdrop: https://news.ycombinator.com/item?id=20103589

Joplin: https://news.ycombinator.com/item?id=15815040

Frame: https://news.ycombinator.com/item?id=18760079

Notion: https://news.ycombinator.com/item?id=18904648

hn  discussion  recommendations  software  tools  desktop  app  notetaking  exocortex  wkfly  wiki  productivity  multi  comparison  crosstab  properties  applicability-prereqs  nlp  info-foraging  chart  webapp  reference  q-n-a  retention  workflow  reddit  social  ratty  ssc  learning  studying  commentary  structure  thinking  network-structure  things  collaboration  ocr  trees  graphs  LaTeX  search  todo  project  money-for-time  synchrony  pinboard  state  duplication  worrydream  simplification-normalization  links  minimalism  design  neurons  ai-control  openai  miri-cfar 
4 weeks ago by nhaliday
Choose the best - Slant
I've noticed I fairly often agree w/ the rankings from this (at least when they show up in my search results). more accurate than I would've expected
organization  community  aggregator  data  database  search  review  software  tools  devtools  app  recommendations  ranking  list  top-n  workflow  track-record  saas  tech-infrastructure  consumerism  hardware  sleuthin  judgement 
4 weeks ago by nhaliday
C++ IDE for Linux? - Stack Overflow
- Vim/Emacs + Unix/GNU tools,
- VSCode or Sublime
- CodeLite
- Netbeans
- Qt Creator
q-n-a  stackex  programming  c(pp)  devtools  tools  ide  software  recommendations  unix  linux 
9 weeks ago by nhaliday
[Tutorial] A way to Practice Competitive Programming : From Rating 1000 to 2400+ - Codeforces
this guy really didn't take that long to reach red..., as of today he's done 20 contests in 2y to my 44 contests in 7y (w/ a long break)...>_>

tho he has 3 times as many submissions as me. maybe he does a lot of virtual rounds?

some snippets from the PDF guide linked:
To be rated 1900, the following skills are needed:
- You know and can use major algorithms like these:
Brute force, DP, DFS, BFS, Dijkstra,
Binary Indexed Tree, nCr/nPr, mod inverse, bitmasks, binary search
- You can code faster (for example, 5 minutes for R1100 problems, 10 minutes for R1400 problems)

If you are not good at fast-coding and fast-debugging, you should solve AtCoder problems. Actually, and statistically, many Japanese coders are relatively good at fast-coding while not so good at solving difficult problems. I think that’s because of AtCoder.

I recommend solving problems C and D in AtCoder Beginner Contest. On average, if you can solve problem C of AtCoder Beginner Contest within 10 minutes and problem D within 20 minutes, you are Div1 in FastCodingForces :)


Interestingly, typical problems are concentrated in Div2-only round problems. If you are not good at Div2-only rounds, it is likely that you are not good at using typical algorithms, especially the 10 algorithms listed above.

If you can handle typical problems but are not good at solving above R1500 on Codeforces, you should begin TopCoder. This type of practice is effective for people who are good at Div.2-only rounds but not good at Div.1+Div.2 combined or Div.1+Div.2 separated rounds.

Sometimes, especially in Div1+Div2 rounds, some problems need mathematical concepts or thinking. Since there are a lot of problems which use them (and are also light on implementation!) in TopCoder, you should solve TopCoder problems.

I recommend solving the Div1Easy of the most recent 100 SRMs. But some problems are really difficult (e.g. even red-ranked coders could not solve them), so before you attempt one, you should check what percentage of people solved it. You can use https://competitiveprogramming.info/ to find this information.

To be rated 2200, the following skills are needed:
- You know and can use the 10 algorithms stated on p. 11, plus segment trees (including lazy propagation)
- You can solve problems very fast: for example, 5 mins for R1100, 10 mins for R1500, 15 mins for R1800, 40 mins for R2000.
- You have decent skills for mathematical thinking and analyzing problems
- A strong mentality: you can keep thinking about a solution for more than an hour, and you don't give up even if you are below average in Div1 in the middle of a contest

This is only my way to practice, but I did many virtual contests when I was rated 2000. Here, virtual contest does not mean “Virtual Participation” on Codeforces. It means choosing 4 or 5 problems whose difficulty is near your rating (for example, if you are rated 2000, choose R2000 problems on Codeforces) and solving them within 2 hours. You can use https://vjudge.net/. On this website, you can make virtual contests from problems on many online judges (e.g. AtCoder, Codeforces, Hackerrank, Codechef, POJ, ...).

If you cannot solve a problem within the virtual contest and cannot find the solution afterward, you should read the editorial. Google it. (e.g. if you want the editorial of Codeforces Round #556 (Div. 1), search “Codeforces Round #556 editorial” in Google.) There is one more important thing for gaining rating on Codeforces. To solve problems fast, you should equip a coding library (or template code). For example, I think that having segment tree, lazy segment tree, modint, FFT, and geometry libraries is very effective.

2200 to 2400:
Ratings 2200 and 2400 are actually very different ...

To be rated 2400, the following skills are needed:
- You should have the skills stated in the previous section (rating 2200)
- You should be able to solve difficult problems which are solved by fewer than 100 people in Div1 contests


At first, there are a lot of educational problems in AtCoder. I recommend solving problems E and F (especially 700-900 point problems) of AtCoder Regular Contest, especially ARC058-ARC090. Old AtCoder Regular Contests are balanced between “considering” and “typical” problems, but sadly, AtCoder Grand Contest and recent AtCoder Regular Contest problems are too biased toward “considering”, I think, so I don’t recommend them if your goal is gaining rating on Codeforces. (Though if you want to gain a rating above 2600, you should solve problems from AtCoder Grand Contest.)

For me, actually, after solving AtCoder Regular Contests, my average performance in CF virtual contest increased from 2100 to 2300 (I could not reach 2400 because start was early)

If you cannot solve a problem, I recommend giving up and reading the editorial on this schedule:
Point value:       600     700     800     900     1000+
CF rating:         R2000   R2200   R2400   R2600   R2800
Time to editorial: 40 min  50 min  60 min  70 min  80 min

If you solve AtCoder educational problems, your competitive programming skills will increase. But there is one more problem: without practical skills, your rating won't increase. So, you should do 50+ virtual participations (especially Div.1) on Codeforces. In virtual participation, you can learn how to compete as a purple/orange-ranked coder (e.g. strategy) and how to use the skills you learned in AtCoder in Codeforces contests. I strongly recommend reading the editorials of all problems except the too-difficult ones (e.g. fewer than 30 in-contest solvers) after the virtual contest. I also recommend writing reflections on strategy, lessons, and improvements in a notebook after reading the editorials.

In addition, about once a week, I recommend making time to think about a much more difficult problem (e.g. R2800 on Codeforces) for a couple of hours. If you cannot reach the solution after thinking for a couple of hours, I recommend reading the editorial, because you can learn a lot. Solving high-level problems may give you a chance to gain over 100 rating in a single contest, and can also help you solve easier problems faster.
oly  oly-programming  problem-solving  learning  practice  accretion  strategy  hmm  pdf  guide  reflection  advice  wire-guided  marginal  stylized-facts  speed  time  cost-benefit  tools  multi  sleuthin  review  comparison  puzzles  contest  aggregator  recommendations  objektbuch  time-use  growth  studying  🖥  👳  yoga 
august 2019 by nhaliday
Organizing complexity is the most important skill in software development | Hacker News
- John D. Cook

Organization is the hardest part for me personally in getting better as a developer. How to build a structure that is easy to change and extend. Any tips where to find good books or online sources?
hn  commentary  techtariat  reflection  lens  engineering  programming  software  intricacy  parsimony  structure  coupling-cohesion  composition-decomposition  multi  poast  books  recommendations  abstraction  complex-systems  system-design  design  code-organizing  human-capital 
july 2019 by nhaliday
Call graph - Wikipedia
I've found both static and dynamic versions useful (former mostly when I don't want to go thru pain of compiling something)

best options AFAICT:

C/C++ and maybe Go:
dynamic: https://github.com/gperftools/gperftools

static: https://github.com/Vermeille/clang-callgraph
I had to go through some extra pain to get this to work:
- if you use Homebrew LLVM (that's slightly incompatible w/ macOS c++filt, make sure to pass -n flag)
- similarly macOS sed needs two extra backslashes for each escape of the angle brackets

another option: doxygen

Go: https://stackoverflow.com/questions/31362332/creating-call-graph-in-golang
both static and dynamic in one tool

Java: https://github.com/gousiosg/java-callgraph
both static and dynamic in one tool

Python:
dynamic: https://github.com/daneads/pycallgraph2 and https://github.com/YannLuo/pycallgraph (more up-to-date forks of pycallgraph)
old docs: https://pycallgraph.readthedocs.io/en/master/

static: https://github.com/davidfraser/pyan

various: https://github.com/jrfonseca/gprof2dot

I believe all the dynamic tools listed here support weighting nodes and edges by CPU time/samples (inclusive and exclusive of descendants) and discrete calls. In the case of gperftools and the Java option you probably have to parse the output to get the latter, tho.

IIRC DTrace has probes for function entry/exit. So that's an option as well.
concept  wiki  reference  tools  devtools  graphs  trees  programming  code-dive  let-me-see  big-picture  libraries  software  recommendations  list  top-n  links  c(pp)  golang  python  javascript  jvm  stackex  q-n-a  howto  yak-shaving  visualization  dataviz  performance  structure  oss  osx  unix  linux  static-dynamic 
july 2019 by nhaliday
Amazon Products Visualization - YASIV
based off a single test run, this works really well, at least for popular books (all I was interested in at the time)
tools  search  recommendations  consumerism  books  aggregator  exploratory  let-me-see  network-structure  amazon  similarity  graphs  visualization 
july 2019 by nhaliday
C++ Core Guidelines
This document is a set of guidelines for using C++ well. The aim of this document is to help people to use modern C++ effectively. By “modern C++” we mean effective use of the ISO C++ standard (currently C++17, but almost all of our recommendations also apply to C++14 and C++11). In other words, what would you like your code to look like in 5 years’ time, given that you can start now? In 10 years’ time?

“Within C++ is a smaller, simpler, safer language struggling to get out.” – Bjarne Stroustrup


The guidelines are focused on relatively higher-level issues, such as interfaces, resource management, memory management, and concurrency. Such rules affect application architecture and library design. Following the rules will lead to code that is statically type safe, has no resource leaks, and catches many more programming logic errors than is common in code today. And it will run fast - you can afford to do things right.

We are less concerned with low-level issues, such as naming conventions and indentation style. However, no topic that can help a programmer is out of bounds.

Our initial set of rules emphasize safety (of various forms) and simplicity. They may very well be too strict. We expect to have to introduce more exceptions to better accommodate real-world needs. We also need more rules.


The rules are designed to be supported by an analysis tool. Violations of rules will be flagged with references (or links) to the relevant rule. We do not expect you to memorize all the rules before trying to write code.

This will be a long wall of text, and kinda random! My main points are:
1. C++ compile times are important,
2. Non-optimized build performance is important,
3. Cognitive load is important. I don’t expand much on this here, but if a programming language or a library makes me feel stupid, then I’m less likely to use it or like it. C++ does that a lot :)
programming  engineering  pls  best-practices  systems  c(pp)  guide  metabuch  objektbuch  reference  cheatsheet  elegance  frontier  libraries  intricacy  advanced  advice  recommendations  big-picture  novelty  lens  philosophy  state  error  types  concurrency  memory-management  performance  abstraction  plt  compilers  expert-experience  multi  checking  devtools  flux-stasis  safety  system-design  techtariat  time  measure  dotnet  comparison  examples  build-packaging  thinking  worse-is-better/the-right-thing  cost-benefit  tradeoffs  essay  commentary  oop  correctness  computer-memory  error-handling  resources-effects  latency-throughput 
june 2019 by nhaliday
Hardware is unforgiving
Today, anyone with a CS 101 background can take Geoffrey Hinton's course on neural networks and deep learning, and start applying state of the art machine learning techniques in production within a couple months. In software land, you can fix minor bugs in real time. If it takes a whole day to run your regression test suite, you consider yourself lucky because it means you're in one of the few environments that takes testing seriously. If the architecture is fundamentally flawed, you pull out your copy of Feathers' “Working Effectively with Legacy Code” and you apply minor fixes until you're done.

This isn't to say that software isn't hard, it's just a different kind of hard: the sort of hard that can be attacked with genius and perseverance, even without experience. But, if you want to build a ship, and you "only" have a decade of experience with carpentry, milling, metalworking, etc., well, good luck. You're going to need it. With a large ship, “minor” fixes can take days or weeks, and a fundamental flaw means that your ship sinks and you've lost half a year of work and tens of millions of dollars. By the time you get to something with the complexity of a modern high-performance microprocessor, a minor bug discovered in production costs three months and five million dollars. A fundamental flaw in the architecture will cost you five years and hundreds of millions of dollars.

Physical mistakes are costly. There's no undo and editing isn't simply a matter of pressing some keys; changes consume real, physical resources. You need enough wisdom and experience to avoid common mistakes entirely – especially the ones that can't be fixed.
techtariat  comparison  software  hardware  programming  engineering  nitty-gritty  realness  roots  explanans  startups  tech  sv  the-world-is-just-atoms  examples  stories  economics  heavy-industry  hard-tech  cs  IEEE  oceans  trade  korea  asia  recruiting  britain  anglo  expert-experience  growth-econ  world  developing-world  books  recommendations  intricacy  dan-luu  age-generation  system-design  correctness  metal-to-virtual  psycho-atoms  move-fast-(and-break-things)  kumbaya-kult 
june 2019 by nhaliday
algorithm, algorithmic, algorithmicx, algorithm2e, algpseudocode = confused - TeX - LaTeX Stack Exchange
algorithm2e is the only one currently maintained, but the answerer prefers the style of algorithmicx, and after perusing the docs, so do I
q-n-a  stackex  libraries  list  recommendations  comparison  publishing  cs  programming  algorithms  tools 
june 2019 by nhaliday
package writing - Where do I start LaTeX programming? - TeX - LaTeX Stack Exchange
I think there are three categories which need to be mastered (perhaps not all in the same degree) in order to become comfortable around TeX programming:

1. TeX programming. That's very basic, it deals with expansion control, counters, scopes, basic looping constructs and so on.

2. TeX typesetting. That's on a higher level, it includes control over boxes, lines, glues, modes, and perhaps about 1000 parameters.

3. Macro packages like LaTeX.
q-n-a  stackex  programming  latex  howto  nitty-gritty  yak-shaving  links  list  recommendations  books  guide  DSL 
may 2019 by nhaliday
documentation - Materials for learning TikZ - TeX - LaTeX Stack Exchange
The way I learned all three was basically demand-driven --- "learning by doing". Whenever I needed something "new", I'd dig into the manual and try stuff until either it worked (not always most elegantly), or in desperation go to the examples website, or moan here on TeX-'n-Friends. Occasionally supplemented by trying to answer "challenging" questions here.

yeah I kinda figured that was the right approach. just not worth the time to be proactive.
q-n-a  stackex  latex  list  links  tutorial  guide  learning  yak-shaving  recommendations  programming  visuo  dataviz  prioritizing  technical-writing 
may 2019 by nhaliday
Should I go for TensorFlow or PyTorch?
Honestly, most experts that I know love Pytorch and detest TensorFlow. Karpathy and Justin from Stanford for example. You can see Karpathy's thoughts and I've asked Justin personally and the answer was sharp: PYTORCH!!! TF has lots of PR but its API and graph model are horrible and will waste lots of your research time.



Updated Mar 12
Update after 2019 TF summit:

TL/DR: previously I was in the pytorch camp but with TF 2.0 it’s clear that Google is really going to try to have parity with or be better than Pytorch in all aspects where people voiced concerns (ease of use/debugging/dynamic graphs). They seem to be allocating more resources to development than Facebook, so the longer term currently looks promising for Google. Prior to TF 2.0 I thought that the Pytorch team had more momentum. One area where FB/Pytorch is still stronger: Google is a bit more closed and doesn’t seem to release reproducible cutting-edge models such as AlphaGo, whereas FAIR released OpenGo for instance. Generally you will end up running into models that are only implemented in one framework or the other, so chances are you might end up learning both.
q-n-a  qra  comparison  software  recommendations  cost-benefit  tradeoffs  python  libraries  machine-learning  deep-learning  data-science  sci-comp  tools  google  facebook  tech  competition  best-practices  trends  debugging  expert-experience  ecosystem  theory-practice  pragmatic  wire-guided  static-dynamic  state  academia  frameworks  open-closed 
may 2019 by nhaliday
unix - How can I profile C++ code running on Linux? - Stack Overflow
If your goal is to use a profiler, use one of the suggested ones.

However, if you're in a hurry and you can manually interrupt your program under the debugger while it's being subjectively slow, there's a simple way to find performance problems.

Just halt it several times, and each time look at the call stack. If there is some code that is wasting some percentage of the time, 20% or 50% or whatever, that is the probability that you will catch it in the act on each sample. So that is roughly the percentage of samples on which you will see it. There is no educated guesswork required. If you do have a guess as to what the problem is, this will prove or disprove it.

You may have multiple performance problems of different sizes. If you clean out any one of them, the remaining ones will take a larger percentage, and be easier to spot, on subsequent passes. This magnification effect, when compounded over multiple problems, can lead to truly massive speedup factors.

Caveat: Programmers tend to be skeptical of this technique unless they've used it themselves. They will say that profilers give you this information, but that is only true if they sample the entire call stack, and then let you examine a random set of samples. (The summaries are where the insight is lost.) Call graphs don't give you the same information, because they don't summarize at the instruction level, and they give confusing summaries in the presence of recursion.
They will also say it only works on toy programs, when actually it works on any program, and it seems to work better on bigger programs, because they tend to have more problems to find. They will say it sometimes finds things that aren't problems, but that is only true if you see something once. If you see a problem on more than one sample, it is real.


gprof, Valgrind and gperftools - an evaluation of some tools for application level CPU profiling on Linux: http://gernotklingler.com/blog/gprof-valgrind-gperftools-evaluation-tools-application-level-cpu-profiling-linux/
gprof is the dinosaur among the evaluated profilers - its roots go back into the 1980’s. It seems it was widely used and a good solution during the past decades. But its limited support for multi-threaded applications, the inability to profile shared libraries and the need for recompilation with compatible compilers and special flags that produce a considerable runtime overhead, make it unsuitable for using it in today’s real-world projects.

Valgrind delivers the most accurate results and is well suited for multi-threaded applications. It’s very easy to use and there is KCachegrind for visualization/analysis of the profiling data, but the slow execution of the application under test disqualifies it for larger, longer running applications.

The gperftools CPU profiler has a very little runtime overhead, provides some nice features like selectively profiling certain areas of interest and has no problem with multi-threaded applications. KCachegrind can be used to analyze the profiling data. Like all sampling based profilers, it suffers statistical inaccuracy and therefore the results are not as accurate as with Valgrind, but practically that’s usually not a big problem (you can always increase the sampling frequency if you need more accurate results). I’m using this profiler on a large code-base and from my personal experience I can definitely recommend using it.
q-n-a  stackex  programming  engineering  performance  devtools  tools  advice  checklists  hacker  nitty-gritty  tricks  lol  multi  unix  linux  techtariat  analysis  comparison  recommendations  software  measurement  oly-programming  concurrency  debugging  metabuch 
may 2019 by nhaliday
Applied Cryptography Engineering — Quarrelsome
You should own Ferguson and Schneier’s follow-up, Cryptography Engineering (C.E.). Written partly in penance, the new book deftly handles material the older book stumbles over. C.E. wants to teach you the right way to work with cryptography without wasting time on GOST and El Gamal.
techtariat  books  recommendations  critique  security  crypto  best-practices  gotchas  programming  engineering  advice  hn 
may 2019 by nhaliday
Delta debugging - Wikipedia
good overview with examples: https://www.csm.ornl.gov/~sheldon/bucket/Automated-Debugging.pdf

Not as useful for my usecases (mostly contest programming) as QuickCheck. Input is generally pretty structured and I don't have a long history of code in VCS. And when I do have the latter git-bisect is probably enough.

good book tho: http://www.whyprogramsfail.com/toc.php
WHY PROGRAMS FAIL: A Guide to Systematic Debugging
wiki  reference  programming  systems  debugging  c(pp)  python  tools  devtools  links  hmm  formal-methods  divide-and-conquer  vcs  git  search  yak-shaving  pdf  white-paper  multi  examples  stories  books  unit  caltech  recommendations  advanced  correctness 
may 2019 by nhaliday
quality - Is the average number of bugs per loc the same for different programming languages? - Software Engineering Stack Exchange
Contrary to intuition, the number of errors per 1000 lines of code does seem to be relatively constant, regardless of the specific language involved. Steve McConnell, author of Code Complete and Software Estimation: Demystifying the Black Art, goes over this area in some detail.

I don't have my copies readily to hand - they're sitting on my bookshelf at work - but a quick Google found a relevant quote:

Industry Average: "about 15 - 50 errors per 1000 lines of delivered code."
(Steve) further says this is usually representative of code that has some level of structured programming behind it, but probably includes a mix of coding techniques.

Quoted from Code Complete, found here: http://mayerdan.com/ruby/2012/11/11/bugs-per-line-of-code-ratio/

If memory serves correctly, Steve goes into a thorough discussion of this, showing that the figures are constant across languages (C, C++, Java, Assembly and so on) and despite difficulties (such as defining what "line of code" means).

Most importantly he has lots of citations for his sources - he's not offering unsubstantiated opinions, but has the references to back them up.

[ed.: I think this is delivered code? So after testing, debugging, etc. I'm more interested in the metric for the moment after you've gotten something to compile.

edit: cf https://pinboard.in/u:nhaliday/b:0a6eb68166e6]
q-n-a  stackex  programming  engineering  nitty-gritty  error  flux-stasis  books  recommendations  software  checking  debugging  pro-rata  pls  comparison  parsimony  measure  data  objektbuch  speculation  accuracy  density  correctness  estimate  street-fighting  multi  quality  stylized-facts  methodology 
april 2019 by nhaliday
Diving into Chinese philosophy – Gene Expression
Back when I was in college one of my roommates was taking a Chinese philosophy class for a general education requirement. A double major in mathematics and economics (he went on to get an economics Ph.D.), he found the lack of formal rigor in the field rather maddening. I thought this was fair, but I suggested to him that the this-worldly and often non-metaphysical orientation of much of Chinese philosophy made it less amenable to formal and logical analysis.


IMO the much more problematic thing about premodern Chinese political philosophy from the point of view of the West is its lack of interest in constitutionalism and the rule of law, stemming from a generally less rationalist approach than the Classical West, rather than from any sort of inherent anti-individualism or collectivism or whatever. For someone like Aristotle the constitutional rule of law was the highest moral good in itself and the definition of justice; very much not so for Confucius or for Zhu Xi. They still believed in Justice in the sense of people getting what they deserve, but they didn’t really consider the written rule of law an appropriate way to conceptualize it. OG Confucius leaned more towards the unwritten traditions and rituals passed down from the ancestors, and Neoconfucianism leaned more towards a sort of Universal Reason that could be accessed by the individual’s subjective understanding but which again need not necessarily be written down (although unlike Kant/the Enlightenment it basically implies that such subjective reasoning will naturally lead one to reaffirm the ancient traditions). In left-right political spectrum terms IMO this leads to a well-defined right and left and a big old hole in the center where classical republicanism would be in the West. This resonates pretty well with modern East Asian political history IMO.

Is logos a proper noun?
Or, is Aristotelian Logic translatable into Chinese?
gnxp  scitariat  books  recommendations  discussion  reflection  china  asia  sinosphere  philosophy  logic  rigor  rigidity  flexibility  leviathan  law  individualism-collectivism  analytical-holistic  systematic-ad-hoc  the-classics  canon  morality  ethics  formal-values  justice  reason  tradition  government  polisci  left-wing  right-wing  order-disorder  eden-heaven  analogy  similarity  comparison  thinking  summary  top-n  n-factor  universalism-particularism  duality  rationality  absolute-relative  subjective-objective  the-self  apollonian-dionysian  big-peeps  history  iron-age  antidemos  democracy  institutions  darwinian  multi  language  concept  conceptual-vocab  inference  linguistics  foreign-lang  mediterranean  europe  germanic  mostly-modern  gallic  culture 
march 2018 by nhaliday
Books 2017 | West Hunter
Arabian Sands
The Aryans
The Big Show
The Camel and the Wheel
Civil War on Western Waters
Company Commander
Double-edged Secrets
The Forgotten Soldier
Genes in Conflict
Hive Mind
The horse, the wheel, and language
The Penguin Atlas of Medieval History
Habitable Planets for Man
The genetical theory of natural selection
The Rise of the Greeks
To Lose a Battle
The Jewish War
Tropical Gangsters
The Forgotten Revolution
Egil’s Saga
Time Patrol

Russo: https://westhunt.wordpress.com/2017/12/14/books-2017/#comment-98568
west-hunter  scitariat  books  recommendations  list  top-n  confluence  2017  info-foraging  canon  🔬  ideas  s:*  history  mostly-modern  world-war  britain  old-anglo  travel  MENA  frontier  reflection  europe  gallic  war  sapiens  antiquity  archaeology  technology  divergence  the-great-west-whale  transportation  nature  long-short-run  intel  tradecraft  japan  asia  usa  spearhead  garett-jones  hive-mind  economics  broad-econ  giants  fisher  space  iron-age  medieval  the-classics  civilization  judaism  conquest-empire  africa  developing-world  institutions  science  industrial-revolution  the-trenches  wild-ideas  innovation  speedometer  nordic  mediterranean  speculation  fiction  scifi-fantasy  time  encyclopedic  multi  poast  critique  cost-benefit  tradeoffs  quixotic 
december 2017 by nhaliday
Sources on Technical History | Salo Forum - Chic Nihilism
This is a thread where people can chip in and list some good sources for the history of technology and mechanisms (hopefully with illustrations), books on infrastructure or industrial geography, or survey books in engineering. Hopefully this thread remains focused on the "technical" and not the historical side.

Now, on the history of technology alone if I comprehensively listed every book, paper, etc., I've read on the subject since childhood then this thread would run well over 100 pages (seriously). I'll try to compress it by dealing with entire authors, journals, and publishers even.

First, a note on preliminaries: the best single-volume primer on the physics, internal components and subsystems of military weapons (including radar, submarines) is Craig Payne's Principles of Naval Weapons Systems. Make sure to get the second edition, the first edition is useless.
gnon  🐸  chan  poast  links  reading  technology  dirty-hands  the-world-is-just-atoms  military  defense  letters  discussion  list  books  recommendations  confluence  arms  war  heavy-industry  mostly-modern  world-war  history  encyclopedic  meta:war  offense-defense  quixotic  war-nerd 
november 2017 by nhaliday
Open Thread, 11/26/2017 – Gene Expression
A few days ago there was a Twitter thing about top five books that have influenced you. It’s hard for me to name five, but I put three books down for three different reasons:

- Principles of Population Genetics, because it gives you a model for how to analyze and understand evolutionary processes. There are other books out there besides Principles of Population Genetics. But if you buy this book you don’t need to buy another (at SMBE this year I confused Andy Clark with Mike Lynch for a second when introducing myself. #awkward)
- The Fall of Rome. A lot of historical writing can be tendentious. I’ve also noticed an unfortunate tendency of historians dropping into contemporary arguments and pretty much lying through omission or elision to support their political side (it usually goes “actually, I’m a specialist in this topic and my side is 100% correct because of obscure-stuff where I’m shading the facts”). The Fall of Rome illustrates the solidity that an archaeological and materialist take can give the field. This sort of materialism isn’t the final word, but it needs to be the start of the conversation.
- From Dawn to Decadence: 1500 to the Present: 500 Years of Western Cultural Life. To know things is important in and of itself. My own personal experience is that the returns to knowing things in a particular domain or area do not exhibit a linear return. Rather, it exhibits a logistic curve. Initially, it’s hard to make sense of anything from the facts, but at some point comprehension and insight increase rapidly, until you reach the plateau of diminishing marginal returns.

If you haven’t, I recommend you subscribe to Patrick Wyman’s Tides of History podcast. I pretty much wait now for every new episode.
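The logistic-curve claim about returns to knowledge can be sketched numerically. A minimal illustration, assuming arbitrary curve parameters (k, x0) chosen purely for demonstration: the marginal gain from one more unit of study is small at the start, large in the middle, and small again at the plateau.

```python
import math

def logistic(x, L=1.0, k=1.0, x0=5.0):
    """Standard logistic curve: comprehension as a function of facts absorbed.
    L, k, and x0 are arbitrary illustration values, not empirical estimates."""
    return L / (1.0 + math.exp(-k * (x - x0)))

# Marginal return from one extra unit of study at three stages of learning
early = logistic(1) - logistic(0)    # slow start: facts don't cohere yet
mid = logistic(5.5) - logistic(4.5)  # rapid middle: insight compounds
late = logistic(10) - logistic(9)    # plateau: diminishing marginal returns

print(f"early={early:.3f} mid={mid:.3f} late={late:.3f}")
```

The middle-stage gain dominates both ends, which is the shape of the "initially hard to make sense of anything, then comprehension increases rapidly, then plateau" pattern described above.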
gnxp  scitariat  open-things  links  commentary  books  recommendations  list  top-n  confluence  bio  genetics  population-genetics  history  iron-age  the-classics  mediterranean  gibbon  letters  academia  social-science  truth  westminster  meta:rhetoric  debate  politics  nonlinearity  convexity-curvature  knowledge  learning  cost-benefit  aphorism  metabuch  podcast  psychology  evopsych  replication  social-psych  ego-depletion  stereotypes 
november 2017 by nhaliday
self study - Looking for a good and complete probability and statistics book - Cross Validated
I never had the opportunity to visit a stats course from a math faculty. I am looking for a probability theory and statistics book that is complete and self-sufficient. By complete I mean that it contains all the proofs and not just states results.
nibble  q-n-a  overflow  data-science  stats  methodology  books  recommendations  list  top-n  confluence  proofs  rigor  reference  accretion 
october 2017 by nhaliday