nhaliday + stackex   277

history - Why are UNIX/POSIX system call namings so illegible? - Unix & Linux Stack Exchange
It's due to the technical constraints of the time. The POSIX standard was created in the 1980s and referred to UNIX, which was born in the 1970s. Several C compilers at that time were limited to identifiers that were 6 or 8 characters long, so that settled the standard for the length of variable and function names.

http://neverworkintheory.org/2017/11/26/abbreviated-full-names.html
We carried out a family of controlled experiments to investigate whether the use of abbreviated identifier names, with respect to full-word identifier names, affects fault fixing in C and Java source code. This family consists of an original (or baseline) controlled experiment and three replications. We involved 100 participants with different backgrounds and experiences in total. Overall results suggested that there is no difference in terms of effort, effectiveness, and efficiency to fix faults, when source code contains either only abbreviated or only full-word identifier names. We also conducted a qualitative study to understand the values, beliefs, and assumptions that inform and shape fault fixing when identifier names are either abbreviated or full-word. We involved in this qualitative study six professional developers with 1--3 years of work experience. A number of insights emerged from this qualitative study and can be considered a useful complement to the quantitative results from our family of experiments. One of the most interesting insights is that developers, when working on source code with abbreviated identifier names, adopt a more methodical approach to identify and fix faults by extending their focus point and only in a few cases do they expand abbreviated identifiers.
q-n-a  stackex  trivia  programming  os  systems  legacy  legibility  ux  libraries  unix  linux  hacker  cracker-prog  multi  evidence-based  empirical  expert-experience  engineering  study  best-practices  comparison  quality  debugging  efficiency  time 
5 days ago by nhaliday
python - Executing multi-line statements in the one-line command-line? - Stack Overflow
you could do
> echo -e "import sys\nfor r in range(10): print 'rob'" | python
or w/out pipes:
> python -c "exec(\"import sys\nfor r in range(10): print 'rob'\")"
or
> (echo "import sys" ; echo "for r in range(10): print 'rob'") | python

[ed.: In fish
> python -c "import sys"\n"for r in range(10): print 'rob'"]
q-n-a  stackex  programming  yak-shaving  pls  python  howto  terminal  parsimony  syntax  gotchas 
16 days ago by nhaliday
Why is Google Translate so bad for Latin? A longish answer. : latin
hmm:
> All it does is correlate sequences of up to five consecutive words in texts that have been manually translated into two or more languages.
That sort of system ought to be perfect for a dead language, though. Dump all the Cicero, Livy, Lucretius, Vergil, and Oxford Latin Course into a database and we're good.

We're not exactly inundated with brand new Latin to translate.
--
> Dump all the Cicero, Livy, Lucretius, Vergil, and Oxford Latin Course into a database and we're good.
What makes you think that the Google folks haven't done so and used that to create the language models they use?
> That sort of system ought to be perfect for a dead language, though.
Perhaps. But it will be bad at translating novel English sentences to Latin.
foreign-lang  reddit  social  discussion  language  the-classics  literature  dataset  measurement  roots  traces  syntax  anglo  nlp  stackex  links  q-n-a  linguistics  lexical  deep-learning  sequential  hmm  project  arrows  generalization 
20 days ago by nhaliday
paradigms - What's your strongest opinion against functional programming? - Software Engineering Stack Exchange
The problem is that most common code inherently involves state -- business apps, games, UI, etc. There's no problem with some parts of an app being purely functional; in fact most apps could benefit in at least one area. But forcing the paradigm all over the place feels counter-intuitive.
q-n-a  stackex  programming  engineering  pls  functional  pragmatic  cost-benefit  rhetoric  debate  steel-man  business  regularizer  abstraction  state  realness 
21 days ago by nhaliday
c++ - Which is faster: Stack allocation or Heap allocation - Stack Overflow
On my machine, using g++ 3.4.4 on Windows, I get "0 clock ticks" for both stack and heap allocation for anything less than 100000 allocations, and even then I get "0 clock ticks" for stack allocation and "15 clock ticks" for heap allocation. When I measure 10,000,000 allocations, stack allocation takes 31 clock ticks and heap allocation takes 1562 clock ticks.

so maybe around 50x difference (1562/31 ≈ 50)? what does that work out to in terms of total workload?

hmm:
http://vlsiarch.eecs.harvard.edu/wp-content/uploads/2017/02/asplos17mallacc.pdf
Recent work shows that dynamic memory allocation consumes nearly 7% of all cycles in Google datacenters.

That's not too bad actually. Seems like I shouldn't worry about shifting from heap to stack/globals unless profiling says it's important, particularly for non-oly stuff.

edit: actually, a ~50x slowdown accounting for 7% of all cycles is pretty high; for allocation-heavy code it could increase the constant factor by almost an order of magnitude.
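
[ed.: a minimal sketch of this kind of micro-benchmark; the loop bounds, timing harness, and the empty-asm trick to defeat dead-code elimination are mine, and the asm syntax is gcc/clang-specific:]

#include <chrono>
#include <cstdio>

int main() {
    constexpr int N = 10000000;
    using clk = std::chrono::steady_clock;

    auto t0 = clk::now();
    for (int i = 0; i < N; i++) {
        char buf[64];                              // stack: effectively a pointer bump
        asm volatile("" : : "r"(buf) : "memory");  // keep the optimizer from deleting it
    }
    auto t1 = clk::now();
    for (int i = 0; i < N; i++) {
        char* p = new char[64];                    // heap: full allocator round trip
        asm volatile("" : : "r"(p) : "memory");
        delete[] p;
    }
    auto t2 = clk::now();

    auto ms = [](auto d) {
        return (long long)std::chrono::duration_cast<std::chrono::milliseconds>(d).count();
    };
    std::printf("stack: %lld ms, heap: %lld ms\n", ms(t1 - t0), ms(t2 - t1));
}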
q-n-a  stackex  programming  c(pp)  systems  memory-management  performance  intricacy  comparison  benchmarks  data  objektbuch  empirical  google  papers  nibble  time  measure  pro-rata  distribution  multi  pdf  oly-programming  computer-memory 
23 days ago by nhaliday
c++ - Constexpr Math Functions - Stack Overflow
Actually, because of old and annoying legacy, almost none of the math functions can be constexpr, since they all have the side-effect of setting errno on various error conditions, usually domain errors.
--
Note, gcc has implemented most of the math functions as constexpr; although the extension is non-conforming, this should change. So definitely doable. – Shafik Yaghmour Jan 12 '15 at 20:2
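
[ed.: the usual portable workaround is to hand-roll the function; a sketch (mine, not from the thread) of a compile-time Newton-iteration square root, C++11-compatible:]

// constexpr sqrt via Newton's method; no errno side effect, so it can run
// at compile time (assumes x > 0; no domain-error handling)
constexpr double sqrt_newton(double x, double guess = 1.0, int iters = 60) {
    return iters == 0 ? guess
                      : sqrt_newton(x, 0.5 * (guess + x / guess), iters - 1);
}

static_assert(sqrt_newton(2.0) > 1.41421 && sqrt_newton(2.0) < 1.41422,
              "evaluated entirely at compile time");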
q-n-a  stackex  programming  pls  c(pp)  gotchas  legacy  numerics  state  resources-effects 
27 days ago by nhaliday
c++ - Why is size_t unsigned? - Stack Overflow
size_t is unsigned for historical reasons.

On an architecture with 16 bit pointers, such as the "small" model in DOS programming, it would be impractical to limit strings to 32 KB.

For this reason, the C standard requires (via required ranges) ptrdiff_t, the signed counterpart to size_t and the result type of pointer difference, to be effectively 17 bits.

Those reasons can still apply in parts of the embedded programming world.

However, they do not apply to modern 32-bit or 64-bit programming, where a much more important consideration is that the unfortunate implicit conversion rules of C and C++ make unsigned types into bug attractors, when they're used for numbers (and hence, arithmetical operations and magnitude comparisons). With 20-20 hindsight we can now see that the decision to adopt those particular conversion rules, where e.g. string( "Hi" ).length() < -3 is practically guaranteed, was rather silly and impractical. However, that decision means that in modern programming, adopting unsigned types for numbers has severe disadvantages and no advantages – except for satisfying the feelings of those who find unsigned to be a self-descriptive type name, and fail to think of typedef int MyType.

Summing up, it was not a mistake. It was a decision for then very rational, practical programming reasons. It had nothing to do with transferring expectations from bounds-checked languages like Pascal to C++ (which is a fallacy, but a very very common one, even if some of those who do it have never heard of Pascal).
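
[ed.: a minimal demo of the conversion trap; the sketch is mine:]

#include <iostream>
#include <string>

int main() {
    std::string s = "Hi";
    // s.length() is size_t (unsigned); -3 is implicitly converted to a huge
    // unsigned value, so this prints "true" (-Wsign-compare will warn):
    std::cout << std::boolalpha << (s.length() < -3) << "\n";

    // same trap in loop form: for unsigned i, `i >= 0` is always true, so a
    // backwards loop written this way never terminates:
    // for (size_t i = s.length() - 1; i >= 0; --i) { ... }
}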
q-n-a  stackex  c(pp)  systems  embedded  hardware  measure  types  signum  gotchas  roots  explanans  pls  programming 
28 days ago by nhaliday
linux - How do I insert a tab character in Iterm? - Stack Overflow
However, this isn't an iTerm thing, this is your shell that's doing it.

ctrl-V for inserting nonprintable literals doesn't work in fish, neither in vi mode nor emacs mode. prob easiest to just switch to bash.
q-n-a  stackex  terminal  yak-shaving  gotchas  keyboard  tip-of-tongue  strings 
28 days ago by nhaliday
The End of the Editor Wars » Linux Magazine
Moreover, even if you assume a broad margin of error, the polling isn't even close. With all the various text editors available today, Vi and Vim continue to be the choice of over a third of users, while Emacs is well back in the pack, no longer a competitor for the most popular text editor.

https://www.quora.com/Are-there-more-Emacs-or-Vim-users
I believe Vim is actually more popular, but it's hard to find any real data on it. The best source I've seen is the annual StackOverflow developer survey where 15.2% of developers used Vim compared to a mere 3.2% for Emacs.

Oddly enough, the report noted that "Data scientists and machine learning developers are about 3 times more likely to use Emacs than any other type of developer," which is not necessarily what I would have expected.

[ed. NB: Vim still dominates overall.]

https://pinboard.in/u:nhaliday/b:6adc1b1ef4dc

Time To End The vi/Emacs Debate: https://cacm.acm.org/blogs/blog-cacm/226034-time-to-end-the-vi-emacs-debate/fulltext

Vim, Emacs and their forever war. Does it even matter any more?: https://blog.sourcerer.io/vim-emacs-and-their-forever-war-does-it-even-matter-any-more-697b1322d510
Like an episode of “Silicon Valley”, a discussion of Emacs vs. Vim used to have a polarizing effect that would guarantee a stimulating conversation, regardless of an engineer’s actual alignment. But nowadays, diehard Emacs and Vim users are getting much harder to find. Maybe I’m in the wrong orbit, but looking around today, I see that engineers are equally or even more likely to choose any one of a number of great (for any given definition of ‘great’) modern editors or IDEs such as Sublime Text, Visual Studio Code, Atom, IntelliJ (… or one of its siblings), Brackets, Visual Studio or Xcode, to name a few. It’s not surprising really — many top engineers weren’t even born when these editors were at version 1.0, and GUIs (for better or worse) hadn’t been invented.

...

… both forums have high traffic and up-to-the-minute comment and discussion threads. Some of the available statistics paint a reasonably healthy picture — Stackoverflow’s 2016 developer survey ranks Vim 4th out of 24 with 26.1% of respondents in the development environments category claiming to use it. Emacs came 15th with 5.2%. In combination, over 30% is, actually, quite impressive considering they’ve been around for several decades.

What’s odd, however, is that if you ask someone — say a random developer — to express a preference, the likelihood is that they will favor one or the other even if they have used neither in anger. Maybe the meme has spread so widely that all responses are now predominantly ritualistic, and represent something more fundamental than people’s mere preference for an editor? There’s a rather obvious political hypothesis waiting to be made — that Emacs is the leftist, socialist, centralized state, while Vim represents the right and the free market, specialization and capitalism red in tooth and claw.

How is Emacs/Vim used in companies like Google, Facebook, or Quora? Are there any libraries or tools they share in public?: https://www.quora.com/How-is-Emacs-Vim-used-in-companies-like-Google-Facebook-or-Quora-Are-there-any-libraries-or-tools-they-share-in-public
In Google there's a fair amount of vim and emacs. I would say at least every other engineer uses one or another.

Among Software Engineers, emacs seems to be more popular, about 2:1. Among Site Reliability Engineers, vim is more popular, about 9:1.
--
People use both at Facebook, with (in my opinion) slightly better tooling for Emacs than Vim. We share a master.emacs and master.vimrc file, which contains the bare essentials (like syntactic highlighting for the Hack language). We also share a Ctags file that's updated nightly with a cron script.

Beyond the essentials, there's a group for Emacs users at Facebook that provides tips, tricks, and major-modes created by people at Facebook. That's where Adam Hupp first developed his excellent mural-mode (ahupp/mural), which does for Ctags what Ido did for file finding and buffer switching.
--
For emacs, it was very informal at Google. There wasn't a huge community of Emacs users at Google, so there wasn't much more than a wiki and a couple language styles matching Google's style guides.

https://trends.google.com/trends/explore?date=all&geo=US&q=%2Fm%2F07zh7,%2Fm%2F01yp0m

https://www.quora.com/Why-is-interest-in-Emacs-dropping
And it is still that. It’s just that emacs is no longer unique, and neither is Lisp.

Dynamically typed scripting languages with garbage collection are a dime a dozen now. Anybody in their right mind developing an extensible text editor today would just use python, ruby, lua, or JavaScript as the extension language and get all the power of Lisp combined with vibrant user communities and millions of lines of ready-made libraries that Stallman and Steele could only dream of in the 70s.

In fact, in many ways emacs and elisp have fallen behind: 40 years after Lambda, the Ultimate Imperative, elisp is still dynamically scoped, and it still doesn’t support multithreading — when I try to use dired to list the files on a slow NFS mount, the entire editor hangs just as thoroughly as it might have in the 1980s. And when I say “doesn’t support multithreading,” I don’t mean there is some other clever trick for continuing to do work while waiting on a system call, like asynchronous callbacks or something. There’s start-process which forks a whole new process, and that’s about it. It’s a concurrency model straight out of 1980s UNIX land.

But being essentially just a decent text editor has robbed emacs of much of its competitive advantage. In a world where every developer tool is scriptable with languages and libraries an order of magnitude more powerful than cranky old elisp, the reason to use emacs is not that it lets a programmer hit a button and evaluate the current expression interactively (which must have been absolutely amazing at one point in the past).

https://www.reddit.com/r/emacs/comments/bh5kk7/why_do_many_new_users_still_prefer_vim_over_emacs/

more general comparison, not just popularity:
Differences between Emacs and Vim: https://stackoverflow.com/questions/1430164/differences-between-Emacs-and-vim

https://www.reddit.com/r/emacs/comments/9hen7z/what_are_the_benefits_of_emacs_over_vim/

Technical Interview Performance by Editor/OS/Language: https://triplebyte.com/blog/technical-interview-performance-by-editor-os-language
[ed.: I'm guessing this is confounded to all hell.]

The #1 most common editor we see used in interviews is Sublime Text, with Vim close behind.

Emacs represents a fairly small market share today at just about a quarter the userbase of Vim in our interviews. This nicely matches the 4:1 ratio of Google Search Trends for the two editors.

...

Vim takes the prize here, but PyCharm and Emacs are close behind. We’ve found that users of these editors tend to pass our interview at an above-average rate.

On the other end of the spectrum is Eclipse: it appears that someone using either Vim or Emacs is more than twice as likely to pass our technical interview as an Eclipse user.

...

In this case, we find that the average Ruby, Swift, and C# users tend to be stronger, with Python and Javascript in the middle of the pack.

...

Here’s what happens after we select engineers to work with and send them to onsites:

[Python does best.]

There are no wild outliers here, but let’s look at the C++ segment. While C++ programmers have the most challenging time passing Triplebyte’s technical interview on average, the ones we choose to work with tend to have a relatively easier time getting offers at each onsite.

The Rise of Microsoft Visual Studio Code: https://triplebyte.com/blog/editor-report-the-rise-of-visual-studio-code
This chart shows the rates at which each editor's users pass our interview compared to the mean pass rate for all candidates. First, notice the preeminence of Emacs and Vim! Engineers who use these editors pass our interview at significantly higher rates than other engineers. And the effect size is not small. Emacs users pass our interview at a rate 50% higher than other engineers. What could explain this phenomenon? One possible explanation is that Vim and Emacs are old school. You might expect their users to have more experience and, thus, to do better. However, notice that VS Code is the third best editor—and it is brand new. This undercuts that narrative a bit (and makes VS Code look even more dominant).

Do Emacs and Vim users have some other characteristic that makes them more likely to succeed during interviews? Perhaps they tend to be more willing to invest time and effort customizing a complex editor in the short-term in order to get returns from a more powerful tool in the long-term?

...

Java and C# do have relatively low pass rates, although notice that Eclipse has a lower pass rate than Java (-21.4% vs. -16.7%), so we cannot fully explain its poor performance as Java dragging it down.

Also, what's going on with Go? Go programmers are great! To dig deeper into these questions, I looked at editor usage by language:

...

Another finding from this chart is the difference between VS Code and Sublime. VS Code is primarily used for JavaScript development (61%) but less frequently for Python development (22%). With Sublime, the numbers are basically reversed (51% Python and 30% JavaScript). It's interesting that VS Code users pass interviews at a higher rate than Sublime engineers, even though they predominantly use a language with a lower success rate (JavaScript).

To wrap things up, I sliced the data by experience level and location. Here you can see language usage by experience level:

...

Then there's editor usage by experience level:

...

Take all of this with a grain of salt. I want to end by saying that we don't think any of this is causative. That is, I don't recommend that you start using Emacs and Go (or stop using… [more]
news  linux  oss  tech  editors  devtools  tools  comparison  ranking  flux-stasis  trends  ubiquity  unix  increase-decrease  multi  q-n-a  qra  data  poll  stackex  sv  facebook  google  integration-extension  org:med  politics  stereotypes  coalitions  decentralized  left-wing  right-wing  chart  scale  time-series  distribution  top-n  list  discussion  ide  parsimony  intricacy  cost-benefit  tradeoffs  confounding  analysis  crosstab  pls  python  c(pp)  jvm  microsoft  golang  hmm  correlation  debate  critique 
4 weeks ago by nhaliday
algorithm, algorithmic, algorithmicx, algorithm2e, algpseudocode = confused - TeX - LaTeX Stack Exchange
algorithm2e is the only one currently maintained, but the answerer prefers the style of algorithmicx, and after perusing the docs, so do I
q-n-a  stackex  libraries  list  recommendations  comparison  publishing  cs  programming  algorithms  tools 
6 weeks ago by nhaliday
macos - Converting cron to launchd - MAILTO - Stack Overflow
one way to convert to launchd is the lazy way (how i do it)

You pay $0.99 for Lingon from the app store; then you can just fill out a few fields and it makes the launchd...

otherwise: a launchd would look like this
q-n-a  stackex  howto  yak-shaving  osx  desktop  automation 
6 weeks ago by nhaliday
bibliographies - bibtex vs. biber and biblatex vs. natbib - TeX - LaTeX Stack Exchange
- bibtex and biber are external programs that process bibliography information and act (roughly) as the interface between your .bib file and your LaTeX document.
- natbib and biblatex are LaTeX packages that format citations and bibliographies; natbib works only with bibtex, while biblatex (at the moment) works with both bibtex and biber.

natbib
The natbib package has been around for quite a long time, and although still maintained, it is fair to say that it isn't being further developed. It is still widely used, and very reliable.

Advantages
...
- The resulting bibliography code can be pasted directly into a document (often required for journal submissions). See Biblatex: submitting to a journal.

...

biblatex
The biblatex package is being actively developed in conjunction with the biber backend.

Advantages
*lots*

Disadvantages
- Journals and publishers may not accept documents that use biblatex if they have a house style with its own natbib compatible .bst file.
q-n-a  stackex  latex  comparison  cost-benefit  writing  scholar  technical-writing  yak-shaving  publishing 
7 weeks ago by nhaliday
c++ - How to check if LLDB loaded debug symbols from shared libraries? - Stack Overflow
Now this question is also answered in the official LLDB documentation in the "Troubleshooting LLDB" section, please see "How do I check if I have debug symbols?": https://lldb.llvm.org/use/troubleshooting.html#how-do-i-check-if-i-have-debug-symbols It gives a slightly different approach, even though the approach from the accepted answer worked quite fine for me. – dying_sphynx Nov 3 '18 at 10:58

One fairly simple way to do it is:
(lldb) image lookup -vn <SomeFunctionNameThatShouldHaveDebugInfo>
q-n-a  stackex  programming  yak-shaving  gotchas  howto  debugging  build-packaging  llvm  multi  documentation 
7 weeks ago by nhaliday
package writing - Where do I start LaTeX programming? - TeX - LaTeX Stack Exchange
I think there are three categories which need to be mastered (perhaps not all in the same degree) in order to become comfortable around TeX programming:

1. TeX programming. That's very basic, it deals with expansion control, counters, scopes, basic looping constructs and so on.

2. TeX typesetting. That's on a higher level, it includes control over boxes, lines, glues, modes, and perhaps about 1000 parameters.

3. Macro packages like LaTeX.
q-n-a  stackex  programming  latex  howto  nitty-gritty  yak-shaving  links  list  recommendations  books  guide 
7 weeks ago by nhaliday
packages - Are the TeX semantics and grammar defined somewhere in some official documents? - TeX - LaTeX Stack Exchange
The grammar of each TeX command is more or less completely given in The TeXBook. Note, however, that unlike most programming languages the lexical analysis and tokenisation of the input cannot be separated from execution as the catcode table which controls tokenisation is dynamically changeable. Thus parsing TeX tends to defeat most parser generation tools.

LaTeX is a set of macros written in TeX so is defined by its implementation, although there is fairly extensive documentation in The LaTeX Companion, the LaTeX book (LaTeX: A Document Preparation System), and elsewhere.
q-n-a  stackex  programming  compilers  latex  yak-shaving  nitty-gritty  syntax 
7 weeks ago by nhaliday
documentation - Materials for learning TikZ - TeX - LaTeX Stack Exchange
The way I learned all three was basically demand-driven --- "learning by doing". Whenever I needed something "new", I'd dig into the manual and try stuff until either it worked (not always most elegantly), or in desperation go to the examples website, or moan here on TeX-'n-Friends. Occasionally supplemented by trying to answer "challenging" questions here.

yeah I kinda figured that was the right approach. just not worth the time to be proactive.
q-n-a  stackex  latex  list  links  tutorial  guide  learning  yak-shaving  recommendations  programming  visuo  dataviz  prioritizing  technical-writing 
7 weeks ago by nhaliday
performance - What is the difference between latency, bandwidth and throughput? - Stack Overflow
Latency is the amount of time it takes to travel through the tube.
Bandwidth is how wide the tube is.
The amount of water flowing through is your throughput.

Vehicle Analogy:

Container travel time from source to destination is latency.
Container size is bandwidth.
Container load is throughput.

--

Note, bandwidth in particular has other common meanings, I've assumed networking because this is stackoverflow but if it was a maths or amateur radio forum I might be talking about something else entirely.
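
[ed.: concrete numbers help: on a 1 Gbit/s link (bandwidth) with 100 ms round-trip time (latency), a request-response protocol that keeps only one 1 KB message in flight per round trip gets 1 KB / 0.1 s = 10 KB/s of throughput, a tiny fraction of the bandwidth; in general throughput ≤ min(bandwidth, data in flight / latency).]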
q-n-a  stackex  programming  IEEE  nitty-gritty  definition  jargon  network-structure  metrics  speedometer  time  stock-flow  performance 
7 weeks ago by nhaliday
What every computer scientist should know about floating-point arithmetic
Floating-point arithmetic is considered an esoteric subject by many people. This is rather surprising, because floating-point is ubiquitous in computer systems: Almost every language has a floating-point datatype; computers from PCs to supercomputers have floating-point accelerators; most compilers will be called upon to compile floating-point algorithms from time to time; and virtually every operating system must respond to floating-point exceptions such as overflow. This paper presents a tutorial on the aspects of floating-point that have a direct impact on designers of computer systems. It begins with background on floating-point representation and rounding error, continues with a discussion of the IEEE floating point standard, and concludes with examples of how computer system builders can better support floating point.

https://stackoverflow.com/questions/2729637/does-epsilon-really-guarantees-anything-in-floating-point-computations
"you must use an epsilon when dealing with floats" is a knee-jerk reaction of programmers with a superficial understanding of floating-point computations, for comparisons in general (not only to zero).

This is usually unhelpful because it doesn't tell you how to minimize the propagation of rounding errors, it doesn't tell you how to avoid cancellation or absorption problems, and even when your problem is indeed related to the comparison of two floats, it doesn't tell you what value of epsilon is right for what you are doing.

...

Regarding the propagation of rounding errors, there exist specialized analyzers that can help you estimate it, because it is a tedious thing to do by hand.

https://www.di.ens.fr/~cousot/projects/DAEDALUS/synthetic_summary/CEA/Fluctuat/index.html

This was part of HW1 of CS24:
https://en.wikipedia.org/wiki/Kahan_summation_algorithm
In particular, simply summing n numbers in sequence has a worst-case error that grows proportional to n, and a root mean square error that grows as √n for random inputs (the roundoff errors form a random walk).[2] With compensated summation, the worst-case error bound is independent of n, so a large number of values can be summed with an error that only depends on the floating-point precision.[2]
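
[ed.: a minimal sketch of compensated (Kahan) summation, following the pseudocode on the Wikipedia page:]

#include <vector>

// Kahan summation: carry the low-order bits lost by each addition in a
// separate compensation term, so the worst-case error is O(eps) instead of
// O(eps * n). (Aggressive flags like -ffast-math can legally optimize the
// compensation away, defeating the point.)
double kahan_sum(const std::vector<double>& xs) {
    double sum = 0.0, c = 0.0;    // c accumulates the lost low-order bits
    for (double x : xs) {
        double y = x - c;         // apply the correction from the last step
        double t = sum + y;       // low-order bits of y are lost here...
        c = (t - sum) - y;        // ...and recovered algebraically here
        sum = t;
    }
    return sum;
}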

cf:
https://en.wikipedia.org/wiki/Pairwise_summation
In numerical analysis, pairwise summation, also called cascade summation, is a technique to sum a sequence of finite-precision floating-point numbers that substantially reduces the accumulated round-off error compared to naively accumulating the sum in sequence.[1] Although there are other techniques such as Kahan summation that typically have even smaller round-off errors, pairwise summation is nearly as good (differing only by a logarithmic factor) while having much lower computational cost—it can be implemented so as to have nearly the same cost (and exactly the same number of arithmetic operations) as naive summation.

In particular, pairwise summation of a sequence of n numbers x_n works by recursively breaking the sequence into two halves, summing each half, and adding the two sums: a divide and conquer algorithm. Its worst-case roundoff errors grow asymptotically as at most O(ε log n), where ε is the machine precision (assuming a fixed condition number, as discussed below).[1] In comparison, the naive technique of accumulating the sum in sequence (adding each x_i one at a time for i = 1, ..., n) has roundoff errors that grow at worst as O(εn).[1] Kahan summation has a worst-case error of roughly O(ε), independent of n, but requires several times more arithmetic operations.[1] If the roundoff errors are random, and in particular have random signs, then they form a random walk and the error growth is reduced to an average of O(ε √(log n)) for pairwise summation.[2]

A very similar recursive structure of summation is found in many fast Fourier transform (FFT) algorithms, and is responsible for the same slow roundoff accumulation of those FFTs.[2][3]

https://eng.libretexts.org/Bookshelves/Electrical_Engineering/Book%3A_Fast_Fourier_Transforms_(Burrus)/10%3A_Implementing_FFTs_in_Practice/10.8%3A_Numerical_Accuracy_in_FFTs
However, these encouraging error-growth rates only apply if the trigonometric “twiddle” factors in the FFT algorithm are computed very accurately. Many FFT implementations, including FFTW and common manufacturer-optimized libraries, therefore use precomputed tables of twiddle factors calculated by means of standard library functions (which compute trigonometric constants to roughly machine precision). The other common method to compute twiddle factors is to use a trigonometric recurrence formula—this saves memory (and cache), but almost all recurrences have errors that grow as O(√n), O(n) or even O(n²), which lead to corresponding errors in the FFT.

...

There are, in fact, trigonometric recurrences with the same logarithmic error growth as the FFT, but these seem more difficult to implement efficiently; they require that a table of Θ(log n) values be stored and updated as the recurrence progresses. Instead, in order to gain at least some of the benefits of a trigonometric recurrence (reduced memory pressure at the expense of more arithmetic), FFTW includes several ways to compute a much smaller twiddle table, from which the desired entries can be computed accurately on the fly using a bounded number (usually < 3) of complex multiplications. For example, instead of a twiddle table with n entries ω_n^k, FFTW can use two tables with Θ(√n) entries each, so that ω_n^k is computed by multiplying an entry in one table (indexed with the low-order bits of k) by an entry in the other table (indexed with the high-order bits of k).

[ed.: Nicholas Higham's "Accuracy and Stability of Numerical Algorithms" seems like a good reference for this kind of analysis.]
nibble  pdf  papers  programming  systems  numerics  nitty-gritty  intricacy  approximation  accuracy  types  sci-comp  multi  q-n-a  stackex  hmm  oly-programming  accretion  formal-methods  yak-shaving  wiki  reference  algorithms  yoga  ground-up  divide-and-conquer  fourier  books  tidbits  chart  caltech  nostalgia 
7 weeks ago by nhaliday
c - Aligning to cache line and knowing the cache line size - Stack Overflow
To know the sizes, you need to look them up in the documentation for the processor; afaik there is no programmatic way to do it. On the plus side, most cache lines are of a standard size, based on Intel's standards. On x86 cache lines are 64 bytes; however, to prevent false sharing, you need to follow the guidelines of the processor you are targeting (Intel has some special notes on its NetBurst-based processors). Generally you need to align to 64 bytes for this (Intel states that you should also avoid crossing 16-byte boundaries).

To do this in C or C++ requires that you use the standard aligned_alloc function or one of the compiler-specific specifiers such as __attribute__((aligned(64))) or __declspec(align(64)). To pad between members in a struct so as to split them onto different cache lines, you need to insert a member big enough to align the next one to the next 64-byte boundary.

...

sysctl hw.cachelinesize
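
[ed.: in C++11 and later, alignas is the portable spelling; a sketch (mine) of keeping two hot counters on separate cache lines to avoid false sharing. The hard-coded 64 is the usual x86 line size per the answer; check your target:]

#include <atomic>

// Two counters hammered by different threads: unaligned, they could share a
// cache line and ping-pong between cores. C++17 also offers
// std::hardware_destructive_interference_size in <new>, where implemented.
struct Counters {
    alignas(64) std::atomic<long> a{0};
    alignas(64) std::atomic<long> b{0};
};
static_assert(alignof(Counters) == 64, "each member starts its own 64-byte line");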
q-n-a  stackex  trivia  systems  programming  c(pp)  assembly  howto  caching 
7 weeks ago by nhaliday
c++ - What is the difference between #include <filename> and #include "filename"? - Stack Overflow
In practice, the difference is in the location where the preprocessor searches for the included file.

For #include <filename> the preprocessor searches in an implementation dependent manner, normally in search directories pre-designated by the compiler/IDE. This method is normally used to include standard library header files.

For #include "filename" the preprocessor searches first in the same directory as the file containing the directive, and then follows the search path used for the #include <filename> form. This method is normally used to include programmer-defined header files.
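
[ed.: concretely, with a hypothetical layout:]

// src/widget.cpp
#include <vector>      // searched only in the compiler/system include dirs
#include "widget.h"    // searched first next to widget.cpp, then falls back
                       // to the <...> search path

Both search lists can be extended with -I on gcc/clang (/I on MSVC).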
q-n-a  stackex  programming  c(pp)  trivia  pls 
7 weeks ago by nhaliday
python - Does pandas iterrows have performance issues? - Stack Overflow
Generally, iterrows should only be used in very very specific cases. This is the general order of precedence for performance of various operations:

1) vectorization
2) using a custom cython routine
3) apply
a) reductions that can be performed in cython
b) iteration in python space
4) itertuples
5) iterrows
6) updating an empty frame (e.g. using loc one-row-at-a-time)
q-n-a  stackex  programming  python  libraries  gotchas  data-science  sci-comp  performance  checklists  objektbuch  best-practices 
7 weeks ago by nhaliday
c++ - Why is the code in most STL implementations so convoluted? - Stack Overflow
Similar questions have been posed previously:

Is there a readable implementation of the STL

Why STL implementation is so unreadable? How C++ could have been improved here?

--

Neil Butterworth, now listed as "anon", provided a useful link in his answer to the SO question "Is there a readable implementation of the STL?". Quoting his answer there:

There is a book The C++ Standard Template Library, co-authored by the original STL designers Stepanov & Lee (together with P.J. Plauger and David Musser), which describes a possible implementation, complete with code - see http://www.amazon.co.uk/C-Standard-Template-Library/dp/0134376331.

See also the other answers in that thread.

Anyway, most of the STL code (by STL I here mean the STL-like subset of the C++ standard library) is template code, and as such must be header-only, and since it's used in almost every program it pays to have that code as short as possible.

Thus, the natural trade-off point between conciseness and readability is much farther over on the conciseness end of the scale than with "normal" code.

--

About the variable names: library implementors must use "crazy" naming conventions, such as names starting with an underscore followed by an uppercase letter, because such names are reserved for them. They cannot use "normal" names, because those may have been redefined by a user macro.

Section 17.6.3.3.2 "Global names" §1 states:

Certain sets of names and function signatures are always reserved to the implementation:

Each name that contains a double underscore or begins with an underscore followed by an uppercase letter is reserved to the implementation for any use.

Each name that begins with an underscore is reserved to the implementation for use as a name in the global namespace.

(Note that these rules forbid header guards like __MY_FILE_H which I have seen quite often.)
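
[ed.: the failure mode being guarded against, in two lines; sketch is mine:]

#define n 42           // a user macro; legal, since `n` is not a reserved name
#include <algorithm>   // any header parameter spelled plain `n` would now
                       // expand to 42 and break; `__n`-style names are safe,
                       // since users may not define macros with reserved names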

--

Implementations vary. libc++ for example, is much easier on the eyes. There's still a bit of underscore noise though. As others have noted, the leading underscores are unfortunately required. Here's the same function in libc++:
q-n-a  stackex  programming  engineering  best-practices  c(pp)  systems  pls  nitty-gritty  libraries 
8 weeks ago by nhaliday
Dump include paths from g++ - Stack Overflow
# dump the default #include search paths (verbose preprocessor run on empty input)
g++ -E -x c++ - -v < /dev/null
clang++ -E -x c++ - -v < /dev/null
q-n-a  stackex  trivia  howto  programming  c(pp)  debugging 
8 weeks ago by nhaliday
c++ - Debugging template instantiations - Stack Overflow
Metashell is still in active development though: github.com/metashell/metashell
q-n-a  stackex  nitty-gritty  pls  types  c(pp)  debugging  devtools  tools  programming  howto  advice  checklists  multi  repo  github  wire-guided 
8 weeks ago by nhaliday
What makes Java easier to parse than C? - Stack Overflow
Parsing C++ is getting hard. Parsing Java is getting to be just as hard

cf the Linked questions too, lotsa good stuff
q-n-a  stackex  compilers  pls  plt  jvm  c(pp)  intricacy  syntax  automata-languages  cost-benefit  incentives  legacy 
8 weeks ago by nhaliday
Do you use source control for your database items? - Stack Overflow
Top 2 answers contradict each other but both agree that you should at least version the schema and other scripts.

My impression is that the guy linked in the accepted answer is arguing for a minority practice.
q-n-a  stackex  programming  engineering  dbs  vcs  gotchas  hmm  idk  init  nitty-gritty  debate  contrarianism  best-practices  rhetoric  links  advice  system-design 
8 weeks ago by nhaliday
What’s In A Name? Understanding Classical Music Titles | Parker Symphony Orchestra
Composition Type:
Symphony, sonata, piano quintet, concerto – these are all composition types. Classical music composers wrote works in many of these forms and often the same composer wrote multiple pieces in the same type. This is why saying you enjoy listening to “the Serenade” or “the Concerto” or “the Mazurka” is confusing. Even using the composer name often does not narrow down which piece you are referring to. For example, it is not enough to say “Beethoven Symphony”. He wrote 9 of them!

Generic Name:
Compositions often have a generic name that can describe the work’s composition type, key signature, featured instruments, etc. This could be something as simple as Symphony No. 2 (meaning the 2nd symphony written by that composer), Minuet in G major (minuet being a type of dance), or Concerto for Two Cellos (an orchestral work featuring two cellos as soloists). The problem with referring to a piece by the generic name, even along with the composer, is that, again, it may not be enough to identify the exact work. While Symphony No. 2 by Mahler is sufficient since it is his only 2nd symphony, Minuet by Bach is not since he wrote many minuets over his lifetime.

Non-Generic Names:
Non-generic names, or classical music nicknames and sub-titles, are often more well-known than generic names. They can even be so famous that the composer name is not necessary to clarify which piece you are referring to. Eine Kleine Nachtmusik, the Trout Quintet, and the Surprise Symphony are all examples of non-generic names.

Who gave classical music works their non-generic names? Sometimes the composer added a subsidiary name to a work. These are called sub-titles and are considered part of the work’s formal title. The sub-title for Tchaikovsky’s Symphony No. 6 in B minor is “Pathetique”.

A nickname, on the other hand, is not part of the official title and was not assigned by the composer. It is a name that has become associated with a work. For example, Bach’s “Six Concerts à plusieurs instruments” are commonly known as the Brandenburg Concertos because they were presented as a gift to the Margrave of Brandenburg. The name was given by Bach’s biographer, Philipp Spitta, and it stuck. Mozart’s Symphony No. 41 earned the nickname Jupiter most likely because of its exuberant energy and grand scale. Schubert’s Symphony No. 8 is known as the Unfinished Symphony because he died and left it with only 2 complete movements.

In many cases, referring to a work by its non-generic name, especially with the composer name, is enough to identify a piece. Most classical music fans know which work you are referring to when you say “Beethoven’s Eroica Symphony”.

Non-Numeric Titles:
Some classical compositions do not have a generic name, but rather a non-numeric title. These are formal titles given by the composer that do not follow a sequential numeric naming convention. Works that fall into this category include the Symphonie fantastique by Berlioz, Handel’s Messiah, and Also Sprach Zarathustra by Richard Strauss.

Opus Number:
Opus numbers, abbreviated op., are used to distinguish compositions with similar titles and indicate the chronological order of production. Some composers assigned numbers to their own works, but many were inconsistent in their methods. As a result, some composers’ works are referred to with a catalogue number assigned by musicologists. The various catalogue-number systems commonly used include Köchel-Verzeichnis for Mozart (K) and Bach-Werke-Verzeichnis (BWV).

https://music.stackexchange.com/questions/6688/why-is-the-key-included-in-classical-music-titles
I was always curious why classical composers use names like Étude in E-flat minor (Frédéric Chopin) or Missa in G major (Johann Sebastian Bach). Is this from the scales of these pieces? Were they barred from ever using that scale again? Why didn't they create unique titles?

--

Using a key did not prohibit a composer from using that key again (there are only thirty keys). Using a key did not prohibit them from using the same key on a work with the same form either. Bach wrote over thirty Preludes and Fugues. Four of these were Prelude and Fugue in A minor. They are now differentiated by their own BWV catalog numbers (assigned in 1950). Many pieces did have unique titles, but with the number of pieces the composers composed, unique titles were difficult to come up with. Also, most pieces had no lyrics. It is much easier to come up with a title when there are lyrics. So, they turned to this technique. It was used frequently during the Common Practice Period.

https://fredericksymphony.org/how-are-classical-music-compositions-named/
explanation  music  classical  trivia  duplication  q-n-a  stackex  music-theory  init  notation  multi  jargon 
8 weeks ago by nhaliday
haskell - Using -with-rtsopts ghc option as a pragma - Stack Overflow
When you specify that pragma at the top of the file, this is instead what happens (with ghc --make algo.hs):

ghc -c algo.hs -rtsopts -with-rtsopts=-K32M
ghc -o algo -package somepackage algo.o
The OPTIONS_GHC pragma tells the compiler about options to add when compiling that specific module into an object file. Because -rtsopts is a linker option (it tells GHC to link in a different set of command-line handling stuff), you can't specify it when compiling an object file. You must specify it when linking, and such options cannot be specified in a module header.
q-n-a  stackex  programming  haskell  functional  gotchas  hmm  oly  space-complexity  build-packaging 
8 weeks ago by nhaliday
oop - Functional programming vs Object Oriented programming - Stack Overflow
When you anticipate a different kind of software evolution:
- Object-oriented languages are good when you have a fixed set of operations on things, and as your code evolves, you primarily add new things. This can be accomplished by adding new classes which implement existing methods, and the existing classes are left alone.
- Functional languages are good when you have a fixed set of things, and as your code evolves, you primarily add new operations on existing things. This can be accomplished by adding new functions which compute with existing data types, and the existing functions are left alone.

When evolution goes the wrong way, you have problems:
- Adding a new operation to an object-oriented program may require editing many class definitions to add a new method.
- Adding a new kind of thing to a functional program may require editing many function definitions to add a new case.

This problem has been well known for many years; in 1998, Phil Wadler dubbed it the "expression problem". Although some researchers think that the expression problem can be addressed with such language features as mixins, a widely accepted solution has yet to hit the mainstream.
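
[ed.: a compressed C++17 rendering of the two axes; names and numbers are mine:]

#include <variant>

// OO axis: a new shape is one new class, but a new operation means editing
// every existing class.
struct Shape { virtual ~Shape() = default; virtual double area() const = 0; };
struct Circle : Shape {
    double r = 0;
    double area() const override { return 3.14159 * r * r; }
};
// struct Square : Shape { ... };   // easy: new thing, existing ops untouched

// FP axis (sum type + functions): a new operation is one new function, but a
// new alternative means editing every existing function.
struct C { double r; };
struct S { double side; };
using ShapeV = std::variant<C, S>;

double area(const ShapeV& sh) {
    return std::holds_alternative<C>(sh)
        ? 3.14159 * std::get<C>(sh).r * std::get<C>(sh).r
        : std::get<S>(sh).side * std::get<S>(sh).side;
}
// double perimeter(const ShapeV&);  // easy: new op, existing types untouched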

What are the typical problem definitions where functional programming is a better choice?

Functional languages excel at manipulating symbolic data in tree form. A favorite example is compilers, where source and intermediate languages change seldom (mostly the same things), but compiler writers are always adding new translations and code improvements or optimizations (new operations on things). Compilation and translation more generally are "killer apps" for functional languages.
q-n-a  stackex  programming  engineering  nitty-gritty  comparison  best-practices  cost-benefit  functional  data-structures  arrows  flux-stasis  atoms  compilers  examples  pls  plt  oop  types 
8 weeks ago by nhaliday
algorithm - Skip List vs. Binary Search Tree - Stack Overflow
Skip lists are more amenable to concurrent access/modification. Herb Sutter wrote an article about data structures in concurrent environments. It has more in-depth information.

The most frequently used implementation of a binary search tree is a red-black tree. The concurrency problems come in when the tree is modified, since it often needs to rebalance. The rebalance operation can affect large portions of the tree, which would require a mutex lock on many of the tree nodes. Inserting a node into a skip list is far more localized: only nodes directly linked to the affected node need to be locked.
q-n-a  stackex  nibble  programming  tcs  data-structures  performance  concurrency  comparison  cost-benefit  applicability-prereqs  random  trees  tradeoffs 
8 weeks ago by nhaliday
c++ - Pointer to class data member "::*" - Stack Overflow
First encountered in emil-e/rapidcheck (gen::set).

Is this checked statically? That is, does the compiler allow me to pass an arbitrary value, or does it check that every passed pointer to member pFooMember is created using &T::fooMember? I think it's feasible to do that?
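
[ed.: the type is checked statically (any `int Foo::*` value is accepted, but nothing else); a minimal demo, names mine:]

#include <iostream>

struct Foo { int x = 1; int y = 2; };

// `int Foo::*` means "pointer to an int member of Foo"; which member is
// chosen at runtime, but only int members of Foo can be bound to it.
void print_member(const Foo& f, int Foo::*pm) { std::cout << f.*pm << "\n"; }

int main() {
    print_member(Foo{}, &Foo::x);    // prints 1
    print_member(Foo{}, &Foo::y);    // prints 2
    // print_member(Foo{}, &Bar::z); // would not compile: wrong class/type
}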
q-n-a  stackex  programming  pls  c(pp)  gotchas  weird  trivia  hmm  explanation  types  oop 
9 weeks ago by nhaliday
language design - Why does C++ need a separate header file? - Stack Overflow
C++ does it that way because C did it that way, so the real question is why did C do it that way? Wikipedia speaks a little to this.

Newer compiled languages (such as Java, C#) do not use forward declarations; identifiers are recognized automatically from source files and read directly from dynamic library symbols. This means header files are not needed.
q-n-a  stackex  programming  pls  c(pp)  compilers  trivia  roots  yak-shaving  flux-stasis  comparison  jvm 
9 weeks ago by nhaliday
c++ - Why are forward declarations necessary? - Stack Overflow
C++, while created almost 17 years later, was defined as a superset of C, and therefore had to use the same mechanism.

By the time Java rolled around in 1995, average computers had enough memory that holding a symbolic table, even for a complex project, was no longer a substantial burden. And Java wasn't designed to be backwards-compatible with C, so it had no need to adopt a legacy mechanism. C# was similarly unencumbered.

As a result, their designers chose to shift the burden of compartmentalizing symbolic declaration back off the programmer and put it on the computer again, since its cost in proportion to the total effort of compilation was minimal.
q-n-a  stackex  programming  pls  c(pp)  trivia  yak-shaving  roots  compilers  flux-stasis  comparison  jvm 
9 weeks ago by nhaliday
When to use C over C++, and C++ over C? - Software Engineering Stack Exchange
You pick C when
- you need portable assembler (which is what C is, really) for whatever reason,
- your platform doesn't provide C++ (a C compiler is much easier to implement),
- you need to interact with other languages that can only interact with C (usually the lowest common denominator on any platform) and your code consists of little more than the interface, not making it worthwhile to lay a C interface over C++ code,
- you hack in an Open Source project (many of which, for various reasons, stick to C),
- you don't know C++.
In all other cases you should pick C++.

--

At the same time, I have to say that @Toll's answers (for one obvious example) have things just about backwards in most respects. Reasonably written C++ will generally be at least as fast as C, and often at least a little faster. Readability is generally much better, if only because you don't get buried in an avalanche of all the code for even the most trivial algorithms and data structures, all the error handling, etc.

...

As it happens, C and C++ are fairly frequently used together on the same projects, maintained by the same people. This allows something that's otherwise quite rare: a study that directly, objectively compares the maintainability of code written in the two languages by people who are equally competent overall (i.e., the exact same people). At least in the linked study, one conclusion was clear and unambiguous: "We found that using C++ instead of C results in improved software quality and reduced maintenance effort..."

--

(Side-note: Check out Linus Torvalds' rant on why he prefers C to C++. I don't necessarily agree with his points, but it gives you insight into why people might choose C over C++. Rather, people that agree with him might choose C for these reasons.)

http://harmful.cat-v.org/software/c++/linus

Why would anybody use C over C++? [closed]: https://stackoverflow.com/questions/497786/why-would-anybody-use-c-over-c
Joel's answer is good for reasons you might have to use C, though there are a few others:
- You must meet industry guidelines, which are easier to prove and test for in C.
- You have tools to work with C, but not C++ (think not just about the compiler, but all the support tools, coverage, analysis, etc)
- Your target developers are C gurus
- You're writing drivers, kernels, or other low level code
- You know the C++ compiler isn't good at optimizing the kind of code you need to write
- Your app not only doesn't lend itself to be object oriented, but would be harder to write in that form

In some cases, though, you might want to use C rather than C++:
- You want the performance of assembler without the trouble of coding in assembler (C++ is, in theory, capable of 'perfect' performance, but the compilers aren't as good at seeing optimizations a good C programmer will see)
- The software you're writing is trivial, or nearly so - whip out the tiny C compiler, write a few lines of code, compile and you're all set - no need to open a huge editor with helpers, no need to write practically empty and useless classes, deal with namespaces, etc. You can do nearly the same thing with a C++ compiler and simply use the C subset, but the C++ compiler is slower, even for tiny programs.
- You need extreme performance or small code size, and know the C++ compiler will actually make it harder to accomplish due to the size and performance of the libraries
- You contend that you could just use the C subset and compile with a C++ compiler, but you'll find that if you do that you'll get slightly different results depending on the compiler.

Regardless, if you're doing that, you're using C. Is your question really "Why don't C programmers use C++ compilers?" If it is, then you either don't understand the language differences, or you don't understand compiler theory.

--

- Because they already know C
- Because they're building an embedded app for a platform that only has a C compiler
- Because they're maintaining legacy software written in C
- You're writing something on the level of an operating system, a relational database engine, or a retail 3D video game engine.
q-n-a  stackex  programming  engineering  pls  best-practices  impetus  checklists  c(pp)  systems  assembly  compilers  hardware  embedded  oss  links  study  evidence-based  devtools  performance  rant  expert-experience  types  blowhards  linux  git  vcs  debate  rhetoric  worse-is-better/the-right-thing  cracker-prog 
9 weeks ago by nhaliday
architecture - What is the most effective way to add functionality to unfamiliar, structurally unsound code? - Software Engineering Stack Exchange
If the required changes are small then follow the original coding style; that way someone picking up the code after you only needs to get used to one set of idiosyncrasies.

If the required changes are large and the changes are concentrated in a few functions or modules, then, take the opportunity to refactor these modules and clean up the code.

Above all, do not refactor working code which has nothing to do with the immediate change request. It takes too much time, it introduces bugs, and you may inadvertently stamp on a business rule that has taken years to perfect. Your boss will hate you for being so slow to deliver small changes, and your users will hate you for crashing a system that ran for years without problems.

--

Rule 1: the more skilled the developers who wrote the code, the more you should lean toward refactoring rather than rewriting from scratch.

Rule 2: the larger the project, the more you should lean toward refactoring rather than rewriting from scratch.
q-n-a  stackex  programming  engineering  best-practices  flux-stasis  retrofit  code-dive  working-stiff  advice 
9 weeks ago by nhaliday
Why is reverse debugging rarely used? - Software Engineering Stack Exchange
(time travel)

For one, running in debug mode with recording on is very expensive compared to even normal debug mode; it also consumes a lot more memory.

It is easier to decrease the granularity from line level to function call level. For example, the standard debugger in eclipse allows you to "drop to frame," which is essentially a jump back to the start of the function with a reset of all the parameters (nothing done on the heap is reverted, and finally blocks are not executed, so it is not a true reverse debugger; be careful about that).

Note that this has been available for several years now and works hand in hand with hot-code replacement.
--
As mentioned already, performance is key e.g. with gdb's reversible debugging, running something like gzip sees a slowdown of 50,000x compared to running natively. There are commercial alternatives however: I work for Undo undo.io, and our UndoDB product does the same but with a slowdown of less than 2x. There are other commercial reversible debuggers available too.

https://undo.io
Based on GDB, UndoDB supports source-level debugging for applications written in any language supported by GDB, including C/C++, Rust and Ada.
q-n-a  stackex  programming  engineering  impetus  debugging  time  increase-decrease  worrydream  hci  devtools  direction  roots  money-for-time  review  comparison  critique  tools  software  multi  systems  c(pp)  rust  state 
11 weeks ago by nhaliday
unix - How can I profile C++ code running on Linux? - Stack Overflow
If your goal is to use a profiler, use one of the suggested ones.

However, if you're in a hurry and you can manually interrupt your program under the debugger while it's being subjectively slow, there's a simple way to find performance problems.

Just halt it several times, and each time look at the call stack. If there is some code that is wasting some percentage of the time, 20% or 50% or whatever, that is the probability that you will catch it in the act on each sample. So that is roughly the percentage of samples on which you will see it. There is no educated guesswork required. If you do have a guess as to what the problem is, this will prove or disprove it.

You may have multiple performance problems of different sizes. If you clean out any one of them, the remaining ones will take a larger percentage, and be easier to spot, on subsequent passes. This magnification effect, when compounded over multiple problems, can lead to truly massive speedup factors.

Caveat: Programmers tend to be skeptical of this technique unless they've used it themselves. They will say that profilers give you this information, but that is only true if they sample the entire call stack, and then let you examine a random set of samples. (The summaries are where the insight is lost.) Call graphs don't give you the same information, because

- they don't summarize at the instruction level, and
- they give confusing summaries in the presence of recursion.
They will also say it only works on toy programs, when actually it works on any program, and it seems to work better on bigger programs, because they tend to have more problems to find. They will say it sometimes finds things that aren't problems, but that is only true if you see something once. If you see a problem on more than one sample, it is real.
q-n-a  stackex  programming  engineering  performance  devtools  tools  advice  checklists  hacker  nitty-gritty  tricks  lol 
11 weeks ago by nhaliday
usa  ux  vampire-squid  variance-components  vcs  virtualization  visual-understanding  visualization  visuo  volo-avolo  war  web  webapp  weird  west-hunter  whole-partial-many  wiki  wire-guided  within-without  workflow  working-stiff  world  world-war  worrydream  worse-is-better/the-right-thing  writing  yak-shaving  yoga  🌞  🔬  🖥 
