nhaliday + tools   484

Amazon Products Visualization - YASIV
based on a single test run, this works really well, at least for popular books (all I was interested in at the time)
tools  search  recommendations  consumerism  books  aggregator  exploratory  let-me-see  network-structure  amazon  similarity  graphs  visualization 
8 days ago by nhaliday
Skim / Feature Requests / #138 iphone/ebook support
Skim notes could never work on the iPhone, because Skim notes data depend on AppKit, which is not available in iOS. So any app for iOS would just be some completely separate PDF app that has nothing to do with Skim in particular.
tracker  app  pdf  software  tools  ios  mobile  osx  desktop  workflow  scholar  meta:reading  todo 
16 days ago by nhaliday
An Efficiency Comparison of Document Preparation Systems Used in Academic Research and Development
The choice of an efficient document preparation system is an important decision for any academic researcher. To assist the research community, we report a software usability study in which 40 researchers across different disciplines prepared scholarly texts with either Microsoft Word or LaTeX. The probe texts included simple continuous text, text with tables and subheadings, and complex text with several mathematical equations. We show that LaTeX users were slower than Word users, wrote less text in the same amount of time, and produced more typesetting, orthographical, grammatical, and formatting errors. On most measures, expert LaTeX users performed even worse than novice Word users. LaTeX users, however, more often report enjoying using their respective software. We conclude that even experienced LaTeX users may suffer a loss in productivity when LaTeX is used, relative to other document preparation systems. Individuals, institutions, and journals should carefully consider the ramifications of this finding when choosing document preparation strategies, or requiring them of authors.


However, our study suggests that LaTeX should be used as a document preparation system only in cases in which a document is heavily loaded with mathematical equations. For all other types of documents, our results suggest that LaTeX reduces the user’s productivity and results in more orthographical, grammatical, and formatting errors, more typos, and less written text than Microsoft Word over the same duration of time. LaTeX users may argue that the overall quality of the text that is created with LaTeX is better than the text that is created with Microsoft Word. Although this argument may be true, the differences between text produced in more recent editions of Microsoft Word and text produced in LaTeX may be less obvious than it was in the past. Moreover, we believe that the appearance of text matters less than the scientific content and impact to the field. In particular, LaTeX is also used frequently for text that does not contain a significant amount of mathematical symbols and formula. We believe that the use of LaTeX under these circumstances is highly problematic and that researchers should reflect on the criteria that drive their preferences to use LaTeX over Microsoft Word for text that does not require significant mathematical representations.


A second decision criterion that factors into the choice to use a particular software system is reflection about what drives certain preferences. A striking result of our study is that LaTeX users are highly satisfied with their system despite reduced usability and productivity. From a psychological perspective, this finding may be related to motivational factors, i.e., the driving forces that compel or reinforce individuals to act in a certain way to achieve a desired goal. A vital motivational factor is the tendency to reduce cognitive dissonance. According to the theory of cognitive dissonance, each individual has a motivational drive to seek consonance between their beliefs and their actual actions. If a belief set does not concur with the individual’s actual behavior, then it is usually easier to change the belief rather than the behavior [6]. The results from many psychological studies in which people have been asked to choose between one of two items (e.g., products, objects, gifts, etc.) and then asked to rate the desirability, value, attractiveness, or usefulness of their choice, report that participants often reduce unpleasant feelings of cognitive dissonance by rationalizing the chosen alternative as more desirable than the unchosen alternative [6, 7]. This bias is usually unconscious and becomes stronger as the effort to reject the chosen alternative increases, which is similar in nature to the case of learning and using LaTeX.


Given these numbers it remains an open question to determine the amount of taxpayer money that is spent worldwide for researchers to use LaTeX over a more efficient document preparation system, which would free up their time to advance their respective field. Some publishers may save a significant amount of money by requesting or allowing LaTeX submissions because a well-formed LaTeX document complying with a well-designed class file (template) is much easier to bring into their publication workflow. However, this is at the expense of the researchers’ labor time and effort. We therefore suggest that leading scientific journals should consider accepting submissions in LaTeX only if this is justified by the level of mathematics presented in the paper. In all other cases, we think that scholarly journals should request authors to submit their documents in Word or PDF format. We believe that this would be a good policy for two reasons. First, we think that the appearance of the text is secondary to the scientific merit of an article and its impact to the field. And, second, preventing researchers from producing documents in LaTeX would save time and money to maximize the benefit of research and development for both the research team and the public.

[ed.: I sense some salt.

And basically no description of how "# errors" was calculated.]

I question the validity of their methodology.
At no point in the paper is exactly what is meant by a "formatting error" or a "typesetting error" defined. From what I gather, the participants in the study were required to reproduce the formatting and layout of the sample text. In theory, a LaTeX file should strictly be a semantic representation of the content of the document; while TeX may have been a raw typesetting language, this is most definitely not the intended use case of LaTeX and is overall a very poor test of its relative advantages and capabilities.
The separation of the semantic definition of the content from the rendering of the document is, in my opinion, the most important feature of LaTeX. Like CSS, this allows the actual formatting to be abstracted away, allowing plain (marked-up) content to be written without worrying about typesetting.
Word has some similar capabilities with styles, and can be used in a similar manner, though few Word users actually use the software properly. This may sound like a relatively insignificant point, but in practice, almost every Word document I have seen has some form of inconsistent formatting. If Word disallowed local formatting changes (including things such as relative spacing of nested bullet points), forcing all formatting changes to be done in document-global styles, it would be a far better typesetting system. Also, the users would be very unhappy.
Yes, LaTeX can undeniably be a pain in the arse, especially when it comes to trying to get figures in the right place; however, the advantages of combining a simple, semantic plain-text representation with a flexible, professional typesetting and rendering engine are undeniable and completely unaddressed by this study.
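To make the semantic-markup point concrete, a minimal sketch (a hypothetical document, not from the study):

```latex
\documentclass{article} % all visual decisions live in the class file

\begin{document}
\section{Results} % semantic: ``a section,'' not ``14pt bold''
We \emph{emphasize} by role, not by font; the class file decides how
emphasis, sections, and list spacing are actually rendered.
\begin{itemize}
  \item Swapping the class file restyles the whole document
        without touching the content.
\end{itemize}
\end{document}
```

This is the CSS-like separation the comment describes: content in the source, presentation in the class.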
It seems that the test was heavily biased in favor of WYSIWYG.
Of course that approach makes it very simple to reproduce something, as has been tested here. Even simpler would be to scan the document and run OCR. The massive problem with both approaches (WYSIWYG and scanning) is that you can't generalize any of it. You're doomed to repeat it forever.
(I'll also note the other significant issue with this study: when the ratings provided by participants came out opposite of their test results, they attributed it to irrational bias.)

Over the past few years however, the line between the tools has blurred. In 2017, Microsoft made it possible to use LaTeX’s equation-writing syntax directly in Word, and last year it scrapped Word’s own equation editor. Other text editors also support elements of LaTeX, allowing newcomers to use as much or as little of the language as they like.

study  hmm  academia  writing  publishing  yak-shaving  technical-writing  software  tools  comparison  latex  scholar  regularizer  idk  microsoft  evidence-based  science  desktop  time  efficiency  multi  hn  commentary  critique  news  org:sci  flux-stasis  duplication  metrics  biases 
28 days ago by nhaliday
The End of the Editor Wars » Linux Magazine
Moreover, even if you assume a broad margin of error, the polls aren't even close. With all the various text editors available today, Vi and Vim continue to be the choice of over a third of users, while Emacs is well back in the pack, no longer a competitor for the title of most popular text editor.

I believe Vim is actually more popular, but it's hard to find any real data on it. The best source I've seen is the annual StackOverflow developer survey where 15.2% of developers used Vim compared to a mere 3.2% for Emacs.

Oddly enough, the report noted that "Data scientists and machine learning developers are about 3 times more likely to use Emacs than any other type of developer," which is not necessarily what I would have expected.

[ed. NB: Vim still dominates overall.]


Time To End The vi/Emacs Debate: https://cacm.acm.org/blogs/blog-cacm/226034-time-to-end-the-vi-emacs-debate/fulltext

Vim, Emacs and their forever war. Does it even matter any more?: https://blog.sourcerer.io/vim-emacs-and-their-forever-war-does-it-even-matter-any-more-697b1322d510
Like an episode of “Silicon Valley”, a discussion of Emacs vs. Vim used to have a polarizing effect that would guarantee a stimulating conversation, regardless of an engineer’s actual alignment. But nowadays, diehard Emacs and Vim users are getting much harder to find. Maybe I’m in the wrong orbit, but looking around today, I see that engineers are equally or even more likely to choose any one of a number of great (for any given definition of ‘great’) modern editors or IDEs such as Sublime Text, Visual Studio Code, Atom, IntelliJ (… or one of its siblings), Brackets, Visual Studio or Xcode, to name a few. It’s not surprising really — many top engineers weren’t even born when these editors were at version 1.0, and GUIs (for better or worse) hadn’t been invented.


… both forums have high traffic and up-to-the-minute comment and discussion threads. Some of the available statistics paint a reasonably healthy picture — Stack Overflow’s 2016 developer survey ranks Vim 4th out of 24, with 26.1% of respondents in the development-environments category claiming to use it. Emacs came 15th with 5.2%. In combination, over 30% is actually quite impressive considering they’ve been around for several decades.

What’s odd, however, is that if you ask someone — say, a random developer — to express a preference, the likelihood is that they will favor one or the other even if they have used neither in anger. Maybe the meme has spread so widely that all responses are now predominantly ritualistic, and represent something more fundamental than peoples’ mere preference for an editor? There’s a rather obvious political hypothesis waiting to be made — that Emacs is the leftist, socialist, centralized state, while Vim represents the right and the free market, specialization and capitalism red in tooth and claw.

How is Emacs/Vim used in companies like Google, Facebook, or Quora? Are there any libraries or tools they share in public?: https://www.quora.com/How-is-Emacs-Vim-used-in-companies-like-Google-Facebook-or-Quora-Are-there-any-libraries-or-tools-they-share-in-public
In Google there's a fair amount of vim and emacs. I would say at least every other engineer uses one or another.

Among Software Engineers, emacs seems to be more popular, about 2:1. Among Site Reliability Engineers, vim is more popular, about 9:1.
People use both at Facebook, with (in my opinion) slightly better tooling for Emacs than Vim. We share a master.emacs and master.vimrc file, which contains the bare essentials (like syntactic highlighting for the Hack language). We also share a Ctags file that's updated nightly with a cron script.

Beyond the essentials, there's a group for Emacs users at Facebook that provides tips, tricks, and major modes created by people at Facebook. That's where Adam Hupp first developed his excellent mural-mode (ahupp/mural), which does for Ctags what ido did for file finding and buffer switching.
For emacs, it was very informal at Google. There wasn't a huge community of Emacs users at Google, so there wasn't much more than a wiki and a couple language styles matching Google's style guides.


And it is still that. It’s just that emacs is no longer unique, and neither is Lisp.

Dynamically typed scripting languages with garbage collection are a dime a dozen now. Anybody in their right mind developing an extensible text editor today would just use python, ruby, lua, or JavaScript as the extension language and get all the power of Lisp combined with vibrant user communities and millions of lines of ready-made libraries that Stallman and Steele could only dream of in the 70s.

In fact, in many ways emacs and elisp have fallen behind: 40 years after Lambda, the Ultimate Imperative, elisp is still dynamically scoped, and it still doesn’t support multithreading — when I try to use dired to list the files on a slow NFS mount, the entire editor hangs just as thoroughly as it might have in the 1980s. And when I say “doesn’t support multithreading,” I don’t mean there is some other clever trick for continuing to do work while waiting on a system call, like asynchronous callbacks or something. There’s start-process which forks a whole new process, and that’s about it. It’s a concurrency model straight out of 1980s UNIX land.

But being essentially just a decent text editor has robbed emacs of much of its competitive advantage. In a world where every developer tool is scriptable with languages and libraries an order of magnitude more powerful than cranky old elisp, the reason to use emacs is not that it lets a programmer hit a button and evaluate the current expression interactively (which must have been absolutely amazing at one point in the past).


more general comparison, not just popularity:
Differences between Emacs and Vim: https://stackoverflow.com/questions/1430164/differences-between-Emacs-and-vim


Technical Interview Performance by Editor/OS/Language: https://triplebyte.com/blog/technical-interview-performance-by-editor-os-language
[ed.: I'm guessing this is confounded to all hell.]

The #1 most common editor we see used in interviews is Sublime Text, with Vim close behind.

Emacs represents a fairly small market share today at just about a quarter the userbase of Vim in our interviews. This nicely matches the 4:1 ratio of Google Search Trends for the two editors.


Vim takes the prize here, but PyCharm and Emacs are close behind. We’ve found that users of these editors tend to pass our interview at an above-average rate.

On the other end of the spectrum is Eclipse: it appears that someone using either Vim or Emacs is more than twice as likely to pass our technical interview as an Eclipse user.


In this case, we find that the average Ruby, Swift, and C# users tend to be stronger, with Python and JavaScript users in the middle of the pack.


Here’s what happens after we select engineers to work with and send them to onsites:

[Python does best.]

There are no wild outliers here, but let’s look at the C++ segment. While C++ programmers have the most challenging time passing Triplebyte’s technical interview on average, the ones we choose to work with tend to have a relatively easier time getting offers at each onsite.

The Rise of Microsoft Visual Studio Code: https://triplebyte.com/blog/editor-report-the-rise-of-visual-studio-code
This chart shows the rates at which each editor's users pass our interview compared to the mean pass rate for all candidates. First, notice the preeminence of Emacs and Vim! Engineers who use these editors pass our interview at significantly higher rates than other engineers. And the effect size is not small. Emacs users pass our interview at a rate 50% higher than other engineers. What could explain this phenomenon? One possible explanation is that Vim and Emacs are old school. You might expect their users to have more experience and, thus, to do better. However, notice that VS Code is the third best editor—and it is brand new. This undercuts that narrative a bit (and makes VS Code look even more dominant).

Do Emacs and Vim users have some other characteristic that makes them more likely to succeed during interviews? Perhaps they tend to be more willing to invest time and effort customizing a complex editor in the short-term in order to get returns from a more powerful tool in the long-term?


Java and C# do have relatively low pass rates, although notice that Eclipse has a lower pass rate than Java (-21.4% vs. -16.7%), so we cannot fully explain its poor performance as Java dragging it down.

Also, what's going on with Go? Go programmers are great! To dig deeper into these questions, I looked at editor usage by language:


Another finding from this chart is the difference between VS Code and Sublime. VS Code is primarily used for JavaScript development (61%) but less frequently for Python development (22%). With Sublime, the numbers are basically reversed (51% Python and 30% JavaScript). It's interesting that VS Code users pass interviews at a higher rate than Sublime engineers, even though they predominantly use a language with a lower success rate (JavaScript).

To wrap things up, I sliced the data by experience level and location. Here you can see language usage by experience level:


Then there's editor usage by experience level:


Take all of this with a grain of salt. I want to end by saying that we don't think any of this is causative. That is, I don't recommend that you start using Emacs and Go (or stop using… [more]
news  linux  oss  tech  editors  devtools  tools  comparison  ranking  flux-stasis  trends  ubiquity  unix  increase-decrease  multi  q-n-a  qra  data  poll  stackex  sv  facebook  google  integration-extension  org:med  politics  stereotypes  coalitions  decentralized  left-wing  right-wing  chart  scale  time-series  distribution  top-n  list  discussion  ide  parsimony  intricacy  cost-benefit  tradeoffs  confounding  analysis  crosstab  pls  python  c(pp)  jvm  microsoft  golang  hmm  correlation  debate  critique 
4 weeks ago by nhaliday
Lindy effect - Wikipedia
The Lindy effect is a theory that the future life expectancy of some non-perishable things like a technology or an idea is proportional to their current age, so that every additional period of survival implies a longer remaining life expectancy.[1] Where the Lindy effect applies, mortality rate decreases with time. In contrast, living creatures and mechanical things follow a bathtub curve where, after "childhood", the mortality rate increases with time. Because life expectancy is probabilistically derived, a thing may become extinct before its "expected" survival. In other words, one needs to gauge both the age and "health" of the thing to determine continued survival.
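One standard way to formalize this (assuming Pareto-distributed lifetimes — an assumption of this note, not of the article):

```latex
% Pareto survival: P(T > t) = (t_0 / t)^\alpha for t \ge t_0, \alpha > 1.
% The conditional distribution of T given T > t is again Pareto with
% scale t, so the expected remaining lifetime is
\mathbb{E}[\,T - t \mid T > t\,] = \frac{t}{\alpha - 1},
```

i.e., expected remaining life grows linearly with current age — exactly the Lindy property. The "mortality rate decreases with time" remark corresponds to the Pareto hazard rate $\alpha/t$, which falls as $t$ grows.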
wiki  reference  concept  metabuch  ideas  street-fighting  planning  comparison  time  distribution  flux-stasis  history  measure  correlation  arrows  branches  pro-rata  manifolds  aging  stylized-facts  age-generation  robust  technology  thinking  cost-benefit  conceptual-vocab  methodology  threat-modeling  efficiency  neurons  tools  track-record 
4 weeks ago by nhaliday
algorithm, algorithmic, algorithmicx, algorithm2e, algpseudocode = confused - TeX - LaTeX Stack Exchange
algorithm2e is the only one currently maintained, but the answerer prefers the style of algorithmicx, and after perusing the docs, so do I
q-n-a  stackex  libraries  list  recommendations  comparison  publishing  cs  programming  algorithms  tools 
6 weeks ago by nhaliday
Fossil: Home
VCS w/ builtin issue tracking and wiki used by SQLite
tools  devtools  software  vcs  wiki  debugging  integration-extension  oss  dbs 
7 weeks ago by nhaliday
Frama-C
Frama-C is organized with a plug-in architecture (comparable to that of the Gimp or Eclipse). A common kernel centralizes information and conducts the analysis. Plug-ins interact with each other through interfaces defined by the kernel. This makes for robustness in the development of Frama-C while allowing a wide functionality spectrum.


Three heavyweight plug-ins that are used by the other plug-ins:

- Eva (Evolved Value analysis)
This plug-in computes variation domains for variables. It is quite automatic, although the user may guide the analysis in places. It handles a wide spectrum of C constructs. This plug-in uses abstract interpretation techniques.
- Jessie and Wp, two deductive verification plug-ins
These plug-ins are based on weakest-precondition computation techniques. They allow one to prove that C functions satisfy their specifications as expressed in ACSL. These proofs are modular: the specifications of the called functions are used to establish the proof without looking at their code.
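As a rough illustration of what a value-analysis plug-in like Eva computes, here is a toy interval domain in Python (a sketch of the general abstract-interpretation idea, not Frama-C's actual implementation):

```python
class Interval:
    # Toy interval domain: the kind of "variation domain" a value
    # analysis computes for each variable (vastly simplified).
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # The product's bounds come from the extreme corner products.
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

# If x is in [0, 10] and y is in [-3, 3], the analysis concludes that
# x*y + x must lie in [-30, 40] -- without ever running the program.
x, y = Interval(0, 10), Interval(-3, 3)
print(x * y + x)  # → [-30, 40]
```

A real analyzer threads domains like this through every statement of the program, joining them at control-flow merges.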

For browsing unfamiliar code:
- Impact analysis
This plug-in highlights the locations in the source code that are impacted by a modification.
- Scope & Data-flow browsing
This plug-in allows the user to navigate the dataflow of the program, from definition to use or from use to definition.
- Variable occurrence browsing
Also provided as a simple example for new plug-in development, this plug-in allows the user to reach the statements where a given variable is used.
- Metrics calculation
This plug-in allows the user to compute various metrics from the source code.

For code transformation:
- Semantic constant folding
This plug-in uses the results of the evolved value analysis plug-in to replace constant expressions in the source code with their values. Because it relies on Eva, it is able to do more of these simplifications than a syntactic analysis would.
- Slicing
This plug-in slices the code according to a user-provided criterion: it creates a copy of the program, but keeps only those parts which are necessary with respect to the given criterion.
- Spare code: remove "spare code", code that does not contribute to the final results of the program.
- E-ACSL: translate annotations into C code for runtime assertion checking.
For verifying functional specifications:

- Aoraï: verify specifications expressed as LTL (Linear Temporal Logic) formulas
Other functionalities documented together with the EVA plug-in can be considered as verifying low-level functional specifications (inputs, outputs, dependencies,…)
For test-case generation:

- PathCrawler automatically finds test-case inputs to ensure coverage of a C function. It can be used for structural unit testing, as a complement to static analysis or to study the feasible execution paths of the function.
For concurrent programs:

- Mthread
This plug-in automatically analyzes concurrent C programs, using the Eva plug-in and taking into account all possible thread interactions. At the end of its execution, the concurrent behavior of each thread is over-approximated, yielding precise information about shared variables, which mutex protects which part of the code, and so on.
Front-end for other languages

- Frama-Clang
This plug-in provides a C++ front-end to Frama-C, based on the clang compiler. It transforms C++ code into a Frama-C AST, which can then be analyzed by the plug-ins above. Note, however, that it is very experimental and only supports a subset of C++11.
tools  devtools  formal-methods  programming  software  c(pp)  systems  memory-management  ocaml-sml  debugging  checking  rigor  oss  code-dive  graphs  state  metrics  llvm  gallic  cool  worrydream  impact  flux-stasis  correctness  computer-memory 
7 weeks ago by nhaliday
Should I go for TensorFlow or PyTorch?
Honestly, most experts that I know love PyTorch and detest TensorFlow. Karpathy and Justin from Stanford, for example. You can see Karpathy's thoughts, and I've asked Justin personally and the answer was sharp: PYTORCH!!! TF has lots of PR, but its API and graph model are horrible and will waste lots of your research time.



Update after 2019 TF summit:

TL;DR: previously I was in the PyTorch camp, but with TF 2.0 it’s clear that Google is really going to try to reach parity with, or surpass, PyTorch in all the areas where people voiced concerns (ease of use/debugging/dynamic graphs). They seem to be allocating more resources to development than Facebook, so the longer term currently looks promising for Google. Prior to TF 2.0 I thought the PyTorch team had more momentum. One area where FB/PyTorch is still stronger: Google is a bit more closed and doesn’t seem to release reproducible cutting-edge models such as AlphaGo, whereas FAIR released OpenGo, for instance. Generally you will end up running into models that are only implemented in one framework or the other, so chances are you might end up learning both.
q-n-a  qra  comparison  software  recommendations  cost-benefit  tradeoffs  python  libraries  machine-learning  deep-learning  data-science  sci-comp  tools  google  facebook  tech  competition  best-practices  trends  debugging  expert-experience  ecosystem 
7 weeks ago by nhaliday
c++ - Debugging template instantiations - Stack Overflow
Metashell is still in active development though: github.com/metashell/metashell
q-n-a  stackex  nitty-gritty  pls  types  c(pp)  debugging  devtools  tools  programming  howto  advice  checklists  multi  repo  github  wire-guided 
7 weeks ago by nhaliday
Burrito: Rethinking the Electronic Lab Notebook
Seems very well-suited for ML experiments (if you can get it to work); also the nilfs aspect is cool and basically implements exactly one of my project ideas (mini-VCS for competitive programming). Unfortunately, the gnarly installation instructions specify running it on a Linux VM: https://github.com/pgbovine/burrito/blob/master/INSTALL. Linux is a hard requirement due to nilfs.
techtariat  project  tools  devtools  linux  programming  yak-shaving  integration-extension  nitty-gritty  workflow  exocortex  scholar  software  python  app  desktop  notetaking  state  machine-learning  data-science  nibble  sci-comp  oly  vcs  multi  repo  paste  homepage 
8 weeks ago by nhaliday
Why is reverse debugging rarely used? - Software Engineering Stack Exchange
(time travel)

For one, running in debug mode with recording on is very expensive compared to even normal debug mode; it also consumes a lot more memory.

It is easier to decrease the granularity from line level to function-call level. For example, the standard debugger in Eclipse allows you to "drop to frame," which is essentially a jump back to the start of the function with a reset of all the parameters (nothing done on the heap is reverted, and finally blocks are not executed, so it is not a true reverse debugger; be careful about that).

Note that this has been available for several years now and works hand in hand with hot-code replacement.
As mentioned already, performance is key: e.g., with gdb's reversible debugging, running something like gzip sees a slowdown of 50,000x compared to running natively. There are commercial alternatives, however: I work for Undo (undo.io), and our UndoDB product does the same but with a slowdown of less than 2x. There are other commercial reversible debuggers available too.

Based on GDB, UndoDB supports source-level debugging for applications written in any language supported by GDB, including C/C++, Rust and Ada.
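For reference, gdb's reversible debugging mentioned above is driven by its process record/replay commands; a session sketch, with a placeholder binary name:

```
$ gdb ./myprog           # 'myprog' is a placeholder
(gdb) break main
(gdb) run
(gdb) record             # start recording (this is the expensive part)
(gdb) continue           # run forward to a crash or breakpoint
(gdb) reverse-step       # step backwards one source line
(gdb) reverse-continue   # run backwards to the previous breakpoint
(gdb) record stop
```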
q-n-a  stackex  programming  engineering  impetus  debugging  time  increase-decrease  worrydream  hci  devtools  direction  roots  money-for-time  review  comparison  critique  tools  software  multi  systems  c(pp)  rust  state 
10 weeks ago by nhaliday
Cilk Hub
looks like this is run by Billy Moses and Leiserson (the L in CLRS)
mit  tools  programming  pls  plt  systems  c(pp)  libraries  compilers  performance  homepage  concurrency 
10 weeks ago by nhaliday
unix - How can I profile C++ code running on Linux? - Stack Overflow
If your goal is to use a profiler, use one of the suggested ones.

However, if you're in a hurry and you can manually interrupt your program under the debugger while it's being subjectively slow, there's a simple way to find performance problems.

Just halt it several times, and each time look at the call stack. If there is some code that is wasting some percentage of the time, 20% or 50% or whatever, that is the probability that you will catch it in the act on each sample. So that is roughly the percentage of samples on which you will see it. There is no educated guesswork required. If you do have a guess as to what the problem is, this will prove or disprove it.

You may have multiple performance problems of different sizes. If you clean out any one of them, the remaining ones will take a larger percentage, and be easier to spot, on subsequent passes. This magnification effect, when compounded over multiple problems, can lead to truly massive speedup factors.

Caveat: Programmers tend to be skeptical of this technique unless they've used it themselves. They will say that profilers give you this information, but that is only true if they sample the entire call stack, and then let you examine a random set of samples. (The summaries are where the insight is lost.) Call graphs don't give you the same information, because

they don't summarize at the instruction level, and
they give confusing summaries in the presence of recursion.
They will also say it only works on toy programs, when actually it works on any program, and it seems to work better on bigger programs, because they tend to have more problems to find. They will say it sometimes finds things that aren't problems, but that is only true if you see something once. If you see a problem on more than one sample, it is real.
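The stack-sampling idea in this answer can be sketched in a few lines of Python (a toy self-sampling profiler; the real technique just uses a debugger's pause button):

```python
import collections
import sys
import threading
import time

def sample_stacks(target_tid, n_samples, interval, counts):
    # Poor-man's sampling profiler: periodically grab the target
    # thread's current frame and tally the innermost function name.
    for _ in range(n_samples):
        frame = sys._current_frames().get(target_tid)
        if frame is not None:
            counts[frame.f_code.co_name] += 1
        time.sleep(interval)

def slow_hotspot():
    total = 0
    for i in range(200_000):
        total += i * i
    return total

def workload():
    for _ in range(50):
        slow_hotspot()

counts = collections.Counter()
sampler = threading.Thread(
    target=sample_stacks,
    args=(threading.get_ident(), 100, 0.001, counts))
sampler.start()
workload()
sampler.join()
print(counts.most_common(3))  # the hot function should dominate
```

Each sample stands in for one manual interrupt; the function that dominates the tally is where the time goes, which is exactly the answer's point about sample proportions.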
q-n-a  stackex  programming  engineering  performance  devtools  tools  advice  checklists  hacker  nitty-gritty  tricks  lol 
10 weeks ago by nhaliday
AFL + QuickCheck = ?
Adventures in fuzzing. Also differences between testing culture in software and hardware.
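For readers unfamiliar with the QuickCheck half of the title, the core idea fits in a few lines (a hand-rolled sketch, not the real library):

```python
import random

def prop_reverse_twice(xs):
    # Property: reversing a list twice gives back the original list.
    return list(reversed(list(reversed(xs)))) == xs

def quickcheck(prop, trials=200):
    # Tiny QuickCheck-style driver: throw random inputs at a property
    # and return the first counterexample found (None if all pass).
    for _ in range(trials):
        xs = [random.randint(-100, 100)
              for _ in range(random.randint(0, 20))]
        if not prop(xs):
            return xs  # real QuickCheck would now shrink this input
    return None

print(quickcheck(prop_reverse_twice))  # → None (no counterexample)
```

Fuzzers like AFL differ mainly in how inputs are generated (coverage-guided mutation rather than blind random sampling), but the check-a-property-on-generated-inputs loop is the same.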
techtariat  dan-luu  programming  engineering  checking  random  haskell  path-dependence  span-cover  heuristic  libraries  links  tools  devtools  software  hardware  culture  formal-methods  local-global  golang  correctness 
10 weeks ago by nhaliday
Delta debugging - Wikipedia
good overview of with examples: https://www.csm.ornl.gov/~sheldon/bucket/Automated-Debugging.pdf

Not as useful for my use cases (mostly contest programming) as QuickCheck. Input is generally pretty structured, and I don't have a long history of code in VCS. And when I do have the latter, git-bisect is probably enough.
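For context, the core of the delta-debugging (ddmin) loop can be sketched like this (simplified — the full algorithm also tests subsets, not only complements):

```python
def ddmin(failing_input, fails):
    # Simplified delta debugging: repeatedly try dropping chunks of
    # a failing input, keeping any smaller input that still fails.
    n = 2
    inp = list(failing_input)
    while len(inp) >= 2:
        chunk = len(inp) // n
        reduced = False
        for i in range(n):
            # Try the complement of the i-th chunk.
            candidate = inp[:i * chunk] + inp[(i + 1) * chunk:]
            if candidate and fails(candidate):
                inp = candidate           # still fails: keep it
                n = max(n - 1, 2)
                reduced = True
                break
        if not reduced:
            if n >= len(inp):             # already at finest granularity
                break
            n = min(n * 2, len(inp))      # refine: smaller chunks
    return inp

# Toy "failure": any input containing both 'a' and 'b' fails.
minimal = ddmin("xaybzc", lambda s: "a" in s and "b" in s)
print("".join(minimal))  # → ab
```

git-bisect is the same divide-and-conquer idea applied to commit history instead of program input.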

good book tho: http://www.whyprogramsfail.com/toc.php
WHY PROGRAMS FAIL: A Guide to Systematic Debugging
wiki  reference  programming  systems  debugging  c(pp)  python  tools  devtools  links  hmm  formal-methods  divide-and-conquer  vcs  git  search  yak-shaving  pdf  white-paper  multi  examples  stories  books  unit  caltech  recommendations  advanced  correctness 
11 weeks ago by nhaliday
macos - AutoHotkey Equivalent for OS X? - Ask Different
hammerspoon looks like the best option in that it's scriptable (but it's probably less featureful than the paid Keyboard Maestro)
q-n-a  stackex  apple  osx  desktop  yak-shaving  integration-extension  tools 
april 2019 by nhaliday
Perseus Digital Library
This is actually really useful.

- Load English translation side-by-side if available.
- Click on any word and see the best guess for definition+inflection given context.
tools  reference  history  iron-age  mediterranean  the-classics  canon  foreign-lang  linguistics  database  quixotic  stoic  syntax  lexical  exocortex 
february 2019 by nhaliday
Stack Overflow Developer Survey 2018
Rust, Python, Go in top most loved
F#/OCaml most high paying globally, Erlang/Scala/OCaml in the US (F# still in top 10)
ML specialists high-paid
editor usage: VSCode > VS > Sublime > Vim > Intellij >> Emacs
ranking  list  top-n  time-series  data  database  programming  engineering  pls  trends  stackex  poll  career  exploratory  network-structure  ubiquity  ocaml-sml  rust  golang  python  dotnet  money  jobs  compensation  erlang  scala  jvm  ai  ai-control  risk  futurism  ethical-algorithms  data-science  machine-learning  editors  devtools  tools  pro-rata  org:com 
december 2018 by nhaliday
Team *Decorations Until Epiphany* on Twitter: "@RoundSqrCupola maybe just C https://t.co/SFPXb3qrAE"
Remember ‘BRICs’? Now it’s just ICs.
maybe just C
Solow predicts that if 2 countries have the same TFP, then the poorer nation should grow faster. But poorer India grows more slowly than China.

Solow thinking leads one to suspect India has substantially lower TFP.

Recent growth is great news, but alas 5 years isn't the long run!

FWIW under Solow conditional convergence assumptions--historically robust--the fact that a country as poor as India grows only a few % faster than the world average is a sign they'll end up poorer than S Europe.

see his spreadsheet here: http://mason.gmu.edu/~gjonesb/SolowForecast.xlsx
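The Solow logic in the thread can be made concrete: under conditional convergence, growth above the world trend is roughly proportional to the log gap between a country's income and its own steady state, so modest growth at low income implies a low steady state (low TFP). A toy illustration only; the 2%/yr convergence rate and the income numbers are assumptions for the example, not figures from Jones's spreadsheet:

```python
import math

G_WORLD = 0.02  # assumed world trend growth rate
BETA = 0.02     # assumed convergence rate (~2%/yr "iron law" estimate)

def convergence_growth(y, y_star):
    """Per-capita growth under conditional convergence:
    world trend plus BETA times the log gap to the steady state y_star."""
    return G_WORLD + BETA * math.log(y_star / y)

def implied_steady_state(y, g_obs):
    """Invert the rule: what steady state does observed growth imply?"""
    return y * math.exp((g_obs - G_WORLD) / BETA)

# A country at 10% of rich-country income growing only 2 points above trend
# implies a steady state of only ~27% of rich-country income:
print(round(implied_steady_state(0.10, 0.04), 3))  # 0.272
```

This is the thread's point in miniature: "a few % faster than the world average" at India's income level is consistent with a steady state well below Southern Europe's.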
spearhead  econotariat  garett-jones  unaffiliated  twitter  social  discussion  india  asia  china  economics  macro  growth-econ  econ-metrics  wealth  wealth-of-nations  convergence  world  developing-world  trends  time-series  cjones-like  prediction  multi  backup  the-bones  long-short-run  europe  mediterranean  comparison  simulation  econ-productivity  great-powers  thucydides  broad-econ  pop-diff  microfoundations  🎩  marginal  hive-mind  rindermann-thompson  hari-seldon  tools  calculator  estimate 
december 2017 by nhaliday
Use and Interpretation of LD Score Regression
LD Score regression distinguishes confounding from polygenicity in genome-wide association studies: https://sci-hub.bz/10.1038/ng.3211
- Po-Ru Loh, Nick Patterson, et al.


Both polygenicity (i.e. many small genetic effects) and confounding biases, such as cryptic relatedness and population stratification, can yield inflated distributions of test statistics in genome-wide association studies (GWAS). However, current methods cannot distinguish between inflation from bias and true signal from polygenicity. We have developed an approach that quantifies the contributions of each by examining the relationship between test statistics and linkage disequilibrium (LD). We term this approach LD Score regression. LD Score regression provides an upper bound on the contribution of confounding bias to the observed inflation in test statistics and can be used to estimate a more powerful correction factor than genomic control. We find strong evidence that polygenicity accounts for the majority of test statistic inflation in many GWAS of large sample size.

Supplementary Note: https://images.nature.com/original/nature-assets/ng/journal/v47/n3/extref/ng.3211-S1.pdf

An atlas of genetic correlations across human diseases and traits: https://sci-hub.bz/10.1038/ng.3406


Supplementary Note: https://images.nature.com/original/nature-assets/ng/journal/v47/n11/extref/ng.3406-S1.pdf

ldsc is a command line tool for estimating heritability and genetic correlation from GWAS summary statistics. ldsc also computes LD Scores.
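The regression at the heart of the method can be sketched on simulated data. This is a toy illustration of the idea, not the ldsc tool: under the model E[χ²_j] = 1 + N·a + (N·h²/M)·ℓ_j, regressing GWAS chi-square statistics on per-SNP LD Scores separates polygenic signal (the slope) from confounding bias (the intercept). All parameter values below are made up for the simulation:

```python
import random

random.seed(0)
M, N = 1000, 50000        # number of SNPs, GWAS sample size
h2, a = 0.5, 1e-5         # true heritability, confounding bias per sample
ld = [random.uniform(1.0, 200.0) for _ in range(M)]
# simulate test statistics from the LD Score regression model plus noise
chi2 = [1 + N * a + (N * h2 / M) * l + random.gauss(0, 0.5) for l in ld]

# ordinary least squares by hand: chi2 ~ intercept + slope * ld
mx = sum(ld) / M
my = sum(chi2) / M
slope = (sum((x - mx) * (y - my) for x, y in zip(ld, chi2))
         / sum((x - mx) ** 2 for x in ld))
intercept = my - slope * mx

h2_hat = slope * M / N    # rescaled slope recovers heritability
print(round(h2_hat, 3), round(intercept, 2))  # intercept ~ 1 + N*a = 1.5
```

The key property the paper exploits is that confounding inflates test statistics uniformly (raising the intercept), while true polygenic signal inflates them in proportion to LD Score (raising the slope).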
nibble  pdf  slides  talks  bio  biodet  genetics  genomics  GWAS  genetic-correlation  correlation  methodology  bioinformatics  concept  levers  🌞  tutorial  explanation  pop-structure  gene-drift  ideas  multi  study  org:nat  article  repo  software  tools  libraries  stats  hypothesis-testing  biases  confounding  gotchas  QTL  simulation  survey  preprint  population-genetics 
november 2017 by nhaliday

