nhaliday + github   25

Geoff Greer's site: Burnout is in the Mind
I sometimes wonder if burnout is the western version of fan death. When you think about it, burnout makes little sense. People get depressed and tired from… what, exactly? Working too much? Working too hard? Excessive drudgery? Bull. We are working less than ever before. Just over a century ago, the average work week exceeded 60 hours. Today, it’s 33.[1] Past occupations also involved toil and danger far greater than any employment today. Yet burnout is a modern phenomenon. Strange, eh?


I’m not saying those who claim to be burnt-out are faking. I don’t doubt that burnout describes a real phenomenon. What I do doubt is the accepted cause (work) and the accepted cure (time off from work). It seems much more likely that burnout is a form of depression[3], which has a myriad of causes and cures.

It is only after making all this noise about burnout that I feel comfortable suggesting the following: Don’t worry about working too much. The important thing is to avoid depression. People more knowledgeable than I have written on that subject, but to sum up their advice: Get out. Exercise. Try to form healthy habits. And stay the hell away from negative media such as cable news and Tumblr.
techtariat  labor  discipline  productivity  contrarianism  reflection  tech  realness  stress  causation  roots  psycho-atoms  health  oss  github  stamina  working-stiff 
26 days ago by nhaliday
One week of bugs
If I had to guess, I'd say I probably work around hundreds of bugs in an average week, and thousands in a bad week. It's not unusual for me to run into a hundred new bugs in a single week. But I often get skepticism when I mention that I run into multiple new (to me) bugs per day, and that this is inevitable if we don't change how we write tests. Well, here's a log of one week of bugs, limited to bugs that were new to me that week. After a brief description of the bugs, I'll talk about what we can do to improve the situation. The obvious answer is to spend more effort on testing, but everyone already knows we should do that and no one does it. That doesn't mean it's hopeless, though.


Here's where I'm supposed to write an appeal to take testing more seriously and put real effort into it. But we all know that's not going to work. It would take 90k LOC of tests to get Julia to be as well tested as a poorly tested prototype (falsely assuming linear complexity in size). That's two person-years of work, not even including time to debug and fix bugs (which probably brings it closer to four or five years). Who's going to do that? No one. Writing tests is like writing documentation. Everyone already knows you should do it. Telling people they should do it adds zero information.

Given that people aren't going to put any effort into testing, what's the best way to do it?

Property-based testing. Generative testing. Random testing. Concolic Testing (which was done long before the term was coined). Static analysis. Fuzzing. Statistical bug finding. There are lots of options. Some of them are actually the same thing because the terminology we use is inconsistent and buggy. I'm going to arbitrarily pick one to talk about, but they're all worth looking into.
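Of the options listed, plain random testing needs nothing beyond a loop and a random number generator. A minimal sketch in Python (the function and helper names here are mine, purely for illustration): generate random inputs and check properties that must hold for every input, comparing against a trusted oracle where one exists.

```python
import random

def dedup_keep_order(xs):
    """Function under test: remove duplicates, preserving first-seen order."""
    seen = set()
    out = []
    for x in xs:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out

def check_random_inputs(trials=1000, seed=0):
    """Random testing: throw generated inputs at the function and assert
    properties that should hold regardless of input."""
    rng = random.Random(seed)
    for _ in range(trials):
        xs = [rng.randint(-5, 5) for _ in range(rng.randint(0, 20))]
        ys = dedup_keep_order(xs)
        assert len(ys) == len(set(ys))        # no duplicates remain
        assert set(ys) == set(xs)             # no elements lost or invented
        assert ys == list(dict.fromkeys(xs))  # agrees with a trusted oracle
    return True

check_random_inputs()
```

Property-based frameworks like Hypothesis and QuickCheck build on this same loop, adding smarter input generation and automatic shrinking of failing cases.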


There are a lot of great resources out there, but if you're just getting started, I found this description of types of fuzzers to be one of the most helpful (and simplest) things I've read.

John Regehr has a Udacity course on software testing. I haven't worked through it yet (Pablo Torres just pointed me to it), but given the quality of Dr. Regehr's writing, I expect the course to be good.

For more on my perspective on testing, there's this.

Everything's broken and nobody's upset: https://www.hanselman.com/blog/EverythingsBrokenAndNobodysUpset.aspx

From the perspective of a user, the purpose of Hypothesis is to make it easier for you to write better tests.

From my perspective as the primary author, that is of course also a purpose of Hypothesis. I write a lot of code, it needs testing, and the idea of trying to do that without Hypothesis has become nearly unthinkable.

But, on a large scale, the true purpose of Hypothesis is to drag the world kicking and screaming into a new and terrifying age of high quality software.

Software is everywhere. We have built a civilization on it, and it’s only getting more prevalent as more services move online and embedded and “internet of things” devices become cheaper and more common.

Software is also terrible. It’s buggy, it’s insecure, and it’s rarely well thought out.

This combination is clearly a recipe for disaster.

The state of software testing is even worse. It’s uncontroversial at this point that you should be testing your code, but it’s a rare codebase whose authors could honestly claim that they feel its testing is sufficient.

Much of the problem here is that it’s too hard to write good tests. Tests take up a vast quantity of development time, but they mostly just laboriously encode exactly the same assumptions and fallacies that the authors had when they wrote the code, so they miss exactly the same bugs that the authors missed when writing the code.
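The point about tests re-encoding the author's assumptions is easy to demonstrate. In this Python sketch (the join/split helpers are hypothetical, written for illustration), the hand-picked examples all pass, while a randomized round-trip check finds the input the author never thought to write down: the empty list.

```python
import random
import string

def join_fields(fields):
    """Naive serializer: comma-join a list of fields."""
    return ",".join(fields)

def split_fields(s):
    """Naive deserializer: split on commas."""
    return s.split(",")

# Hand-written examples, shaped by the author's mental model: all pass.
assert split_fields(join_fields(["a", "b"])) == ["a", "b"]
assert split_fields(join_fields(["x"])) == ["x"]

def find_roundtrip_counterexample(trials=1000, seed=0):
    """Randomized check of the property split(join(x)) == x.
    Returns a failing input, or None if none was found."""
    rng = random.Random(seed)
    for _ in range(trials):
        n = rng.randint(0, 3)
        fields = ["".join(rng.choices(string.ascii_lowercase, k=rng.randint(0, 2)))
                  for _ in range(n)]
        if split_fields(join_fields(fields)) != fields:
            return fields
    return None
```

The failing input is the empty list: joining `[]` gives `""`, but splitting `""` yields `[""]`. This is exactly the class of bug that example-based tests, written with the same mental model as the code, tend to miss.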

Preventing the Collapse of Civilization [video]: https://news.ycombinator.com/item?id=19945452
- Jonathan Blow

NB: DevGAMM is a game industry conference

- loss of technological knowledge (Antikythera mechanism, aqueducts, etc.)
- hardware driving most gains, not software
- software's actually less robust, often poorly designed and overengineered these days
- *list of bugs he's encountered recently*:
- knowledge of trivia becomes more important than general, deep knowledge
- does at least acknowledge value of DRY, reusing code, abstraction saving dev time
techtariat  dan-luu  tech  software  error  list  debugging  linux  github  robust  checking  oss  troll  lol  aphorism  webapp  email  google  facebook  games  julia  pls  compilers  communication  mooc  browser  rust  programming  engineering  random  jargon  formal-methods  expert-experience  prof  c(pp)  course  correctness  hn  commentary  video  presentation  carmack  pragmatic  contrarianism  pessimism  sv  unix  rhetoric  critique  worrydream  hardware  performance  trends  multiplicative  roots  impact  comparison  history  iron-age  the-classics  mediterranean  conquest-empire  gibbon  technology  the-world-is-just-atoms  flux-stasis  increase-decrease  graphics  hmm  idk  systems  os  abstraction  intricacy  worse-is-better/the-right-thing  build-packaging  microsoft  osx  apple  reflection  assembly  things  knowledge  detail-architecture  thick-thin  trivia  info-dynamics  caching  frameworks  generalization  systematic-ad-hoc  universalism-particularism  analytical-holistic  structure  tainter  libraries  tradeoffs  prepping  threat-modeling  network-structure  writing  risk  local-glob 
may 2019 by nhaliday
I'm curious what Gerrit gets them that Github doesn't have natively, too. | Hacker News
There are a lot of things lacking about GitHub's code review process (pull requests). Off the top of my head:
- Merging a pull request (almost) always creates a merge commit, polluting the change history. Gerrit will automatically rebase the changes atop the master branch head, leaving a nice linear history.

- Pull requests require the contributor to create a public fork of the repository they're committing to. I can see how this works for some people, but I find it gross for each contributor to the Go project to have their own public fork. What a mess.

- Comments on pull requests are sent as soon as they are created. Gerrit allows you to make many comments on a change, as drafts, and then send them all in one go, sending just a single email. This is much easier to manage for a large project.

- Gerrit has the notion of multiple 'patch sets' for a particular change, and you can see diffs between patch sets, so it's much easier to progressively review large changes.

And there are many other small issues with the GitHub pull request process that make it untenable for the Go project.
FYI if you have a "CONTRIBUTING" or "CONTRIBUTING.md" document at the project root, a neat little info bar "Please read the [contributing] guidelines before proceeding" will show up to anyone filing a bug or a PR.
- scrollaway
engineering  best-practices  collaboration  tech  sv  oss  github  vcs  howto  comparison  critique  nitty-gritty  golang 
may 2016 by nhaliday
The Next Generation of Software Stacks | StackShare
most interesting part to me:
GECS have a clear bias towards certain types of applications and services as well. These preferences are particularly apparent in the analytics stack. Tools typically aimed primarily at marketing teams—tools like Crazy Egg, Optimizely, and Google Analytics, the most popular tool on Stackshare—are extremely unpopular among GECS. These services are being replaced by tools that are aimed at serving both marketing and analytics teams. Segment, Mixpanel, Heap, and Amplitude, which provide flexible access to raw data, are well-represented among GECS, suggesting that these companies are looking to understand user behavior beyond clicks and page views.
data  analysis  business  startups  tech  planning  techtariat  org:com  ecosystem  software  saas  network-structure  integration-extension  cloud  github  oss  vcs  amazon  communication  trends  pro-rata  crosstab  visualization  sv  programming  pls  web  javascript  frontend  marketing  tech-infrastructure 
april 2016 by nhaliday
