nhaliday + ux   42

history - Why are UNIX/POSIX system call namings so illegible? - Unix & Linux Stack Exchange
It's due to the technical constraints of the time. The POSIX standard was created in the 1980s and referred to UNIX, which was born in the 1970s. Several C compilers at that time were limited to identifiers that were 6 or 8 characters long, so that settled the standard for the length of variable and function names.
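[ed.: For flavor, a tiny C sketch (mine, not from the thread) using real POSIX calls whose compressed names (creat, chmod, unlink) are exactly the kind of identifiers that fit those 6-8 character limits.]

```c
/* Sketch only: a handful of real POSIX calls. Their terse names date from
   an era when linkers and compilers limited identifier length. */
#include <fcntl.h>      /* creat */
#include <unistd.h>     /* write, close, unlink */
#include <sys/stat.h>   /* chmod */

int main(void) {
    int fd = creat("/tmp/demo.txt", 0644);   /* "create", famously minus its final 'e' */
    if (fd < 0)
        return 1;
    write(fd, "hi\n", 3);
    close(fd);
    chmod("/tmp/demo.txt", 0600);            /* "change mode" */
    unlink("/tmp/demo.txt");                 /* "remove directory entry" */
    return 0;
}
```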

http://neverworkintheory.org/2017/11/26/abbreviated-full-names.html
We carried out a family of controlled experiments to investigate whether the use of abbreviated identifier names, with respect to full-word identifier names, affects fault fixing in C and Java source code. This family consists of an original (or baseline) controlled experiment and three replications. We involved 100 participants with different backgrounds and experiences in total. Overall results suggested that there is no difference in terms of effort, effectiveness, and efficiency to fix faults, when source code contains either only abbreviated or only full-word identifier names. We also conducted a qualitative study to understand the values, beliefs, and assumptions that inform and shape fault fixing when identifier names are either abbreviated or full-word. We involved in this qualitative study six professional developers with 1–3 years of work experience. A number of insights emerged from this qualitative study and can be considered a useful complement to the quantitative results from our family of experiments. One of the most interesting insights is that developers, when working on source code with abbreviated identifier names, adopt a more methodical approach to identify and fix faults by extending their focus point and only in a few cases do they expand abbreviated identifiers.
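[ed.: A hypothetical illustration of the contrast the experiments studied: the same routine written once with abbreviated and once with full-word identifier names.]

```c
/* Hypothetical example (not from the paper) of the studied contrast:
   the same routine with abbreviated vs. full-word identifier names. */

/* abbreviated */
double avg(const double *a, int n) {
    double s = 0.0;
    for (int i = 0; i < n; i++)
        s += a[i];
    return n > 0 ? s / n : 0.0;
}

/* full-word */
double average(const double *values, int count) {
    double sum = 0.0;
    for (int index = 0; index < count; index++)
        sum += values[index];
    return count > 0 ? sum / count : 0.0;
}
```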
q-n-a  stackex  trivia  programming  os  systems  legacy  legibility  ux  libraries  unix  linux  hacker  cracker-prog  multi  evidence-based  empirical  expert-experience  engineering  study  best-practices  comparison  quality  debugging  efficiency  time  code-organizing  grokkability  grokkability-clarity 
july 2019 by nhaliday
Computer latency: 1977-2017
If we look at overall results, the fastest machines are ancient. Newer machines are all over the place. Fancy gaming rigs with unusually high refresh-rate displays are almost competitive with machines from the late 70s and early 80s, but “normal” modern computers can’t compete with thirty to forty year old machines.

...

If we exclude the game boy color, which is a different class of device than the rest, all of the quickest devices are Apple phones or tablets. The next quickest device is the blackberry q10. Although we don’t have enough data to really tell why the blackberry q10 is unusually quick for a non-Apple device, one plausible guess is that it’s helped by having actual buttons, which are easier to implement with low latency than a touchscreen. The other two devices with actual buttons are the gameboy color and the kindle 4.

After the iphones and non-kindle button devices, we have a variety of Android devices of various ages. At the bottom, we have the ancient palm pilot 1000 followed by the kindles. The palm is hamstrung by a touchscreen and display created in an era with much slower touchscreen technology and the kindles use e-ink displays, which are much slower than the displays used on modern phones, so it’s not surprising to see those devices at the bottom.

...

Almost every computer and mobile device that people buy today is slower than common models of computers from the 70s and 80s. Low-latency gaming desktops and the ipad pro can get into the same range as quick machines from thirty to forty years ago, but most off-the-shelf devices aren’t even close.

If we had to pick one root cause of latency bloat, we might say that it’s because of “complexity”. Of course, we all know that complexity is bad. If you’ve been to a non-academic non-enterprise tech conference in the past decade, there’s a good chance that there was at least one talk on how complexity is the root of all evil and we should aspire to reduce complexity.

Unfortunately, it's a lot harder to remove complexity than to give a talk saying that we should remove complexity. A lot of the complexity buys us something, either directly or indirectly. When we looked at the input of a fancy modern keyboard vs. the apple 2 keyboard, we saw that using a relatively powerful and expensive general purpose processor to handle keyboard inputs can be slower than dedicated logic for the keyboard, which would both be simpler and cheaper. However, using the processor gives people the ability to easily customize the keyboard, and also pushes the problem of “programming” the keyboard from hardware into software, which reduces the cost of making the keyboard. The more expensive chip increases the manufacturing cost, but considering how much of the cost of these small-batch artisanal keyboards is the design cost, it seems like a net win to trade manufacturing cost for ease of programming.
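[ed.: A hypothetical sketch of what "programming" the keyboard in software means in practice: a firmware-style matrix scan loop on a general-purpose microcontroller. read_row() and report_key() are made-up stand-ins for GPIO reads and USB HID reports.]

```c
/* Hypothetical sketch (not from the article) of keyboard firmware on a
   general-purpose microcontroller: scan a key matrix, diff against the
   previous state, and report changes. */
#include <stdint.h>
#include <stdio.h>

#define ROWS 8
#define COLS 8

static uint8_t read_row(int row) {
    (void)row;
    return 0;  /* stub: real firmware would read this row's column pins */
}

static void report_key(int row, int col, int pressed) {
    printf("key (%d,%d) %s\n", row, col, pressed ? "down" : "up");
}

int main(void) {
    uint8_t state[ROWS] = {0};
    /* real firmware loops forever; a few iterations here for illustration */
    for (int scan = 0; scan < 100; scan++) {
        for (int r = 0; r < ROWS; r++) {
            uint8_t now = read_row(r);
            uint8_t changed = (uint8_t)(now ^ state[r]);
            for (int c = 0; c < COLS; c++)
                if (changed & (1u << c))
                    report_key(r, c, (now >> c) & 1);
            state[r] = now;
        }
        /* plus debouncing, rollover handling, and waiting for the next
           scan tick: the kind of complexity the article is describing */
    }
    return 0;
}
```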

...

If you want a reference to compare the kindle against, a moderately quick page turn in a physical book appears to be about 200 ms.

https://twitter.com/gravislizard/status/927593460642615296
almost everything on computers is perceptually slower than it was in 1983
https://archive.is/G3D5K
https://archive.is/vhDTL
https://archive.is/a3321
https://archive.is/imG7S
techtariat  dan-luu  performance  time  hardware  consumerism  objektbuch  data  history  reflection  critique  software  roots  tainter  engineering  nitty-gritty  ui  ux  hci  ios  mobile  apple  amazon  sequential  trends  increase-decrease  measure  analysis  measurement  os  systems  IEEE  intricacy  desktop  benchmarks  rant  carmack  system-design  degrees-of-freedom  keyboard  terminal  editors  links  input-output  networking  world  s:**  multi  twitter  social  discussion  tech  programming  web  internet  speed  backup  worrydream  interface  metal-to-virtual  latency-throughput  workflow  form-design  interface-compatibility 
july 2019 by nhaliday
Is the keyboard faster than the mouse?
Conclusion

It’s entirely possible that the mysterious studies Tog’s org spent $50M on prove that the mouse is faster than the keyboard for all tasks other than raw text input, but there doesn’t appear to be enough information to tell what the actual studies were. There are many public studies on user input, but I couldn’t find any that are relevant to whether or not I should use the mouse more or less at the margin.

When I look at various tasks myself, the results are mixed, and they’re mixed in the way that most programmers I polled predicted. This result is so boring that it would barely be worth mentioning if not for the large groups of people who believe that either the keyboard is always faster than the mouse or vice versa.

Please let me know if there are relevant studies on this topic that I should read! I’m not familiar with the relevant fields, so it’s possible that I’m searching with the wrong keywords and reading the wrong papers.

[ed.: Incidentally it looks like Dan uses Emacs.]
techtariat  dan-luu  engineering  programming  productivity  workflow  hci  hardware  working-stiff  benchmarks  time  time-use  keyboard  ui  ux  editors  critique  debate  meta-analysis  study  summary  commentary  comparison  bangbang 
november 2017 by nhaliday
Two theories of home heat control - ScienceDirect
People routinely develop their own theories to explain the world around them. These theories can be useful even when they contradict conventional technical wisdom. Based on in-depth interviews about home heating and thermostat setting behavior, the present study presents two theories people use to understand and adjust their thermostats. The two theories are here called the feedback theory and the valve theory. The valve theory is inconsistent with engineering knowledge, but is estimated to be held by 25% to 50% of Americans. Predictions of each of the theories are compared with the operations normally performed in home heat control. This comparison suggests that the valve theory may be highly functional in normal day-to-day use. Further data is needed on the ways this theory guides behavior in natural environments.
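[ed.: The two theories are easy to state as code. A toy simulation (mine, not from the paper): under the feedback theory the furnace simply switches on whenever the room is below the setpoint, so cranking the dial to 30 C does not reach 20 C any faster; under the valve theory the dial is imagined to meter how much heat flows, so it would.]

```c
/* Toy illustration (not from the paper): does cranking the thermostat
   to 30 C warm the house to 20 C any faster? */
#include <stdio.h>

static int steps_to_reach(double target, double setpoint, int valve_theory) {
    double temp = 15.0;         /* starting room temperature, deg C */
    double furnace_rate = 0.5;  /* deg C per time step at full output */
    int t = 0;
    while (temp < target && t < 1000) {
        double heat = valve_theory
            ? furnace_rate * (setpoint / 20.0)        /* valve: setting meters the heat */
            : (temp < setpoint ? furnace_rate : 0.0); /* feedback: on/off switch */
        temp += heat;
        t++;
    }
    return t;
}

int main(void) {
    printf("feedback theory: set 20 -> %d steps, set 30 -> %d steps\n",
           steps_to_reach(20.0, 20.0, 0), steps_to_reach(20.0, 30.0, 0));
    printf("valve theory:    set 20 -> %d steps, set 30 -> %d steps\n",
           steps_to_reach(20.0, 20.0, 1), steps_to_reach(20.0, 30.0, 1));
    return 0;
}
```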
study  hci  ux  hardware  embodied  engineering  dirty-hands  models  thinking  trivia  cocktail  map-territory  realness  neurons  psychology  cog-psych  social-psych  error  usa  poll  descriptive  temperature  protocol-metadata  form-design 
september 2017 by nhaliday
Astronauts of Interface — Medium
A new community of researchers is mixing human computer interaction with complementary ideas: with aspirations from utopian urban planning, with excitement about new types of creative community, and with post-consumption models of man/machine collaboration, for instance.

They gather under strange banners: Tools for Thought, Computing and Humanity, CDG/HARC, Livable Media, Computer Utopias, the League of Considerate Inventors, and so on.
hci  worrydream  ux  design  eh  list  hmm  org:med  techtariat  form-design 
may 2016 by nhaliday
Lean
https://lean-forward.github.io
The goal of the Lean Forward project is to collaborate with number theorists to formally prove theorems about research mathematics and to address the main usability issues hampering the adoption of proof assistants in mathematical circles. The theorems will be selected together with our collaborators to guide the development of formal libraries and verified tools.

mostly happening in the Netherlands
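[ed.: For flavor, a trivial machine-checked statement in Lean 4 syntax, not taken from the Lean Forward libraries.]

```lean
-- A tiny machine-checked theorem, just to show what "formally proving
-- theorems" looks like in Lean (Lean 4 syntax).
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```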

https://formalabstracts.github.io

A Review of the Lean Theorem Prover: https://jiggerwit.wordpress.com/2018/09/18/a-review-of-the-lean-theorem-prover/
- Thomas Hales
seems like Coq might be a better starter if I ever try to get into proof assistants/theorem provers

edit: on second thought this actually seems like a wash for beginners

An Argument for Controlled Natural Languages in Mathematics: https://jiggerwit.wordpress.com/2019/06/20/an-argument-for-controlled-natural-languages-in-mathematics/
By controlled natural language for mathematics (CNL), we mean an artificial language for the communication of mathematics that is (1) designed in a deliberate and explicit way with precise computer-readable syntax and semantics, (2) based on a single natural language (such as Chinese, Spanish, or English), and (3) broadly understood at least in an intuitive way by mathematically literate speakers of the natural language.

The definition of controlled natural language is intended to exclude invented languages such as Esperanto and Lojban that are not based on a single natural language. Programming languages are meant to be excluded, but a case might be made for TeX as the first broadly adopted controlled natural language for mathematics.

Perhaps it is best to start with an example. Here is a beautifully crafted CNL text created by Peter Koepke and Steffen Frerix. It reproduces a theorem and proof in Rudin’s Principles of mathematical analysis almost word for word. Their automated proof system is able to read and verify the proof.

https://github.com/Naproche/Naproche-SAD
research  math  formal-methods  msr  multi  homepage  research-program  skunkworks  math.NT  academia  ux  CAS  mathtariat  expert-experience  cost-benefit  nitty-gritty  review  critique  rant  types  learning  intricacy  functional  performance  c(pp)  ocaml-sml  comparison  ecosystem  DSL  tradeoffs  composition-decomposition  interdisciplinary  europe  germanic  grokkability  nlp  language  heavyweights  inference  rigor  automata-languages  repo  software  tools  syntax  frontier  state-of-art  pls  grokkability-clarity  technical-writing  database  lifts-projections 
january 2016 by nhaliday
