nhaliday + software   264

quality - Is the average number of bugs per loc the same for different programming languages? - Software Engineering Stack Exchange
Contrary to intuition, the number of errors per 1000 lines of code does seem to be relatively constant, regardless of the specific language involved. Steve McConnell, author of Code Complete and Software Estimation: Demystifying the Black Art, goes over this area in some detail.

I don't have my copies readily to hand - they're sitting on my bookshelf at work - but a quick Google found a relevant quote:

Industry Average: "about 15 - 50 errors per 1000 lines of delivered code."
(Steve) further says this is usually representative of code that has some level of structured programming behind it, but probably includes a mix of coding techniques.

Quoted from Code Complete, found here: http://mayerdan.com/ruby/2012/11/11/bugs-per-line-of-code-ratio/

If memory serves correctly, Steve goes into a thorough discussion of this, showing that the figures are constant across languages (C, C++, Java, Assembly and so on) and despite difficulties (such as defining what "line of code" means).

Most importantly he has lots of citations for his sources - he's not offering unsubstantiated opinions, but has the references to back them up.
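The industry-average rate quoted above lends itself to rough back-of-the-envelope estimates. A minimal sketch — the 15-50/KLOC range is from the quote, but treating it as a flat per-KLOC rate is my simplification (real defect density varies with process, language, and how "line of code" is defined):

```python
def expected_defects(loc, rate_low=15, rate_high=50):
    """Rough expected-defect range for a codebase, applying the
    Code Complete industry-average 15-50 defects per 1000 delivered lines."""
    kloc = loc / 1000
    return kloc * rate_low, kloc * rate_high

low, high = expected_defects(250_000)  # e.g. a 250 KLOC system
print(f"expected delivered defects: {low:.0f}-{high:.0f}")
```

The point of the constant-rate claim is that this estimate doesn't change much whether those 250 KLOC are C, Java, or assembly — though fewer lines are needed in a higher-level language for the same functionality.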
q-n-a  stackex  programming  engineering  nitty-gritty  error  flux-stasis  books  recommendations  software  checking  debugging  pro-rata  pls  comparison  parsimony  measure 
yesterday by nhaliday
More arguments against blockchain, most of all about trust - Marginal REVOLUTION
Auditing software is hard! The most-heavily scrutinized smart contract in history had a small bug that nobody noticed — that is, until someone did notice it, and used it to steal fifty million dollars. If cryptocurrency enthusiasts putting together a $150m investment fund can’t properly audit the software, how confident are you in your e-book audit? Perhaps you would rather write your own counteroffer software contract, in case this e-book author has hidden a recursion bug in their version to drain your ethereum wallet of all your life savings?

It’s a complicated way to buy a book! It’s not trustless, you’re trusting in the software (and your ability to defend yourself in a software-driven world), instead of trusting other people.
econotariat  marginal-rev  links  commentary  quotes  bitcoin  cryptocurrency  blockchain  crypto  trust  money  monetary-fiscal  technology  software  institutions  government  comparison  cost-benefit  primitivism  eden-heaven 
april 2018 by nhaliday
The Hanson-Yudkowsky AI-Foom Debate - Machine Intelligence Research Institute
How Deviant Recent AI Progress Lumpiness?: http://www.overcomingbias.com/2018/03/how-deviant-recent-ai-progress-lumpiness.html
I seem to disagree with most people working on artificial intelligence (AI) risk. While with them I expect rapid change once AI is powerful enough to replace most all human workers, I expect this change to be spread across the world, not concentrated in one main localized AI system. The efforts of AI risk folks to design AI systems whose values won’t drift might stop global AI value drift if there is just one main AI system. But doing so in a world of many AI systems at similar abilities levels requires strong global governance of AI systems, which is a tall order anytime soon. Their continued focus on preventing single system drift suggests that they expect a single main AI system.

The main reason that I understand to expect relatively local AI progress is if AI progress is unusually lumpy, i.e., arriving in unusually fewer larger packages rather than in the usual many smaller packages. If one AI team finds a big lump, it might jump way ahead of the other teams.

However, we have a vast literature on the lumpiness of research and innovation more generally, which clearly says that usually most of the value in innovation is found in many small innovations. We have also so far seen this in computer science (CS) and AI. Even if there have been historical examples where much value was found in particular big innovations, such as nuclear weapons or the origin of humans.

Apparently many people associated with AI risk, including the star machine learning (ML) researchers that they often idolize, find it intuitively plausible that AI and ML progress is exceptionally lumpy. Such researchers often say, “My project is ‘huge’, and will soon do it all!” A decade ago my ex-co-blogger Eliezer Yudkowsky and I argued here on this blog about our differing estimates of AI progress lumpiness. He recently offered Alpha Go Zero as evidence of AI lumpiness:

...

In this post, let me give another example (beyond two big lumps in a row) of what could change my mind. I offer a clear observable indicator, for which data should be available now: deviant citation lumpiness in recent ML research. One standard measure of research impact is citations; bigger, lumpier developments gain more citations than smaller ones. And it turns out that the lumpiness of citations is remarkably constant across research fields! See this March 3 paper in Science:
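One crude way to operationalize "citation lumpiness" is the share of all citations captured by the top few percent of papers. A toy sketch — the measure choice and the data are mine for illustration, not Hanson's or the Science paper's:

```python
def top_share(citations, frac=0.01):
    """Fraction of all citations captured by the top `frac` of papers --
    one crude 'lumpiness' measure for a citation distribution."""
    cites = sorted(citations, reverse=True)
    k = max(1, int(len(cites) * frac))
    total = sum(cites)
    return sum(cites[:k]) / total if total else 0.0

# toy data: a heavy-tailed ("lumpy") field vs a flat one, 1000 papers each
lumpy = [1000, 500, 200] + [5] * 997
flat = [12] * 1000
print(top_share(lumpy), top_share(flat))  # lumpy field concentrates far more
```

Hanson's proposed test would be to compute something like this for recent ML papers and compare it against the cross-field baseline: if ML progress really arrives in big lumps, its citation distribution should be deviantly concentrated.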

I Still Don’t Get Foom: http://www.overcomingbias.com/2014/07/30855.html
All of which makes it look like I’m the one with the problem; everyone else gets it. Even so, I’m gonna try to explain my problem again, in the hope that someone can explain where I’m going wrong. Here goes.

“Intelligence” just means an ability to do mental/calculation tasks, averaged over many tasks. I’ve always found it plausible that machines will continue to do more kinds of mental tasks better, and eventually be better at pretty much all of them. But what I’ve found it hard to accept is a “local explosion.” This is where a single machine, built by a single project using only a tiny fraction of world resources, goes in a short time (e.g., weeks) from being so weak that it is usually beat by a single human with the usual tools, to so powerful that it easily takes over the entire world. Yes, smarter machines may greatly increase overall economic growth rates, and yes such growth may be uneven. But this degree of unevenness seems implausibly extreme. Let me explain.

If we count by economic value, humans now do most of the mental tasks worth doing. Evolution has given us a brain chock-full of useful well-honed modules. And the fact that most mental tasks require the use of many modules is enough to explain why some of us are smarter than others. (There’d be a common “g” factor in task performance even with independent module variation.) Our modules aren’t that different from those of other primates, but because ours are different enough to allow lots of cultural transmission of innovation, we’ve out-competed other primates handily.

We’ve had computers for over seventy years, and have slowly built up libraries of software modules for them. Like brains, computers do mental tasks by combining modules. An important mental task is software innovation: improving these modules, adding new ones, and finding new ways to combine them. Ideas for new modules are sometimes inspired by the modules we see in our brains. When an innovation team finds an improvement, they usually sell access to it, which gives them resources for new projects, and lets others take advantage of their innovation.

...

In Bostrom’s graph above the line for an initially small project and system has a much higher slope, which means that it becomes in a short time vastly better at software innovation. Better than the entire rest of the world put together. And my key question is: how could it plausibly do that? Since the rest of the world is already trying the best it can to usefully innovate, and to abstract to promote such innovation, what exactly gives one small project such a huge advantage to let it innovate so much faster?

...

In fact, most software innovation seems to be driven by hardware advances, instead of innovator creativity. Apparently, good ideas are available but must usually wait until hardware is cheap enough to support them.

Yes, sometimes architectural choices have wider impacts. But I was an artificial intelligence researcher for nine years, ending twenty years ago, and I never saw an architecture choice make a huge difference, relative to other reasonable architecture choices. For most big systems, overall architecture matters a lot less than getting lots of detail right. Researchers have long wandered the space of architectures, mostly rediscovering variations on what others found before.

Some hope that a small project could be much better at innovation because it specializes in that topic, and much better understands new theoretical insights into the basic nature of innovation or intelligence. But I don’t think those are actually topics where one can usefully specialize much, or where we’ll find much useful new theory. To be much better at learning, the project would instead have to be much better at hundreds of specific kinds of learning. Which is very hard to do in a small project.

What does Bostrom say? Alas, not much. He distinguishes several advantages of digital over human minds, but all software shares those advantages. Bostrom also distinguishes five paths: better software, brain emulation (i.e., ems), biological enhancement of humans, brain-computer interfaces, and better human organizations. He doesn’t think interfaces would work, and sees organizations and better biology as only playing supporting roles.

...

Similarly, while you might imagine someday standing in awe in front of a super intelligence that embodies all the power of a new age, superintelligence just isn’t the sort of thing that one project could invent. As “intelligence” is just the name we give to being better at many mental tasks by using many good mental modules, there’s no one place to improve it. So I can’t see a plausible way one project could increase its intelligence vastly faster than could the rest of the world.

Takeoff speeds: https://sideways-view.com/2018/02/24/takeoff-speeds/
Futurists have argued for years about whether the development of AGI will look more like a breakthrough within a small group (“fast takeoff”), or a continuous acceleration distributed across the broader economy or a large firm (“slow takeoff”).

I currently think a slow takeoff is significantly more likely. This post explains some of my reasoning and why I think it matters. Mostly the post lists arguments I often hear for a fast takeoff and explains why I don’t find them compelling.

(Note: this is not a post about whether an intelligence explosion will occur. That seems very likely to me. Quantitatively I expect it to go along these lines. So e.g. while I disagree with many of the claims and assumptions in Intelligence Explosion Microeconomics, I don’t disagree with the central thesis or with most of the arguments.)
ratty  lesswrong  subculture  miri-cfar  ai  risk  ai-control  futurism  books  debate  hanson  big-yud  prediction  contrarianism  singularity  local-global  speed  speedometer  time  frontier  distribution  smoothness  shift  pdf  economics  track-record  abstraction  analogy  links  wiki  list  evolution  mutation  selection  optimization  search  iteration-recursion  intelligence  metameta  chart  analysis  number  ems  coordination  cooperate-defect  death  values  formal-values  flux-stasis  philosophy  farmers-and-foragers  malthus  scale  studying  innovation  insight  conceptual-vocab  growth-econ  egalitarianism-hierarchy  inequality  authoritarianism  wealth  near-far  rationality  epistemic  biases  cycles  competition  arms  zero-positive-sum  deterrence  war  peace-violence  winner-take-all  technology  moloch  multi  plots  research  science  publishing  humanity  labor  marginal  urban-rural  structure  composition-decomposition  complex-systems  gregory-clark  decentralized  heavy-industry  magnitude  multiplicative  endogenous-exogenous  models  uncertainty  decision-theory  time-prefer 
april 2018 by nhaliday
[1410.0369] The Universe of Minds
kinda dumb, don't think this guy is anywhere close to legit (e.g., he claims set of mind designs is countable, but gives no actual reason to believe that)
papers  preprint  org:mat  ratty  miri-cfar  ai  intelligence  philosophy  logic  software  cs  computation  the-self 
march 2018 by nhaliday
Information Processing: US Needs a National AI Strategy: A Sputnik Moment?
FT podcasts on US-China competition and AI: http://infoproc.blogspot.com/2018/05/ft-podcasts-on-us-china-competition-and.html

A new recommended career path for effective altruists: China specialist: https://80000hours.org/articles/china-careers/
Our rough guess is that it would be useful for there to be at least ten people in the community with good knowledge in this area within the next few years.

By “good knowledge” we mean they’ve spent at least 3 years studying these topics and/or living in China.

We chose ten because that would be enough for several people to cover each of the major areas listed (e.g. 4 within AI, 2 within biorisk, 2 within foreign relations, 1 in another area).

AI Policy and Governance Internship: https://www.fhi.ox.ac.uk/ai-policy-governance-internship/

https://www.fhi.ox.ac.uk/deciphering-chinas-ai-dream/
https://www.fhi.ox.ac.uk/wp-content/uploads/Deciphering_Chinas_AI-Dream.pdf
Deciphering China’s AI Dream
The context, components, capabilities, and consequences of
China’s strategy to lead the world in AI

Europe’s AI delusion: https://www.politico.eu/article/opinion-europes-ai-delusion/
Brussels is failing to grasp threats and opportunities of artificial intelligence.
By BRUNO MAÇÃES

When the computer program AlphaGo beat the Chinese professional Go player Ke Jie in a three-part match, it didn’t take long for Beijing to realize the implications.

If algorithms can already surpass the abilities of a master Go player, it can’t be long before they will be similarly supreme in the activity to which the classic board game has always been compared: war.

As I’ve written before, the great conflict of our time is about who can control the next wave of technological development: the widespread application of artificial intelligence in the economic and military spheres.

...

If China’s ambitions sound plausible, that’s because the country’s achievements in deep learning are so impressive already. After Microsoft announced that its speech recognition software surpassed human-level language recognition in October 2016, Andrew Ng, then head of research at Baidu, tweeted: “We had surpassed human-level Chinese recognition in 2015; happy to see Microsoft also get there for English less than a year later.”

...

One obvious advantage China enjoys is access to almost unlimited pools of data. The machine-learning technologies boosting the current wave of AI expansion are as good as the amount of data they can use. That could be the number of people driving cars, photos labeled on the internet or voice samples for translation apps. With 700 or 800 million Chinese internet users and fewer data protection rules, China is as rich in data as the Gulf States are in oil.

How can Europe and the United States compete? They will have to be commensurately better in developing algorithms and computer power. Sadly, Europe is falling behind in these areas as well.

...

Chinese commentators have embraced the idea of a coming singularity: the moment when AI surpasses human ability. At that point a number of interesting things happen. First, future AI development will be conducted by AI itself, creating exponential feedback loops. Second, humans will become useless for waging war. At that point, the human mind will be unable to keep pace with robotized warfare. With advanced image recognition, data analytics, prediction systems, military brain science and unmanned systems, devastating wars might be waged and won in a matter of minutes.

...

The argument in the new strategy is fully defensive. It first considers how AI raises new threats and then goes on to discuss the opportunities. The EU and Chinese strategies follow opposite logics. Already on its second page, the text frets about the legal and ethical problems raised by AI and discusses the “legitimate concerns” the technology generates.

The EU’s strategy is organized around three concerns: the need to boost Europe’s AI capacity, ethical issues and social challenges. Unfortunately, even the first dimension quickly turns out to be about “European values” and the need to place “the human” at the center of AI — forgetting that the first word in AI is not “human” but “artificial.”

https://twitter.com/mr_scientism/status/983057591298351104
https://archive.is/m3Njh
US military: "LOL, China thinks it's going to be a major player in AI, but we've got all the top AI researchers. You guys will help us develop weapons, right?"

US AI researchers: "No."

US military: "But... maybe just a computer vision app."

US AI researchers: "NO."

https://www.theverge.com/2018/4/4/17196818/ai-boycot-killer-robots-kaist-university-hanwha
https://www.nytimes.com/2018/04/04/technology/google-letter-ceo-pentagon-project.html
https://twitter.com/mr_scientism/status/981685030417326080
https://archive.is/3wbHm
AI-risk was a mistake.
hsu  scitariat  commentary  video  presentation  comparison  usa  china  asia  sinosphere  frontier  technology  science  ai  speedometer  innovation  google  barons  deepgoog  stories  white-paper  strategy  migration  iran  human-capital  corporation  creative  alien-character  military  human-ml  nationalism-globalism  security  investing  government  games  deterrence  defense  nuclear  arms  competition  risk  ai-control  musk  optimism  multi  news  org:mag  europe  EU  80000-hours  effective-altruism  proposal  article  realness  offense-defense  war  biotech  altruism  language  foreign-lang  philosophy  the-great-west-whale  enhancement  foreign-policy  geopolitics  anglo  jobs  career  planning  hmm  travel  charity  tech  intel  media  teaching  tutoring  russia  india  miri-cfar  pdf  automation  class  labor  polisci  society  trust  n-factor  corruption  leviathan  ethics  authoritarianism  individualism-collectivism  revolution  economics  inequality  civic  law  regulation  data  scale  pro-rata  capital  zero-positive-sum  cooperate-defect  distribution  time-series  tre 
february 2018 by nhaliday
What Peter Thiel thinks about AI risk - Less Wrong
TL;DR: he thinks it's an issue but also feels AGI is very distant, and hence is less worried about it than Musk.

I recommend the rest of the lecture as well, it's a good summary of "Zero to One"  and a good QA afterwards.

For context, in case anyone doesn't realize: Thiel has been MIRI's top donor throughout its history.

other stuff:
nice interview question: "thing you know is true that not everyone agrees on?"
"learning from failure overrated"
cleantech a huge market, hard to compete
software makes for easy monopolies (zero marginal costs, network effects, etc.)
for most of history inventors did not benefit much (continuous competition)
ethical behavior is a luxury of monopoly
ratty  lesswrong  commentary  ai  ai-control  risk  futurism  technology  speedometer  audio  presentation  musk  thiel  barons  frontier  miri-cfar  charity  people  track-record  venture  startups  entrepreneurialism  contrarianism  competition  market-power  business  google  truth  management  leadership  socs-and-mops  dark-arts  skunkworks  hard-tech  energy-resources  wire-guided  learning  software  sv  tech  network-structure  scale  marginal  cost-benefit  innovation  industrial-revolution  economics  growth-econ  capitalism  comparison  nationalism-globalism  china  asia  trade  stagnation  things  dimensionality  exploratory  world  developing-world  thinking  definite-planning  optimism  pessimism  intricacy  politics  war  career  planning  supply-demand  labor  science  engineering  dirty-hands  biophysical-econ  migration  human-capital  policy  canada  anglo  winner-take-all  polarization  amazon  business-models  allodium  civilization  the-classics  microsoft  analogy  gibbon  conquest-empire  realness  cynicism-idealism  org:edu  open-closed  ethics  incentives  m 
february 2018 by nhaliday
Use and Interpretation of LD Score Regression
LD Score regression distinguishes confounding from polygenicity in genome-wide association studies: https://sci-hub.bz/10.1038/ng.3211
- Po-Ru Loh, Nick Patterson, et al.

https://www.biorxiv.org/content/biorxiv/early/2014/02/21/002931.full.pdf

Both polygenicity (i.e. many small genetic effects) and confounding biases, such as cryptic relatedness and population stratification, can yield inflated distributions of test statistics in genome-wide association studies (GWAS). However, current methods cannot distinguish between inflation from bias and true signal from polygenicity. We have developed an approach that quantifies the contributions of each by examining the relationship between test statistics and linkage disequilibrium (LD). We term this approach LD Score regression. LD Score regression provides an upper bound on the contribution of confounding bias to the observed inflation in test statistics and can be used to estimate a more powerful correction factor than genomic control. We find strong evidence that polygenicity accounts for the majority of test statistic inflation in many GWAS of large sample size.
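The mechanics of the method can be sketched with a toy simulation: under the LD Score regression model, E[chi2_j] = (N*h2/M) * ldscore_j + 1 + c, so regressing GWAS chi-square statistics on per-SNP LD scores separates polygenic signal (the slope) from confounding inflation (the intercept). A minimal pure-Python sketch — all parameter values and the noise model here are illustrative, not from the paper:

```python
import random

random.seed(0)
M, N, h2, confound = 100_000, 50_000, 0.4, 0.05

# Simulate per-SNP LD scores and chi-square statistics under the model:
# E[chi2_j] = (N*h2/M) * ldscore_j + 1 + c, with c the confounding inflation.
ldscore = [random.uniform(1, 200) for _ in range(M)]
chi2 = [N * h2 / M * l + 1 + confound + random.gauss(0, 1) for l in ldscore]

# Ordinary least squares of chi2 on LD score.
mean_l = sum(ldscore) / M
mean_c = sum(chi2) / M
num = sum((l - mean_l) * (c - mean_c) for l, c in zip(ldscore, chi2))
den = sum((l - mean_l) ** 2 for l in ldscore)
slope = num / den
intercept = mean_c - slope * mean_l

h2_est = slope * M / N  # slope recovers heritability scale
print(f"intercept ~ {intercept:.3f} (true 1.05), h2 ~ {h2_est:.3f} (true 0.40)")
```

The key intuition: confounding such as population stratification inflates test statistics roughly uniformly across SNPs (raising the intercept above 1), while true polygenic signal inflates them in proportion to how much variation each SNP tags (the LD score), producing slope.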

Supplementary Note: https://images.nature.com/original/nature-assets/ng/journal/v47/n3/extref/ng.3211-S1.pdf

An atlas of genetic correlations across human diseases
and traits: https://sci-hub.bz/10.1038/ng.3406

https://www.biorxiv.org/content/early/2015/01/27/014498.full.pdf

Supplementary Note: https://images.nature.com/original/nature-assets/ng/journal/v47/n11/extref/ng.3406-S1.pdf

https://github.com/bulik/ldsc
ldsc is a command line tool for estimating heritability and genetic correlation from GWAS summary statistics. ldsc also computes LD Scores.
nibble  pdf  slides  talks  bio  biodet  genetics  genomics  GWAS  genetic-correlation  correlation  methodology  bioinformatics  concept  levers  🌞  tutorial  explanation  pop-structure  gene-drift  ideas  multi  study  org:nat  article  repo  software  tools  libraries  stats  hypothesis-testing  biases  confounding  gotchas  QTL  simulation  survey  preprint  population-genetics 
november 2017 by nhaliday
Ancient Admixture in Human History
- Patterson, Reich et al., 2012
Population mixture is an important process in biology. We present a suite of methods for learning about population mixtures, implemented in a software package called ADMIXTOOLS, that support formal tests for whether mixture occurred and make it possible to infer proportions and dates of mixture. We also describe the development of a new single nucleotide polymorphism (SNP) array consisting of 629,433 sites with clearly documented ascertainment that was specifically designed for population genetic analyses and that we genotyped in 934 individuals from 53 diverse populations. To illustrate the methods, we give a number of examples that provide new insights about the history of human admixture. The most striking finding is a clear signal of admixture into northern Europe, with one ancestral population related to present-day Basques and Sardinians and the other related to present-day populations of northeast Asia and the Americas. This likely reflects a history of admixture between Neolithic migrants and the indigenous Mesolithic population of Europe, consistent with recent analyses of ancient bones from Sweden and the sequencing of the genome of the Tyrolean “Iceman.”
nibble  pdf  study  article  methodology  bio  sapiens  genetics  genomics  population-genetics  migration  gene-flow  software  trees  concept  history  antiquity  europe  roots  gavisti  🌞  bioinformatics  metrics  hypothesis-testing  levers  ideas  libraries  tools  pop-structure 
november 2017 by nhaliday
Superintelligence Risk Project Update II
https://www.jefftk.com/p/superintelligence-risk-project-update

https://www.jefftk.com/p/conversation-with-michael-littman
For example, I asked him what he thought of the idea that we could get AGI with current techniques, primarily deep neural nets and reinforcement learning, without learning anything new about how intelligence works or how to implement it ("Prosaic AGI" [1]). He didn't think this was possible, and believes there are deep conceptual issues we still need to get a handle on. He's also less impressed with deep learning than he was before he started working in it: in his experience it's a much more brittle technology than he had been expecting. Specifically, when trying to replicate results, he's often found that they depend on a bunch of parameters being in just the right range, and without that the systems don't perform nearly as well.

The bottom line, to him, was that since we are still many breakthroughs away from getting to AGI, we can't productively work on reducing superintelligence risk now.

He told me that he worries that the AI risk community is not solving real problems: they're making deductions and inferences that are self-consistent but not being tested or verified in the world. Since we can't tell if that's progress, it probably isn't. I asked if he was referring to MIRI's work here, and he said their work was an example of the kind of approach he's skeptical about, though he wasn't trying to single them out. [2]

https://www.jefftk.com/p/conversation-with-an-ai-researcher
Earlier this week I had a conversation with an AI researcher [1] at one of the main industry labs as part of my project of assessing superintelligence risk. Here's what I got from them:

They see progress in ML as almost entirely constrained by hardware and data, to the point that if today's hardware and data had existed in the mid 1950s researchers would have gotten to approximately our current state within ten to twenty years. They gave the example of backprop: we saw how to train multi-layer neural nets decades before we had the computing power to actually train these nets to do useful things.

Similarly, people talk about AlphaGo as a big jump, where Go went from being "ten years away" to "done" within a couple years, but they said it wasn't like that. If Go work had stayed in academia, with academia-level budgets and resources, it probably would have taken nearly that long. What changed was a company seeing promising results, realizing what could be done, and putting way more engineers and hardware on the project than anyone had previously done. AlphaGo couldn't have happened earlier because the hardware wasn't there yet, and was only able to be brought forward by massive application of resources.

https://www.jefftk.com/p/superintelligence-risk-project-conclusion
Summary: I'm not convinced that AI risk should be highly prioritized, but I'm also not convinced that it shouldn't. Highly qualified researchers in a position to have a good sense of the field have massively different views on core questions like how capable ML systems are now, how capable they will be soon, and how we can influence their development. I do think these questions are possible to get a better handle on, but I think this would require much deeper ML knowledge than I have.
ratty  core-rats  ai  risk  ai-control  prediction  expert  machine-learning  deep-learning  speedometer  links  research  research-program  frontier  multi  interview  deepgoog  games  hardware  performance  roots  impetus  chart  big-picture  state-of-art  reinforcement  futurism  🤖  🖥  expert-experience  singularity  miri-cfar  empirical  evidence-based  speculation  volo-avolo  clever-rats  acmtariat  robust  ideas  crux  atoms  detail-architecture  software  gradient-descent 
july 2017 by nhaliday
Identify Anything, Anywhere, Instantly (Well, Almost) With the Newest iNaturalist Release - Bay Nature
A new version of the California Academy of Sciences’ iNaturalist app uses artificial intelligence to offer immediate identifications for photos of any kind of wildlife. You can observe anywhere and ask the computer anything. I’ve been using it for a few weeks now and it seems like it mostly works. It is completely astonishing.
tools  sleuthin  software  app  mobility  ios  nature  outdoors  database  reference  info-foraging  toys 
july 2017 by nhaliday
Overcoming Bias : A Tangled Task Future
So we may often retain systems that inherit the structure of the human brain, and the structures of the social teams and organizations by which humans have worked together. All of which is another way to say: descendants of humans may have a long future as workers. We may have another future besides being retirees or iron-fisted peons ruling over gods. Even in a competitive future with no friendly singleton to ensure preferential treatment, something recognizably like us may continue. And even win.
ratty  hanson  speculation  automation  labor  economics  ems  futurism  prediction  complex-systems  network-structure  intricacy  thinking  engineering  management  law  compensation  psychology  cog-psych  ideas  structure  gray-econ  competition  moloch  coordination  cooperate-defect  risk  ai  ai-control  singularity  number  humanity  complement-substitute  cybernetics  detail-architecture  legacy  threat-modeling  degrees-of-freedom  composition-decomposition  order-disorder  analogy  parsimony  institutions  software 
june 2017 by nhaliday
:feed v1 - /fora/posts/~2017.4.12..21.14.00..fe17~
The goal of this demo was to show that building a Twitter replacement actually isn't that hard at all; and it can be done almost entirely on the frontend. As shown, you don't even have to use React/Redux. But that's probably the way to go if you want to build the real thing.
techtariat  urbit  software  decentralized  twitter  social  internet  web  programming  tutorial  project  gnon 
april 2017 by nhaliday
Overcoming Bias : On the goodness of Beeminder
There is a lot of leeway in what indicators you measure, and some I tried didn’t help much. The main things I measure lately are:

- number of 20 minute blocks of time spent working. They have to be continuous, though a tiny bit of interruption is allowed if someone else causes it
- time spent exercising weighted by the type of exercise e.g. running = 2x dancing = 2 x walking
- points accrued for doing tasks on my to-do list. When I think of anything I want to do I put it on the list, whether it’s watching a certain movie or figuring out how to make the to do list system better. Some things stay there permanently, e.g. laundry. I assign each task a number of points, which goes up every Sunday if it’s still on the list. I have to get 15 points per day or I lose.
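The to-do scoring scheme in that last bullet can be sketched in a few lines — the data structures and the size of the weekly bump are my invention; only the escalating points and the 15-points-per-day goal come from the post:

```python
# Toy sketch of the described to-do scheme: each task carries a point
# value that escalates every Sunday while it stays on the list, and you
# must bank at least 15 points per day or you lose (e.g. to Beeminder).
DAILY_GOAL = 15

tasks = {"laundry": 2, "watch movie": 3, "fix todo system": 5}

def sunday_rollover(tasks, bump=1):
    """Every Sunday, any task still on the list gains points."""
    return {name: pts + bump for name, pts in tasks.items()}

def complete(tasks, name, earned_today):
    """Finish a task: remove it from the list and bank its points."""
    remaining = {k: v for k, v in tasks.items() if k != name}
    return remaining, earned_today + tasks[name]

tasks, earned = complete(tasks, "fix todo system", 0)
tasks, earned = complete(tasks, "watch movie", earned)
print(earned, earned >= DAILY_GOAL)  # 8 points so far: still short of the goal
```

The escalation is the clever bit: a task you keep postponing becomes steadily more lucrative, so procrastinated items eventually price themselves into getting done.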
ratty  core-rats  hanson  rationality  money-for-time  akrasia  productivity  workflow  webapp  tools  review  software  exocortex  decision-making  working-stiff  the-monster  🦉  beeminder  skeleton  summary  gtd  time-use  quantified-self  procrastination 
january 2017 by nhaliday
Edge.org: 2016 : WHAT DO YOU CONSIDER THE MOST INTERESTING RECENT [SCIENTIFIC] NEWS? WHAT MAKES IT IMPORTANT?
highlights:
- quantum supremacy [Scott Aaronson]
- gene drive
- gene editing/CRISPR
- carcinogen may be entropy
- differentiable programming
- quantitative biology
soft:
- antisocial punishment of pro-social cooperators
- "strongest prejudice" (politics) [Haidt]
- Europeans' origins [Cochran]
- "Anthropic Capitalism And The New Gimmick Economy" [Eric Weinstein]

https://twitter.com/toad_spotted/status/986253381344907265
https://archive.is/gNGDJ
There's an underdiscussed contradiction between the idea that our society would make almost all knowledge available freely and instantaneously to almost everyone and that almost everyone would find gainful employment as knowledge workers. Value is in scarcity not abundance.
--
You’d need to turn reputational-based systems into an income stream
technology  discussion  trends  gavisti  west-hunter  aaronson  haidt  list  expert  science  biotech  geoengineering  top-n  org:edge  frontier  multi  CRISPR  2016  big-picture  links  the-world-is-just-atoms  quantum  quantum-info  computation  metameta  🔬  scitariat  q-n-a  zeitgeist  speedometer  cancer  random  epidemiology  mutation  GT-101  cooperate-defect  cultural-dynamics  anthropology  expert-experience  tcs  volo-avolo  questions  thiel  capitalism  labor  supply-demand  internet  tech  economics  broad-econ  prediction  automation  realness  gnosis-logos  iteration-recursion  similarity  uniqueness  homo-hetero  education  duplication  creative  software  programming  degrees-of-freedom  futurism  order-disorder  flux-stasis  public-goodish  markets  market-failure  piracy  property-rights  free-riding  twitter  social  backup  ratty  unaffiliated  gnon  contradiction  career  planning  hmm  idk  knowledge  higher-ed  pro-rata  sociality  reinforcement  tribalism  us-them  politics  coalitions  prejudice  altruism  human-capital  engineering  unintended-consequences 
november 2016 by nhaliday
Decision Tree for Optimization Software
including convex programming

Mosek comes out pretty well in the benchmarks, but it is not Pareto-optimal
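For context, a minimal sketch of the kind of problem a convex-programming decision tree routes to a solver — here a toy unconstrained convex objective minimized by gradient descent in plain Python, rather than by an actual solver like Mosek:

```python
# Toy convex problem: minimize f(x) = (x - 3)^2 + 1.
# Real solvers (Mosek, etc.) handle large constrained problems;
# this only illustrates the shape of a convex objective.

def grad_descent(grad, x0, lr=0.1, steps=200):
    """Plain gradient descent; converges for smooth convex objectives."""
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# f'(x) = 2 * (x - 3); the unique minimum is at x = 3.
x_star = grad_descent(lambda x: 2 * (x - 3), x0=0.0)
print(round(x_star, 4))
```

For anything beyond a toy like this (constraints, cones, scale), the decision tree's point is exactly that you should pick a dedicated solver instead.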
benchmarks  optimization  software  libraries  comparison  data  performance  faq  frameworks  curvature  convexity-curvature 
november 2016 by nhaliday
Cryptpad: Zero Knowledge, Collaborative Real Time Editing | Hacker News
comments have interesting discussion of use of "zero-knowledge" in practice
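The HN commenters' point is that "zero-knowledge" here really means client-side encryption: the key lives in the URL fragment (never sent to the server), so the server only stores ciphertext. A toy sketch of that property, with an illustrative XOR keystream that is emphatically not a real cipher:

```python
# Toy illustration of CryptPad-style "zero-knowledge": the server stores
# only ciphertext; the key stays client-side (e.g. in the URL fragment).
# SHA-256-in-counter-mode XOR is for demonstration only -- NOT real crypto.
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    """Derive n pseudorandom bytes from key via hashing a counter."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def xor_cipher(key: bytes, data: bytes) -> bytes:
    """XOR data with the keystream; applying twice round-trips."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

key = b"secret-in-url-fragment"          # hypothetical client-held key
plaintext = b"collaborative document state"
stored_on_server = xor_cipher(key, plaintext)   # server sees only this
recovered = xor_cipher(key, stored_on_server)   # client decrypts locally
```

The commenters' complaint is that this is just end-to-end encryption, not zero-knowledge proofs in the cryptographic sense — the marketing term and the formal term diverge.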
commentary  hn  project  software  tools  crypto  privacy  hmm  engineering 
september 2016 by nhaliday


