local-global   74


"Performance Matters" by Emery Berger - YouTube
Stabilizer is a tool that enables statistically sound performance evaluation, making it possible to measure the impact of optimizations and to conclude, for example, that the difference between the -O2 and -O3 optimization levels is indistinguishable from noise (sadly true).

Since compiler optimizations have run out of steam, we need better profiling support, especially for modern concurrent, multi-threaded applications. Coz is a new "causal profiler" that lets programmers optimize for throughput or latency, and which pinpoints and accurately predicts the impact of optimizations.

- randomize extraneous factors like code layout and stack size to avoid spurious speedups
- simulate speedup of component of concurrent system (to assess effect of optimization before attempting) by slowing down the complement (all but that component)
- latency vs. throughput, Little's law
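Little's law from the last bullet can be sketched in a few lines of Python (the function name and the example numbers are my own, for illustration):

```python
def littles_law_occupancy(arrival_rate_per_s, mean_latency_s):
    """Little's law: L = lambda * W -- the mean number of items in a
    steady-state system equals arrival rate times mean time in system,
    regardless of the arrival or service distributions."""
    return arrival_rate_per_s * mean_latency_s

# e.g. a server handling 200 req/s at 50 ms mean latency keeps
# about 10 requests in flight on average
print(littles_law_occupancy(200, 0.050))  # -> 10.0
```

The same identity lets you trade the two Coz optimization targets against each other: at a fixed concurrency level, cutting latency raises throughput pro rata, and vice versa.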
video  presentation  programming  engineering  nitty-gritty  performance  devtools  compilers  latency-throughput  concurrency  legacy  causation  wire-guided  let-me-see  manifolds  pro-rata  tricks  endogenous-exogenous  control  random  signal-noise  comparison  marginal  llvm  systems  hashing  computer-memory  build-packaging  composition-decomposition  coupling-cohesion  local-global  dbs  direct-indirect  symmetry  research  models  metal-to-virtual  linux  measurement  simulation  magnitude  realness  hypothesis-testing 
5 weeks ago by nhaliday
How to come up with the solutions: techniques - Codeforces
Technique 1: "Total Recall"
Technique 2: "From Specific to General"
Let's say that you've found the solution to the problem (hurray!). Now consider some particular case of the problem: of course, you can apply the algorithm/solution to it. In other words, a solution to the general problem must also solve all of its specific cases. So try solving some specific case (or several) first, and then try to generalize them into a solution of the main problem.
Technique 3: "Bold Hypothesis"
Technique 4: "To solve a problem, you should think like a problem"
Technique 5: "Think together"
Technique 6: "Pick a Method"
Technique 7: "Print Out and Look"
Technique 8: "Google"
oly  oly-programming  problem-solving  thinking  expert-experience  retention  metabuch  visual-understanding  zooming  local-global  collaboration  tactics  debugging  bare-hands  let-me-see  advice 
august 2019 by nhaliday
The Scholar's Stage: Book Notes—Strategy: A History
Freedman's book is something of a shadow history of Western intellectual thought between 1850 and 2010. Marx, Tolstoy, Foucault, game theorists, economists, business law--it is all in there.

Thus the thoughts prompted by this book have surprisingly little to do with war.
Instead I am left with questions about the long-term trajectory of Western thought. Specifically:

*Has America really dominated Western intellectual life in the post-45 world as much as English speakers seem to think it has?
*Has the professionalization/credential-ization of Western intellectual life helped or harmed our ability to understand society?
*Will we ever recover from the 1960s?
wonkish  unaffiliated  broad-econ  books  review  reflection  summary  strategy  war  higher-ed  academia  social-science  letters  organizing  nascent-state  counter-revolution  rot  westminster  culture-war  left-wing  anglosphere  usa  history  mostly-modern  coordination  lens  local-global  europe  gallic  philosophy  cultural-dynamics  anthropology  game-theory  industrial-org  schelling  flux-stasis  trends  culture  iraq-syria  MENA  military  frontier  info-dynamics  big-peeps  politics  multi  twitter  social  commentary  backup  defense 
july 2019 by nhaliday
PythonSpeed/PerformanceTips - Python Wiki
some are obsolete, but I think, e.g., the tip about using local vars over globals is still applicable
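That local-vs-global tip can be demonstrated with a small micro-benchmark (the function names and data sizes here are made up for illustration): in CPython, a local variable lookup is a fast array index, while a global/builtin lookup is a dict probe on every iteration.

```python
import timeit

def sum_lengths_global(data):
    total = 0
    for item in data:
        total += len(item)          # global/builtin lookup each iteration
    return total

def sum_lengths_local(data, _len=len):  # bind `len` once, at def time
    total = 0
    for item in data:
        total += _len(item)         # local lookup each iteration
    return total

data = [[0] * 5 for _ in range(10_000)]
t_global = timeit.timeit(lambda: sum_lengths_global(data), number=50)
t_local = timeit.timeit(lambda: sum_lengths_local(data), number=50)
print(f"global lookup: {t_global:.3f}s, local lookup: {t_local:.3f}s")
```

The speedup is modest and CPython-version-dependent, which is the wiki page's broader caveat: measure before you micro-optimize.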
wiki  reference  cheatsheet  objektbuch  list  programming  python  performance  pls  local-global 
june 2019 by nhaliday
AFL + QuickCheck = ?
Adventures in fuzzing. Also differences between testing culture in software and hardware.
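The QuickCheck half of the combination can be sketched by hand (this is a toy: real QuickCheck adds counterexample shrinking, and AFL adds coverage-guided input generation; the names below are invented for illustration):

```python
import random

def prop_reverse_involutive(xs):
    """Property under test: reversing a list twice yields the original."""
    return list(reversed(list(reversed(xs)))) == xs

def check_property(prop, gen, trials=200):
    """Run `prop` on random inputs; return a counterexample or None."""
    for _ in range(trials):
        xs = gen()
        if not prop(xs):
            return xs
    return None

gen = lambda: [random.randint(-100, 100)
               for _ in range(random.randint(0, 20))]
print(check_property(prop_reverse_involutive, gen))  # -> None (holds)
```

The hardware-testing analogue the post discusses is essentially this loop with far more engineering around input generation and coverage.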
techtariat  dan-luu  programming  engineering  checking  random  haskell  path-dependence  span-cover  heuristic  libraries  links  tools  devtools  software  hardware  culture  formal-methods  local-global  golang  correctness  methodology 
may 2019 by nhaliday
Ultimate fate of the universe - Wikipedia
The fate of the universe is determined by its density. The preponderance of evidence to date, based on measurements of the rate of expansion and the mass density, favors a universe that will continue to expand indefinitely, resulting in the "Big Freeze" scenario below.[8] However, observations are not conclusive, and alternative models are still possible.[9]

Big Freeze or heat death
Main articles: Future of an expanding universe and Heat death of the universe
The Big Freeze is a scenario under which continued expansion results in a universe that asymptotically approaches absolute zero temperature.[10] This scenario, in combination with the Big Rip scenario, is currently gaining ground as the most important hypothesis.[11] It could, in the absence of dark energy, occur only under a flat or hyperbolic geometry. With a positive cosmological constant, it could also occur in a closed universe. In this scenario, stars are expected to form normally for 10^12 to 10^14 (1–100 trillion) years, but eventually the supply of gas needed for star formation will be exhausted. As existing stars run out of fuel and cease to shine, the universe will slowly and inexorably grow darker. Eventually black holes will dominate the universe, which themselves will disappear over time as they emit Hawking radiation.[12] Over infinite time, there would be a spontaneous entropy decrease by the Poincaré recurrence theorem, thermal fluctuations,[13][14] and the fluctuation theorem.[15][16]

A related scenario is heat death, which states that the universe goes to a state of maximum entropy in which everything is evenly distributed and there are no gradients—which are needed to sustain information processing, one form of which is life. The heat death scenario is compatible with any of the three spatial models, but requires that the universe reach an eventual temperature minimum.[17]
physics  big-picture  world  space  long-short-run  futurism  singularity  wiki  reference  article  nibble  thermo  temperature  entropy-like  order-disorder  death  nihil  bio  complex-systems  cybernetics  increase-decrease  trends  computation  local-global  prediction  time  spatial  spreading  density  distribution  manifolds  geometry  janus 
april 2018 by nhaliday
The Hanson-Yudkowsky AI-Foom Debate - Machine Intelligence Research Institute
How Deviant Recent AI Progress Lumpiness?: http://www.overcomingbias.com/2018/03/how-deviant-recent-ai-progress-lumpiness.html
I seem to disagree with most people working on artificial intelligence (AI) risk. While with them I expect rapid change once AI is powerful enough to replace most all human workers, I expect this change to be spread across the world, not concentrated in one main localized AI system. The efforts of AI risk folks to design AI systems whose values won’t drift might stop global AI value drift if there is just one main AI system. But doing so in a world of many AI systems at similar ability levels requires strong global governance of AI systems, which is a tall order anytime soon. Their continued focus on preventing single-system drift suggests that they expect a single main AI system.

The main reason that I understand to expect relatively local AI progress is if AI progress is unusually lumpy, i.e., arriving in unusually fewer larger packages rather than in the usual many smaller packages. If one AI team finds a big lump, it might jump way ahead of the other teams.

However, we have a vast literature on the lumpiness of research and innovation more generally, which clearly says that usually most of the value in innovation is found in many small innovations. We have also seen this so far in computer science (CS) and AI, even though there have been historical examples where much value was found in particular big innovations, such as nuclear weapons or the origin of humans.

Apparently many people associated with AI risk, including the star machine learning (ML) researchers that they often idolize, find it intuitively plausible that AI and ML progress is exceptionally lumpy. Such researchers often say, “My project is ‘huge’, and will soon do it all!” A decade ago my ex-co-blogger Eliezer Yudkowsky and I argued here on this blog about our differing estimates of AI progress lumpiness. He recently offered AlphaGo Zero as evidence of AI lumpiness:


In this post, let me give another example (beyond two big lumps in a row) of what could change my mind. I offer a clear observable indicator, for which data should be available now: deviant citation lumpiness in recent ML research. One standard measure of research impact is citations; bigger lumpier developments gain more citations than smaller ones. And it turns out that the lumpiness of citations is remarkably constant across research fields! See this March 3 paper in Science:

I Still Don’t Get Foom: http://www.overcomingbias.com/2014/07/30855.html
All of which makes it look like I’m the one with the problem; everyone else gets it. Even so, I’m gonna try to explain my problem again, in the hope that someone can explain where I’m going wrong. Here goes.

“Intelligence” just means an ability to do mental/calculation tasks, averaged over many tasks. I’ve always found it plausible that machines will continue to do more kinds of mental tasks better, and eventually be better at pretty much all of them. But what I’ve found it hard to accept is a “local explosion.” This is where a single machine, built by a single project using only a tiny fraction of world resources, goes in a short time (e.g., weeks) from being so weak that it is usually beat by a single human with the usual tools, to so powerful that it easily takes over the entire world. Yes, smarter machines may greatly increase overall economic growth rates, and yes such growth may be uneven. But this degree of unevenness seems implausibly extreme. Let me explain.

If we count by economic value, humans now do most of the mental tasks worth doing. Evolution has given us a brain chock-full of useful well-honed modules. And the fact that most mental tasks require the use of many modules is enough to explain why some of us are smarter than others. (There’d be a common “g” factor in task performance even with independent module variation.) Our modules aren’t that different from those of other primates, but because ours are different enough to allow lots of cultural transmission of innovation, we’ve out-competed other primates handily.

We’ve had computers for over seventy years, and have slowly built up libraries of software modules for them. Like brains, computers do mental tasks by combining modules. An important mental task is software innovation: improving these modules, adding new ones, and finding new ways to combine them. Ideas for new modules are sometimes inspired by the modules we see in our brains. When an innovation team finds an improvement, they usually sell access to it, which gives them resources for new projects, and lets others take advantage of their innovation.


In Bostrom’s graph above the line for an initially small project and system has a much higher slope, which means that it becomes in a short time vastly better at software innovation. Better than the entire rest of the world put together. And my key question is: how could it plausibly do that? Since the rest of the world is already trying the best it can to usefully innovate, and to abstract to promote such innovation, what exactly gives one small project such a huge advantage to let it innovate so much faster?


In fact, most software innovation seems to be driven by hardware advances, instead of innovator creativity. Apparently, good ideas are available but must usually wait until hardware is cheap enough to support them.

Yes, sometimes architectural choices have wider impacts. But I was an artificial intelligence researcher for nine years, ending twenty years ago, and I never saw an architecture choice make a huge difference, relative to other reasonable architecture choices. For most big systems, overall architecture matters a lot less than getting lots of detail right. Researchers have long wandered the space of architectures, mostly rediscovering variations on what others found before.

Some hope that a small project could be much better at innovation because it specializes in that topic, and much better understands new theoretical insights into the basic nature of innovation or intelligence. But I don’t think those are actually topics where one can usefully specialize much, or where we’ll find much useful new theory. To be much better at learning, the project would instead have to be much better at hundreds of specific kinds of learning. Which is very hard to do in a small project.

What does Bostrom say? Alas, not much. He distinguishes several advantages of digital over human minds, but all software shares those advantages. Bostrom also distinguishes five paths: better software, brain emulation (i.e., ems), biological enhancement of humans, brain-computer interfaces, and better human organizations. He doesn’t think interfaces would work, and sees organizations and better biology as only playing supporting roles.


Similarly, while you might imagine someday standing in awe in front of a super intelligence that embodies all the power of a new age, superintelligence just isn’t the sort of thing that one project could invent. As “intelligence” is just the name we give to being better at many mental tasks by using many good mental modules, there’s no one place to improve it. So I can’t see a plausible way one project could increase its intelligence vastly faster than could the rest of the world.

Takeoff speeds: https://sideways-view.com/2018/02/24/takeoff-speeds/
Futurists have argued for years about whether the development of AGI will look more like a breakthrough within a small group (“fast takeoff”), or a continuous acceleration distributed across the broader economy or a large firm (“slow takeoff”).

I currently think a slow takeoff is significantly more likely. This post explains some of my reasoning and why I think it matters. Mostly the post lists arguments I often hear for a fast takeoff and explains why I don’t find them compelling.

(Note: this is not a post about whether an intelligence explosion will occur. That seems very likely to me. Quantitatively I expect it to go along these lines. So e.g. while I disagree with many of the claims and assumptions in Intelligence Explosion Microeconomics, I don’t disagree with the central thesis or with most of the arguments.)
ratty  lesswrong  subculture  miri-cfar  ai  risk  ai-control  futurism  books  debate  hanson  big-yud  prediction  contrarianism  singularity  local-global  speed  speedometer  time  frontier  distribution  smoothness  shift  pdf  economics  track-record  abstraction  analogy  links  wiki  list  evolution  mutation  selection  optimization  search  iteration-recursion  intelligence  metameta  chart  analysis  number  ems  coordination  cooperate-defect  death  values  formal-values  flux-stasis  philosophy  farmers-and-foragers  malthus  scale  studying  innovation  insight  conceptual-vocab  growth-econ  egalitarianism-hierarchy  inequality  authoritarianism  wealth  near-far  rationality  epistemic  biases  cycles  competition  arms  zero-positive-sum  deterrence  war  peace-violence  winner-take-all  technology  moloch  multi  plots  research  science  publishing  humanity  labor  marginal  urban-rural  structure  composition-decomposition  complex-systems  gregory-clark  decentralized  heavy-industry  magnitude  multiplicative  endogenous-exogenous  models  uncertainty  decision-theory  time-prefer 
april 2018 by nhaliday
Existential Risks: Analyzing Human Extinction Scenarios
Would you endorse choosing policy to max the expected duration of civilization, at least as a good first approximation?
Can anyone suggest a different first approximation that would get more votes?

How useful would it be to agree on a relatively-simple first-approximation observable-after-the-fact metric for what we want from the future universe, such as total life years experienced, or civilization duration?

We're Underestimating the Risk of Human Extinction: https://www.theatlantic.com/technology/archive/2012/03/were-underestimating-the-risk-of-human-extinction/253821/
An Oxford philosopher argues that we are not adequately accounting for technology's risks—but his solution to the problem is not for Luddites.

Anderson: You have argued that we underrate existential risks because of a particular kind of bias called observation selection effect. Can you explain a bit more about that?

Bostrom: The idea of an observation selection effect is maybe best explained by first considering the simpler concept of a selection effect. Let's say you're trying to estimate how large the largest fish in a given pond is, and you use a net to catch a hundred fish and the biggest fish you find is three inches long. You might be tempted to infer that the biggest fish in this pond is not much bigger than three inches, because you've caught a hundred of them and none of them are bigger than three inches. But if it turns out that your net could only catch fish up to a certain length, then the measuring instrument that you used would introduce a selection effect: it would only select from a subset of the domain you were trying to sample.

Now that's a kind of standard fact of statistics, and there are methods for trying to correct for it and you obviously have to take that into account when considering the fish distribution in your pond. An observation selection effect is a selection effect introduced not by limitations in our measurement instrument, but rather by the fact that all observations require the existence of an observer. This becomes important, for instance, in evolutionary biology. For instance, we know that intelligent life evolved on Earth. Naively, one might think that this piece of evidence suggests that life is likely to evolve on most Earth-like planets. But that would be to overlook an observation selection effect. For no matter how small the proportion of all Earth-like planets that evolve intelligent life, we will find ourselves on a planet that did. Our data point (that intelligent life arose on our planet) is predicted equally well by the hypothesis that intelligent life is very improbable even on Earth-like planets as by the hypothesis that intelligent life is highly probable on Earth-like planets. When it comes to human extinction and existential risk, there are certain controversial ways that observation selection effects might be relevant.
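Bostrom's net example is easy to simulate (all numbers below are my own assumptions, purely for illustration): if true fish lengths are uniform on (0, 12) inches but the net only catches fish up to 3 inches, the sample maximum from a hundred caught fish badly understates the true maximum.

```python
import random

random.seed(0)
pond = [random.uniform(0, 12) for _ in range(10_000)]  # true lengths
catch = [length for length in pond if length <= 3.0][:100]  # net's limit

print(f"true largest fish: {max(pond):.1f} in")   # close to 12
print(f"largest in catch:  {max(catch):.1f} in")  # at most 3, by construction
```

The observation selection effect is the same bias with the "net" replaced by the requirement that an observer exist at all, which is what makes it impossible to correct for by inspecting the instrument.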
bostrom  ratty  miri-cfar  skunkworks  philosophy  org:junk  list  top-n  frontier  speedometer  risk  futurism  local-global  scale  death  nihil  technology  simulation  anthropic  nuclear  deterrence  environment  climate-change  arms  competition  ai  ai-control  genetics  genomics  biotech  parasites-microbiome  disease  offense-defense  physics  tails  network-structure  epidemiology  space  geoengineering  dysgenics  ems  authoritarianism  government  values  formal-values  moloch  enhancement  property-rights  coordination  cooperate-defect  flux-stasis  ideas  prediction  speculation  humanity  singularity  existence  cybernetics  study  article  letters  eden-heaven  gedanken  multi  twitter  social  discussion  backup  hanson  metrics  optimization  time  long-short-run  janus  telos-atelos  poll  forms-instances  threat-modeling  selection  interview  expert-experience  malthus  volo-avolo  intel  leviathan  drugs  pharma  data  estimate  nature  longevity  expansionism  homo-hetero  utopia-dystopia 
march 2018 by nhaliday
