nhaliday + performance

c - What REALLY happens when you don't free after malloc? - Stack Overflow
keep this in mind when writing competition code: you can usually just omit deletes/frees unless you're really running up against the memory limit:
Just about every modern operating system will recover all the allocated memory space after a program exits.

...

On the other hand, the similar admonition to close your files on exit has a much more concrete result - if you don't, the data you wrote to them might not get flushed, or if they're a temp file, they might not get deleted when you're done. Also, database handles should have their transactions committed and then closed when you're done with them. Similarly, if you're using an object oriented language like C++ or Objective C, not freeing an object when you're done with it will mean the destructor will never get called, and any resources the class is responsible for might not get cleaned up.

--

I really consider this answer wrong. One should always deallocate resources after one is done with them, be it file handles, memory, or mutexes. By having that habit, one will not make that sort of mistake when building servers. Some servers are expected to run 24x7. In those cases, any leak of any sort means that your server will eventually run out of that resource and hang or crash in some way. For a short utility program, yeah, a leak isn't that bad. For any server, any leak is death. Do yourself a favor. Clean up after yourself. It's a good habit.

--

Allocation Myth 4: Non-garbage-collected programs should always deallocate all memory they allocate.

The Truth: Omitted deallocations in frequently executed code cause growing leaks. They are rarely acceptable. But programs that retain most allocated memory until program exit often perform better without any intervening deallocation. Malloc is much easier to implement if there is no free.

In most cases, deallocating memory just before program exit is pointless. The OS will reclaim it anyway. Free will touch and page in the dead objects; the OS won't.

Consequence: Be careful with "leak detectors" that count allocations. Some "leaks" are good!
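
A minimal sketch of the pattern this justifies for competition code (hypothetical snippet, not from the linked answer): allocate everything from a static pool, never call free/delete, and let the OS reclaim the whole address space at exit.

```cpp
#include <cstdio>

struct Node {
    int value;
    Node* next;
};

static Node pool[1 << 20];  // fixed arena, sized for the problem's limits (assumed)
static int pool_used = 0;

// Bump allocation: there is intentionally no matching free anywhere in the program.
Node* new_node(int value, Node* next) {
    Node* n = &pool[pool_used++];
    n->value = value;
    n->next = next;
    return n;
}

int main() {
    Node* head = nullptr;
    for (int i = 0; i < 10; i++) head = new_node(i, head);
    for (Node* p = head; p != nullptr; p = p->next) std::printf("%d ", p->value);
    std::printf("\n");
    return 0;  // no cleanup: process exit hands the whole address space back to the OS
}
```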
q-n-a  stackex  programming  memory-management  performance  systems  c(pp)  oly-programming 
11 days ago by nhaliday
Theory of Self-Reproducing Automata - John von Neumann
Fourth Lecture: THE ROLE OF HIGH AND OF EXTREMELY HIGH COMPLICATION

Comparisons between computing machines and the nervous system. Estimates of size for computing machines, present and near future.

Estimates for size for the human central nervous system. Excursus about the “mixed” character of living organisms. Analog and digital elements. Observations about the “mixed” character of all componentry, artificial as well as natural. Interpretation of the position to be taken with respect to these.

Evaluation of the discrepancy in size between artificial and natural automata. Interpretation of this discrepancy in terms of physical factors. Nature of the materials used.

The probability of the presence of other intellectual factors. The role of complication and the theoretical penetration that it requires.

Questions of reliability and errors reconsidered. Probability of individual errors and length of procedure. Typical lengths of procedure for computing machines and for living organisms--that is, for artificial and for natural automata. Upper limits on acceptable probability of error in individual operations. Compensation by checking and self-correcting features.

Differences of principle in the way in which errors are dealt with in artificial and in natural automata. The “single error” principle in artificial automata. Crudeness of our approach in this case, due to the lack of adequate theory. More sophisticated treatment of this problem in natural automata: The role of the autonomy of parts. Connections between this autonomy and evolution.

- 10^10 neurons in the brain vs. 10^4 vacuum tubes in the largest computer of the time
- machines are faster: ~5 ms from neuron potential to neuron potential in the brain vs. ~10^-3 ms for vacuum tubes

https://en.wikipedia.org/wiki/John_von_Neumann#Computing
pdf  article  papers  essay  nibble  math  cs  computation  bio  neuro  neuro-nitgrit  scale  magnitude  comparison  acm  von-neumann  giants  thermo  phys-energy  speed  performance  time  density  frequency  hardware  ems  efficiency  dirty-hands  street-fighting  fermi  estimate  retention  physics  interdisciplinary  multi  wiki  links  people  🔬  atoms  automata  duplication  iteration-recursion  turing  complexity  measure  nature  technology  complex-systems  bits  information-theory  circuits  robust  structure  composition-decomposition  evolution  mutation  axioms  analogy  thinking  input-output  hi-order-bits  coding-theory  flexibility  rigidity 
april 2018 by nhaliday
Complexity no Bar to AI - Gwern.net
Critics of AI risk suggest that diminishing returns to computing (formalized asymptotically) mean AI will be weak; this argument relies on a large number of questionable premises, ignores additional resources, constant factors, and nonlinear returns to small intelligence advantages, and is highly unlikely to hold. (computer science, transhumanism, AI, R)
created: 1 June 2014; modified: 01 Feb 2018; status: finished; confidence: likely; importance: 10
ratty  gwern  analysis  faq  ai  risk  speedometer  intelligence  futurism  cs  computation  complexity  tcs  linear-algebra  nonlinearity  convexity-curvature  average-case  adversarial  article  time-complexity  singularity  iteration-recursion  magnitude  multiplicative  lower-bounds  no-go  performance  hardware  humanity  psychology  cog-psych  psychometrics  iq  distribution  moments  complement-substitute  hanson  ems  enhancement  parable  detail-architecture  universalism-particularism  neuro  ai-control  environment  climate-change  threat-modeling  security  theory-practice  hacker  academia  realness  crypto  rigorous-crypto  usa  government 
april 2018 by nhaliday
Recitation 25: Data locality and B-trees
The same idea can be applied to trees. Binary trees are not good for locality because a given node of the binary tree probably occupies only a fraction of a cache line. B-trees are a way to get better locality. As in the hash table trick above, we store several elements in a single node -- as many as will fit in a cache line.

B-trees were originally invented for storing data structures on disk, where locality is even more crucial than with memory. Accessing a disk location takes about 5ms = 5,000,000ns. Therefore if you are storing a tree on disk you want to make sure that a given disk read is as effective as possible. B-trees, with their high branching factor, ensure that few disk reads are needed to navigate to the place where data is stored. B-trees are also useful for in-memory data structures because these days main memory is almost as slow relative to the processor as disk drives were when B-trees were introduced!
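
A minimal sketch of the idea (hypothetical code, not the recitation's): pack several sorted keys into each node so that a single cache-line or disk-block read covers many comparisons; the branching factor of 14 is just an assumed value chosen so the key array fits a 64-byte line.

```cpp
#include <algorithm>
#include <cstdint>
#include <cstdio>

constexpr int B = 14;  // assumed: 14 four-byte keys plus a count fit in one 64-byte cache line

struct BTreeNode {
    int32_t nkeys;            // number of keys actually stored
    int32_t keys[B];          // sorted keys, stored contiguously
    BTreeNode* child[B + 1];  // child[i] covers keys less than keys[i]; null in a leaf
};

// Search touches one node's key array (one cache line / disk block) per level,
// instead of one cache line per key as in a pointer-based binary tree.
bool contains(const BTreeNode* node, int32_t key) {
    while (node != nullptr) {
        const int32_t* end = node->keys + node->nkeys;
        const int32_t* pos = std::lower_bound(node->keys, end, key);
        if (pos != end && *pos == key) return true;
        node = node->child[pos - node->keys];
    }
    return false;
}

int main() {
    BTreeNode leaf = {};  // zero-initialized: no children, acts as a single leaf
    for (int32_t k : {2, 3, 5, 7, 11}) leaf.keys[leaf.nkeys++] = k;
    std::printf("%d %d\n", contains(&leaf, 5), contains(&leaf, 6));  // prints "1 0"
    return 0;
}
```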
nibble  org:junk  org:edu  cornell  lecture-notes  exposition  programming  engineering  systems  dbs  caching  performance  memory-management  os 
september 2017 by nhaliday
Anatomy of an SQL Index: What is an SQL Index
“An index makes the query fast” is the most basic explanation of an index I have ever seen. Although it describes the most important aspect of an index very well, it is—unfortunately—not sufficient for this book. This chapter describes the index structure in a less superficial way but doesn't dive too deeply into details. It provides just enough insight for one to understand the SQL performance aspects discussed throughout the book.

B-trees, etc.
techtariat  tutorial  explanation  performance  programming  engineering  dbs  trees  data-structures  nibble 
september 2017 by nhaliday
Superintelligence Risk Project Update II
https://www.jefftk.com/p/superintelligence-risk-project-update

https://www.jefftk.com/p/conversation-with-michael-littman
For example, I asked him what he thought of the idea that we could get AGI with current techniques, primarily deep neural nets and reinforcement learning, without learning anything new about how intelligence works or how to implement it ("Prosaic AGI" [1]). He didn't think this was possible, and believes there are deep conceptual issues we still need to get a handle on. He's also less impressed with deep learning than he was before he started working in it: in his experience it's a much more brittle technology than he had been expecting. Specifically, when trying to replicate results, he's often found that they depend on a bunch of parameters being in just the right range, and without that the systems don't perform nearly as well.

The bottom line, to him, was that since we are still many breakthroughs away from getting to AGI, we can't productively work on reducing superintelligence risk now.

He told me that he worries that the AI risk community is not solving real problems: they're making deductions and inferences that are self-consistent but not being tested or verified in the world. Since we can't tell if that's progress, it probably isn't. I asked if he was referring to MIRI's work here, and he said their work was an example of the kind of approach he's skeptical about, though he wasn't trying to single them out. [2]

https://www.jefftk.com/p/conversation-with-an-ai-researcher
Earlier this week I had a conversation with an AI researcher [1] at one of the main industry labs as part of my project of assessing superintelligence risk. Here's what I got from them:

They see progress in ML as almost entirely constrained by hardware and data, to the point that if today's hardware and data had existed in the mid 1950s researchers would have gotten to approximately our current state within ten to twenty years. They gave the example of backprop: we saw how to train multi-layer neural nets decades before we had the computing power to actually train these nets to do useful things.

Similarly, people talk about AlphaGo as a big jump, where Go went from being "ten years away" to "done" within a couple years, but they said it wasn't like that. If Go work had stayed in academia, with academia-level budgets and resources, it probably would have taken nearly that long. What changed was a company seeing promising results, realizing what could be done, and putting way more engineers and hardware on the project than anyone had previously done. AlphaGo couldn't have happened earlier because the hardware wasn't there yet, and was only able to be brought forward by massive application of resources.

https://www.jefftk.com/p/superintelligence-risk-project-conclusion
Summary: I'm not convinced that AI risk should be highly prioritized, but I'm also not convinced that it shouldn't. Highly qualified researchers in a position to have a good sense of the field have massively different views on core questions like how capable ML systems are now, how capable they will be soon, and how we can influence their development. I do think these questions are possible to get a better handle on, but I think this would require much deeper ML knowledge than I have.
ratty  core-rats  ai  risk  ai-control  prediction  expert  machine-learning  deep-learning  speedometer  links  research  research-program  frontier  multi  interview  deepgoog  games  hardware  performance  roots  impetus  chart  big-picture  state-of-art  reinforcement  futurism  🤖  🖥  expert-experience  singularity  miri-cfar  empirical  evidence-based  speculation  volo-avolo  clever-rats  acmtariat  robust  ideas  crux  atoms  detail-architecture  software  gradient-descent 
july 2017 by nhaliday
Decision Tree for Optimization Software
including convex programming

Mosek comes out pretty well, but it's not Pareto-optimal
benchmarks  optimization  software  libraries  comparison  data  performance  faq  frameworks  curvature  convexity-curvature 
november 2016 by nhaliday
