papers   24176

[1804.04268] Incomplete Contracting and AI Alignment
We suggest that the analysis of incomplete contracting developed by law and economics researchers can provide a useful framework for understanding the AI alignment problem and help to generate a systematic approach to finding solutions. We first provide an overview of the incomplete contracting literature and explore parallels between this work and the problem of AI alignment. As we emphasize, misalignment between principal and agent is a core focus of economic analysis. We highlight some technical results from the economics literature on incomplete contracts that may provide insights for AI alignment researchers. Our core contribution, however, is to bring to bear an insight that economists have been urged to absorb from legal scholars and other behavioral scientists: the fact that human contracting is supported by substantial amounts of external structure, such as generally available institutions (culture, law) that can supply implied terms to fill the gaps in incomplete contracts. We propose a research agenda for AI alignment work that focuses on the problem of how to build AI that can replicate the human cognitive processes that connect individual incomplete contracts with this supporting external structure.
nibble  preprint  org:mat  papers  ai  ai-control  alignment  coordination  contracts  law  economics  interests  culture  institutions  number  context  behavioral-econ  composition-decomposition  rent-seeking  whole-partial-many 
3 days ago by nhaliday
Theory of Self-Reproducing Automata - John von Neumann
Fourth Lecture: THE ROLE OF HIGH AND OF EXTREMELY HIGH COMPLICATION

Comparisons between computing machines and the nervous system. Estimates of size for computing machines, present and near future.

Estimates of size for the human central nervous system. Excursus about the “mixed” character of living organisms. Analog and digital elements. Observations about the “mixed” character of all componentry, artificial as well as natural. Interpretation of the position to be taken with respect to these.

Evaluation of the discrepancy in size between artificial and natural automata. Interpretation of this discrepancy in terms of physical factors. Nature of the materials used.

The probability of the presence of other intellectual factors. The role of complication and the theoretical penetration that it requires.

Questions of reliability and errors reconsidered. Probability of individual errors and length of procedure. Typical lengths of procedure for computing machines and for living organisms--that is, for artificial and for natural automata. Upper limits on acceptable probability of error in individual operations. Compensation by checking and self-correcting features.

Differences of principle in the way in which errors are dealt with in artificial and in natural automata. The “single error” principle in artificial automata. Crudeness of our approach in this case, due to the lack of adequate theory. More sophisticated treatment of this problem in natural automata: The role of the autonomy of parts. Connections between this autonomy and evolution.

- 10^10 neurons in the brain vs 10^4 vacuum tubes in the largest computer of the time
- machines are faster: ~5 ms from neuron potential to neuron potential vs ~10^-3 ms for a vacuum tube (ratios worked out in the sketch below)
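
A quick pass over those figures (a sketch only; the variable names and printed ratios are mine, the order-of-magnitude inputs are the ones quoted in the notes above):

    # Fermi arithmetic on von Neumann's order-of-magnitude figures (circa late 1940s).
    neurons = 1e10              # components in the human central nervous system
    vacuum_tubes = 1e4          # components in the largest machine of the era
    neuron_cycle_ms = 5         # neuron potential to neuron potential
    tube_cycle_ms = 1e-3        # vacuum-tube switching time

    size_ratio = neurons / vacuum_tubes            # ~1e6: nature ahead on component count
    speed_ratio = neuron_cycle_ms / tube_cycle_ms  # ~5e3: the machine ahead on speed

    print(f"component-count gap (natural/artificial): {size_ratio:.0e}")
    print(f"speed gap (artificial/natural):           {speed_ratio:.0e}")

So natural automata win on component count by roughly six orders of magnitude while artificial ones win on per-operation speed by three to four, which is the size/speed discrepancy the lecture then tries to explain in terms of materials and error handling.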

https://en.wikipedia.org/wiki/John_von_Neumann#Computing
pdf  article  papers  essay  nibble  math  cs  computation  bio  neuro  neuro-nitgrit  scale  magnitude  comparison  acm  von-neumann  giants  thermo  phys-energy  speed  performance  time  density  frequency  hardware  ems  efficiency  dirty-hands  street-fighting  fermi  estimate  retention  physics  interdisciplinary  multi  wiki  links  people  🔬  atoms  automata  duplication  iteration-recursion  turing  complexity  measure  nature  technology  complex-systems  bits  information-theory  circuits  robust  structure  composition-decomposition  evolution  mutation  axioms  analogy  thinking  input-output  hi-order-bits  coding-theory  flexibility  rigidity 
3 days ago by nhaliday
[1804.01619] Stability and Convergence Trade-off of Iterative Optimization Algorithms
The overall performance or expected excess risk of an iterative machine learning algorithm can be decomposed into training error and generalization error. While the former is controlled by its convergence analysis, the latter can be tightly handled by algorithmic stability. The machine learning community has a rich history investigating convergence and stability separately. However, the question about the trade-off between these two quantities remains open. In this paper, we show that for any iterative algorithm at any iteration, the overall performance is lower bounded by the minimax statistical error over an appropriately chosen loss function class. This implies an important trade-off between convergence and stability of the algorithm -- a faster converging algorithm has to be less stable, and vice versa. As a direct consequence of this fundamental tradeoff, new convergence lower bounds can be derived for classes of algorithms constrained with different stability bounds. In particular, when the loss function is convex (or strongly convex) and smooth, we discuss the stability upper bounds of gradient descent (GD) and stochastic gradient descent and their variants with decreasing step sizes. For Nesterov's accelerated gradient descent (NAG) and heavy ball method (HB), we provide stability upper bounds for the quadratic loss function. Applying existing stability upper bounds for the gradient methods in our trade-off framework, we obtain lower bounds matching the well-established convergence upper bounds up to constants for these algorithms and conjecture similar lower bounds for NAG and HB. Finally, we numerically demonstrate the tightness of our stability bounds in terms of exponents in the rate and also illustrate via a simulated logistic regression problem that our stability bounds reflect the generalization errors better than the simple uniform convergence bounds for GD and NAG.
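The decomposition the abstract starts from, written out (a sketch in notation of my own choosing: R is the population risk, R_S the empirical risk on the training sample S, w_T the iterate after T steps, w* a population-risk minimizer; since w* does not depend on S, E[R_S(w*)] = R(w*)):

    \mathbb{E}[R(w_T)] - R(w^*)
      = \underbrace{\mathbb{E}\left[R(w_T) - R_S(w_T)\right]}_{\text{generalization error (controlled via stability)}}
      + \underbrace{\mathbb{E}\left[R_S(w_T) - R_S(w^*)\right]}_{\text{training error (controlled via convergence)}}

The paper's claim is that these two terms cannot both be driven down arbitrarily fast: for any iterative algorithm at any iteration their sum is lower bounded by a minimax statistical error, so a tighter convergence guarantee forces a weaker stability guarantee and vice versa.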
papers  to-read  machine-learning  optimization  stability  heard-the-talk 
4 days ago by mraginsky
