nhaliday + deepgoog   45

Information Processing: US Needs a National AI Strategy: A Sputnik Moment?
FT podcasts on US-China competition and AI: http://infoproc.blogspot.com/2018/05/ft-podcasts-on-us-china-competition-and.html

A new recommended career path for effective altruists: China specialist: https://80000hours.org/articles/china-careers/
Our rough guess is that it would be useful for there to be at least ten people in the community with good knowledge in this area within the next few years.

By “good knowledge” we mean they’ve spent at least 3 years studying these topics and/or living in China.

We chose ten because that would be enough for several people to cover each of the major areas listed (e.g. 4 within AI, 2 within biorisk, 2 within foreign relations, 1 in another area).

AI Policy and Governance Internship: https://www.fhi.ox.ac.uk/ai-policy-governance-internship/

https://www.fhi.ox.ac.uk/deciphering-chinas-ai-dream/
https://www.fhi.ox.ac.uk/wp-content/uploads/Deciphering_Chinas_AI-Dream.pdf
Deciphering China’s AI Dream
The context, components, capabilities, and consequences of
China’s strategy to lead the world in AI

Europe’s AI delusion: https://www.politico.eu/article/opinion-europes-ai-delusion/
Brussels is failing to grasp threats and opportunities of artificial intelligence.
By BRUNO MAÇÃES

When the computer program AlphaGo beat the Chinese professional Go player Ke Jie in a three-game match, it didn’t take long for Beijing to realize the implications.

If algorithms can already surpass the abilities of a master Go player, it can’t be long before they will be similarly supreme in the activity to which the classic board game has always been compared: war.

As I’ve written before, the great conflict of our time is about who can control the next wave of technological development: the widespread application of artificial intelligence in the economic and military spheres.

...

If China’s ambitions sound plausible, that’s because the country’s achievements in deep learning are so impressive already. After Microsoft announced that its speech recognition software surpassed human-level language recognition in October 2016, Andrew Ng, then head of research at Baidu, tweeted: “We had surpassed human-level Chinese recognition in 2015; happy to see Microsoft also get there for English less than a year later.”

...

One obvious advantage China enjoys is access to almost unlimited pools of data. The machine-learning technologies boosting the current wave of AI expansion are only as good as the data they can use. That could be the number of people driving cars, photos labeled on the internet or voice samples for translation apps. With 700 or 800 million Chinese internet users and fewer data protection rules, China is as rich in data as the Gulf States are in oil.

How can Europe and the United States compete? They will have to be commensurately better at developing algorithms and computing power. Sadly, Europe is falling behind in these areas as well.

...

Chinese commentators have embraced the idea of a coming singularity: the moment when AI surpasses human ability. At that point a number of interesting things happen. First, future AI development will be conducted by AI itself, creating exponential feedback loops. Second, humans will become useless for waging war. At that point, the human mind will be unable to keep pace with robotized warfare. With advanced image recognition, data analytics, prediction systems, military brain science and unmanned systems, devastating wars might be waged and won in a matter of minutes.

...

The argument in the new strategy is fully defensive. It first considers how AI raises new threats and then goes on to discuss the opportunities. The EU and Chinese strategies follow opposite logics. Already on its second page, the text frets about the legal and ethical problems raised by AI and discusses the “legitimate concerns” the technology generates.

The EU’s strategy is organized around three concerns: the need to boost Europe’s AI capacity, ethical issues and social challenges. Unfortunately, even the first dimension quickly turns out to be about “European values” and the need to place “the human” at the center of AI — forgetting that the first word in AI is not “human” but “artificial.”

https://twitter.com/mr_scientism/status/983057591298351104
https://archive.is/m3Njh
US military: "LOL, China thinks it's going to be a major player in AI, but we've got all the top AI researchers. You guys will help us develop weapons, right?"

US AI researchers: "No."

US military: "But... maybe just a computer vision app."

US AI researchers: "NO."

https://www.theverge.com/2018/4/4/17196818/ai-boycot-killer-robots-kaist-university-hanwha
https://www.nytimes.com/2018/04/04/technology/google-letter-ceo-pentagon-project.html
https://twitter.com/mr_scientism/status/981685030417326080
https://archive.is/3wbHm
AI-risk was a mistake.
hsu  scitariat  commentary  video  presentation  comparison  usa  china  asia  sinosphere  frontier  technology  science  ai  speedometer  innovation  google  barons  deepgoog  stories  white-paper  strategy  migration  iran  human-capital  corporation  creative  alien-character  military  human-ml  nationalism-globalism  security  investing  government  games  deterrence  defense  nuclear  arms  competition  risk  ai-control  musk  optimism  multi  news  org:mag  europe  EU  80000-hours  effective-altruism  proposal  article  realness  offense-defense  war  biotech  altruism  language  foreign-lang  philosophy  the-great-west-whale  enhancement  foreign-policy  geopolitics  anglo  jobs  career  planning  hmm  travel  charity  tech  intel  media  teaching  tutoring  russia  india  miri-cfar  pdf  automation  class  labor  polisci  society  trust  n-factor  corruption  leviathan  ethics  authoritarianism  individualism-collectivism  revolution  economics  inequality  civic  law  regulation  data  scale  pro-rata  capital  zero-positive-sum  cooperate-defect  distribution  time-series  tre 
february 2018 by nhaliday
New Theory Cracks Open the Black Box of Deep Learning | Quanta Magazine
A new idea called the “information bottleneck” is helping to explain the puzzling success of today’s artificial-intelligence algorithms — and might also explain how human brains learn.

sounds like he's just talking about autoencoders?
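
For reference (my gloss, not from the article): Tishby's information-bottleneck objective trains a representation T of the input X to stay predictive of the label Y while discarding everything else about X, whereas a vanilla autoencoder only asks the code to reconstruct X:

    \min_{p(t|x)} I(X;T) - \beta I(T;Y)         (information bottleneck)
    \min_{f,g}  E_x || x - g(f(x)) ||^2         (vanilla autoencoder)

The compression term I(X;T) has no direct analogue in a plain autoencoder, so the two coincide only insofar as reconstructing X is a proxy for predicting Y.
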
news  org:mag  org:sci  popsci  announcement  research  deep-learning  machine-learning  acm  information-theory  bits  neuro  model-class  big-surf  frontier  nibble  hmm  signal-noise  deepgoog  expert  ideas  wild-ideas  summary  talks  video  israel  roots  physics  interdisciplinary  ai  intelligence  shannon  giants  arrows  preimage  lifts-projections  composition-decomposition  characterization  markov  gradient-descent  papers  liner-notes  experiment  hi-order-bits  generalization  expert-experience  explanans  org:inst  speedometer 
september 2017 by nhaliday
Superintelligence Risk Project Update II
https://www.jefftk.com/p/superintelligence-risk-project-update

https://www.jefftk.com/p/conversation-with-michael-littman
For example, I asked him what he thought of the idea that we could get AGI with current techniques, primarily deep neural nets and reinforcement learning, without learning anything new about how intelligence works or how to implement it ("Prosaic AGI" [1]). He didn't think this was possible, and believes there are deep conceptual issues we still need to get a handle on. He's also less impressed with deep learning than he was before he started working in it: in his experience it's a much more brittle technology than he had been expecting. Specifically, when trying to replicate results, he's often found that they depend on a bunch of parameters being in just the right range, and without that the systems don't perform nearly as well.

The bottom line, to him, was that since we are still many breakthroughs away from getting to AGI, we can't productively work on reducing superintelligence risk now.

He told me that he worries that the AI risk community is not solving real problems: they're making deductions and inferences that are self-consistent but not being tested or verified in the world. Since we can't tell if that's progress, it probably isn't. I asked if he was referring to MIRI's work here, and he said their work was an example of the kind of approach he's skeptical about, though he wasn't trying to single them out. [2]

https://www.jefftk.com/p/conversation-with-an-ai-researcher
Earlier this week I had a conversation with an AI researcher [1] at one of the main industry labs as part of my project of assessing superintelligence risk. Here's what I got from them:

They see progress in ML as almost entirely constrained by hardware and data, to the point that if today's hardware and data had existed in the mid-1950s, researchers would have gotten to approximately our current state within ten to twenty years. They gave the example of backprop: we saw how to train multi-layer neural nets decades before we had the computing power to actually train these nets to do useful things.
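
An aside (my sketch, not from the conversation): the algorithm itself is tiny; what was missing for decades was hardware and data at scale. A minimal hand-derived backprop loop for a two-layer net, on placeholder toy data:

    # Minimal two-layer net trained by hand-derived backprop (toy data).
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(64, 3))                   # placeholder inputs
    y = (X.sum(axis=1, keepdims=True) > 0) * 1.0   # placeholder targets
    W1 = rng.normal(scale=0.5, size=(3, 8))
    W2 = rng.normal(scale=0.5, size=(8, 1))

    for step in range(500):
        h = np.tanh(X @ W1)                  # forward: hidden layer
        p = 1 / (1 + np.exp(-(h @ W2)))      # forward: sigmoid output
        g_out = (p - y) / len(X)             # cross-entropy gradient at logits
        dW2 = h.T @ g_out                    # backprop into layer 2
        dh = (g_out @ W2.T) * (1 - h**2)     # backprop through tanh
        dW1 = X.T @ dh                       # backprop into layer 1
        W1 -= 0.5 * dW1                      # plain gradient-descent step
        W2 -= 0.5 * dW2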

Similarly, people talk about AlphaGo as a big jump, where Go went from being "ten years away" to "done" within a couple of years, but they said it wasn't like that. If Go work had stayed in academia, with academia-level budgets and resources, it probably would have taken nearly that long. What changed was a company seeing promising results, realizing what could be done, and putting way more engineers and hardware on the project than anyone had previously done. AlphaGo couldn't have happened earlier because the hardware wasn't there yet, and could only be brought forward by a massive application of resources.

https://www.jefftk.com/p/superintelligence-risk-project-conclusion
Summary: I'm not convinced that AI risk should be highly prioritized, but I'm also not convinced that it shouldn't be. Highly qualified researchers in a position to have a good sense of the field have massively different views on core questions like how capable ML systems are now, how capable they will be soon, and how we can influence their development. I do think it's possible to get a better handle on these questions, but doing so would require much deeper ML knowledge than I have.
ratty  core-rats  ai  risk  ai-control  prediction  expert  machine-learning  deep-learning  speedometer  links  research  research-program  frontier  multi  interview  deepgoog  games  hardware  performance  roots  impetus  chart  big-picture  state-of-art  reinforcement  futurism  🤖  🖥  expert-experience  singularity  miri-cfar  empirical  evidence-based  speculation  volo-avolo  clever-rats  acmtariat  robust  ideas  crux  atoms  detail-architecture  software  gradient-descent 
july 2017 by nhaliday
The Bridge: 数字化 – 网络化 – 智能化: China’s Quest for an AI Revolution in Warfare
The PLA’s organizational tendencies could render it more inclined to take full advantage of the disruptive potential of artificial intelligence, without constraints due to concerns about keeping humans ‘in the loop.’ In its command culture, the PLA has tended to consolidate and centralize authority at higher levels, remaining reluctant to delegate decision-making downward. The introduction of information technology has exacerbated the tendency of PLA commanders to micromanage subordinates through a practice known as “skip-echelon command” (越级指挥), which enables the circumvention of the command bureaucracy to influence units and weapons systems at even a tactical level.[xxviii] This practice can be symptomatic of a culture of distrust and bureaucratic immaturity. The PLA has confronted, and started to make progress in mitigating, its underlying human-resource challenges, recruiting increasingly educated officers and enlisted personnel while seeking to modernize and enhance political and ideological work aimed at ensuring loyalty to the Chinese Communist Party. However, the employment of artificial intelligence could appeal to the PLA as a way to circumvent and work around those persistent issues. In the long term, the intersection of the PLA’s focus on ‘scientific’ approaches to warfare with its preference to consolidate and centralize decision-making could cause the PLA’s leadership to rely more upon artificial intelligence than on human judgment.
news  org:mag  org:foreign  trends  china  asia  sinosphere  war  meta:war  military  defense  strategy  current-events  ai  automation  technology  foreign-policy  realpolitik  expansionism  innovation  individualism-collectivism  values  prediction  deepgoog  games  n-factor  human-ml  alien-character  risk  ai-control 
june 2017 by nhaliday
ExtraTricky - A Rant About AlphaGo Discussions
The most important idea to be able to analyze endgames is the idea of adding two games together. The sum of two games is another game where on your turn you pick one of the two games to play in. So you could imagine a game of "chess plus checkers" where each turn is either a turn on the chess board or a turn on the checkers board. Say your opponent makes a move on the chess board. Now you have a choice: do you want to respond to that move also on the chess board, or is it better to take a turn on the checkers board and accept the potential loss of allowing two consecutive chess moves?

If you were to actually add a game of chess and a game of checkers, you'd have to also determine a way to say who wins. I'm going to conveniently avoid talking about that for general games, because for Go positions the answer is simple: add up the points from each game. So you could imagine a game of "Go plus Go" where you're playing simultaneously on two boards, and on your turn you pick one of the boards to play on. At the end of the game, instead of counting territory from just one board, you count it from both.

As it turns out, when a Go game reaches the final stages, the board is typically partitioned into small areas that don't interact with each other. In these cases, even though these sections exist on the same board, you can think of them as entirely separate games added together. Once we have that, there's still the question: how do you determine which section to play in?
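
A minimal sketch of the "sum of games" idea (hypothetical interface, mine rather than ExtraTricky's): a move in the summed game is a choice of component plus a move within that component, and the Go-style score is just the sum over components.

    class SumGame:
        """Sum of independent games: each turn, move in exactly one component."""
        def __init__(self, components):
            # Each component is assumed to expose moves(), play(m), score().
            self.components = components

        def moves(self):
            # A legal move is (which component, move within that component).
            return [(i, m) for i, g in enumerate(self.components)
                           for m in g.moves()]

        def play(self, move):
            i, m = move
            self.components[i].play(m)

        def score(self):
            # Counting territory from both boards = summing the parts.
            return sum(g.score() for g in self.components)
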
extratricky  oly  games  deepgoog  thinking  things  analysis  nibble  org:bleg 
february 2017 by nhaliday
Performance Trends in AI | Otium
Deep learning has revolutionized the world of artificial intelligence. But how much does it improve performance? How have computers gotten better at different tasks over time, since the rise of deep learning?

In games, what the data seems to show is that exponential growth in data and computation power yields exponential improvements in raw performance. In other words, you get out what you put in. Deep learning matters, but only because it provides a way to turn Moore’s Law into corresponding performance improvements, for a wide class of problems. It’s not even clear it’s a discontinuous advance in performance over non-deep-learning systems.
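
One clarifying note (mine, not from the post): in games, performance is usually reported in Elo, a logarithmic scale of win odds, so "exponential in, exponential out" corresponds to a roughly linear Elo trend per doubling of compute:

    P(A beats B) = 1 / (1 + 10^{-(R_A - R_B)/400})
    odds(A over B) = P / (1 - P) = 10^{(R_A - R_B)/400}

If the rating gap grows linearly in log(compute), the win-odds ratio grows as a power of compute.
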

In image recognition, deep learning clearly is a discontinuous advance over other algorithms. But the returns to scale and the improvements over time seem to be flattening out as we approach or surpass human accuracy.

In speech recognition, deep learning is again a discontinuous advance. We are still far away from human accuracy, and in this regime, accuracy seems to be improving linearly over time.

In machine translation, neural nets seem to have made progress over conventional techniques, but it’s not yet clear if that’s a real phenomenon, or what the trends are.

In natural language processing, trends are positive, but deep learning doesn’t generally seem to do better than trendline.

...

The learned agent performs much better than the hard-coded agent, but moves more jerkily and “randomly” and doesn’t know the law of reflection. Similarly, the reports of AlphaGo producing “unusual” Go moves are consistent with an agent that can do pattern-recognition over a broader space than humans can, but which doesn’t find the “laws” or “regularities” that humans do.

Perhaps, contrary to the stereotype that contrasts “mechanical” with “outside-the-box” thinking, reinforcement learners can “think outside the box” but can’t find the box?

http://slatestarcodex.com/2017/08/02/where-the-falling-einstein-meets-the-rising-mouse/
ratty  core-rats  summary  prediction  trends  analysis  spock  ai  deep-learning  state-of-art  🤖  deepgoog  games  nlp  computer-vision  nibble  reinforcement  model-class  faq  org:bleg  shift  chart  technology  language  audio  accuracy  speaking  foreign-lang  definite-planning  china  asia  microsoft  google  ideas  article  speedometer  whiggish-hegelian  yvain  ssc  smoothness  data  hsu  scitariat  genetics  iq  enhancement  genetic-load  neuro  neuro-nitgrit  brain-scan  time-series  multiplicative  iteration-recursion  additive  multi 
january 2017 by nhaliday
