deepgoog   46

[1803.00085] Chinese Text in the Wild
We introduce Chinese Text in the Wild, a very large dataset of Chinese text in street view images.


We give baseline results using several state-of-the-art networks, including AlexNet, OverFeat, Google Inception and ResNet for character recognition, and YOLOv2 for character detection in images. Overall Google Inception has the best performance on recognition with 80.5% top-1 accuracy, while YOLOv2 achieves an mAP of 71.0% on detection. Dataset, source code and trained models will all be publicly available on the website.
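For reference, the top-1 accuracy metric quoted for the recognition baselines is straightforward to compute: the fraction of samples whose highest-scoring class matches the ground-truth label. A minimal sketch in NumPy (the logits and labels here are toy values for illustration, not the paper's data):

```python
import numpy as np

def top1_accuracy(logits, labels):
    """Fraction of samples whose highest-scoring class matches the label."""
    preds = np.argmax(logits, axis=1)  # predicted class per sample
    return float(np.mean(preds == labels))

# Toy example: 4 samples, 3 character classes (hypothetical data).
logits = np.array([[0.1, 0.7, 0.2],
                   [0.8, 0.1, 0.1],
                   [0.2, 0.2, 0.6],
                   [0.3, 0.5, 0.2]])
labels = np.array([1, 0, 2, 0])
print(top1_accuracy(logits, labels))  # 0.75 (3 of 4 correct)
```

The detection metric (mAP) is more involved — it averages precision over recall thresholds and classes — but the recognition comparison in the abstract reduces to exactly this computation.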
nibble  pdf  papers  preprint  machine-learning  deep-learning  deepgoog  state-of-art  china  asia  writing  language  dataset  error  accuracy  computer-vision  pic  ocr 
22 days ago by nhaliday
Information Processing: Moore's Law and AI
Hint to technocratic planners: invest more in physicists, chemists, and materials scientists. The recent explosion in value from technology has been driven by physical science -- software gets way too much credit. From the former we got a factor of a million or more in compute power, data storage, and bandwidth. From the latter, we gained (perhaps) an order of magnitude or two in effectiveness: how much better are current OSes and programming languages than Unix and C, both of which are ~50 years old now?


Of relevance to this discussion: a big chunk of AlphaGo's performance improvement over other Go programs is due to raw compute power (link via Jess Riedel). The vertical axis is Elo rating. You can see that without multi-GPU compute, AlphaGo has relatively pedestrian strength.
hsu  scitariat  comparison  software  hardware  performance  sv  tech  trends  ai  machine-learning  deep-learning  deepgoog  google  roots  impact  hard-tech  multiplicative  the-world-is-just-atoms  technology  trivia  cocktail  big-picture  hi-order-bits 
4 weeks ago by nhaliday
Information Processing: US Needs a National AI Strategy: A Sputnik Moment?
FT podcasts on US-China competition and AI:

A new recommended career path for effective altruists: China specialist:
Our rough guess is that it would be useful for there to be at least ten people in the community with good knowledge in this area within the next few years.

By “good knowledge” we mean they’ve spent at least 3 years studying these topics and/or living in China.

We chose ten because that would be enough for several people to cover each of the major areas listed (e.g. 4 within AI, 2 within biorisk, 2 within foreign relations, 1 in another area).

AI Policy and Governance Internship:
Deciphering China’s AI Dream
The context, components, capabilities, and consequences of China’s strategy to lead the world in AI

Europe’s AI delusion:
Brussels is failing to grasp threats and opportunities of artificial intelligence.

When the computer program AlphaGo beat the Chinese professional Go player Ke Jie in a three-part match, it didn’t take long for Beijing to realize the implications.

If algorithms can already surpass the abilities of a master Go player, it can’t be long before they will be similarly supreme in the activity to which the classic board game has always been compared: war.

As I’ve written before, the great conflict of our time is about who can control the next wave of technological development: the widespread application of artificial intelligence in the economic and military spheres.


If China’s ambitions sound plausible, that’s because the country’s achievements in deep learning are so impressive already. After Microsoft announced that its speech recognition software surpassed human-level language recognition in October 2016, Andrew Ng, then head of research at Baidu, tweeted: “We had surpassed human-level Chinese recognition in 2015; happy to see Microsoft also get there for English less than a year later.”


One obvious advantage China enjoys is access to almost unlimited pools of data. The machine-learning technologies boosting the current wave of AI expansion are only as good as the data they can use. That could be the number of people driving cars, photos labeled on the internet or voice samples for translation apps. With 700 or 800 million Chinese internet users and fewer data protection rules, China is as rich in data as the Gulf States are in oil.

How can Europe and the United States compete? They will have to be commensurately better in developing algorithms and computer power. Sadly, Europe is falling behind in these areas as well.


Chinese commentators have embraced the idea of a coming singularity: the moment when AI surpasses human ability. At that point a number of interesting things happen. First, future AI development will be conducted by AI itself, creating exponential feedback loops. Second, humans will become useless for waging war. At that point, the human mind will be unable to keep pace with robotized warfare. With advanced image recognition, data analytics, prediction systems, military brain science and unmanned systems, devastating wars might be waged and won in a matter of minutes.


The argument in the new strategy is fully defensive. It first considers how AI raises new threats and then goes on to discuss the opportunities. The EU and Chinese strategies follow opposite logics. Already on its second page, the text frets about the legal and ethical problems raised by AI and discusses the “legitimate concerns” the technology generates.

The EU’s strategy is organized around three concerns: the need to boost Europe’s AI capacity, ethical issues and social challenges. Unfortunately, even the first dimension quickly turns out to be about “European values” and the need to place “the human” at the center of AI — forgetting that the first word in AI is not “human” but “artificial.”
US military: "LOL, China thinks it's going to be a major player in AI, but we've got all the top AI researchers. You guys will help us develop weapons, right?"

US AI researchers: "No."

US military: "But... maybe just a computer vision app."

US AI researchers: "NO."
AI-risk was a mistake.
hsu  scitariat  commentary  video  presentation  comparison  usa  china  asia  sinosphere  frontier  technology  science  ai  speedometer  innovation  google  barons  deepgoog  stories  white-paper  strategy  migration  iran  human-capital  corporation  creative  alien-character  military  human-ml  nationalism-globalism  security  investing  government  games  deterrence  defense  nuclear  arms  competition  risk  ai-control  musk  optimism  multi  news  org:mag  europe  EU  80000-hours  effective-altruism  proposal  article  realness  offense-defense  war  biotech  altruism  language  foreign-lang  philosophy  the-great-west-whale  enhancement  foreign-policy  geopolitics  anglo  jobs  career  planning  hmm  travel  charity  tech  intel  media  teaching  tutoring  russia  india  miri-cfar  pdf  automation  class  labor  polisci  society  trust  n-factor  corruption  leviathan  ethics  authoritarianism  individualism-collectivism  revolution  economics  inequality  civic  law  regulation  data  scale  pro-rata  capital  zero-positive-sum  cooperate-defect  distribution  time-series  tre 
february 2018 by nhaliday
New Theory Cracks Open the Black Box of Deep Learning | Quanta Magazine
A new idea called the “information bottleneck” is helping to explain the puzzling success of today’s artificial-intelligence algorithms — and might also explain how human brains learn.

sounds like he's just talking about autoencoders?
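For comparison with the autoencoder reading: an autoencoder squeezes data through a low-dimensional bottleneck and learns to reconstruct it, discarding whatever doesn't fit — superficially similar to the compression story in the information-bottleneck account. A minimal linear version in NumPy (the data, dimensions, and hyperparameters are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 samples in 10-D that actually live on a 2-D subspace.
basis = rng.normal(size=(2, 10))
X = rng.normal(size=(200, 2)) @ basis

# Linear autoencoder with a 2-unit bottleneck, trained by plain gradient descent
# on mean squared reconstruction error.
W_enc = rng.normal(scale=0.1, size=(10, 2))
W_dec = rng.normal(scale=0.1, size=(2, 10))
lr = 0.01
for _ in range(2000):
    Z = X @ W_enc        # compress through the bottleneck
    X_hat = Z @ W_dec    # reconstruct
    err = X_hat - X
    g_dec = Z.T @ err / len(X)             # gradient w.r.t. decoder weights
    g_enc = X.T @ (err @ W_dec.T) / len(X)  # gradient w.r.t. encoder weights
    W_dec -= lr * g_dec
    W_enc -= lr * g_enc

mse = float(np.mean((X @ W_enc @ W_dec - X) ** 2))
print(mse)  # should end up well below the data variance (~2 here)
```

The bottleneck here is architectural (2 units), whereas the information-bottleneck claim is about an implicit compression phase during training — which is presumably why the "just autoencoders?" reading undersells the theory, if it holds up.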
news  org:mag  org:sci  popsci  announcement  research  deep-learning  machine-learning  acm  information-theory  bits  neuro  model-class  big-surf  frontier  nibble  hmm  signal-noise  deepgoog  expert  ideas  wild-ideas  summary  talks  video  israel  roots  physics  interdisciplinary  ai  intelligence  shannon  giants  arrows  preimage  lifts-projections  composition-decomposition  characterization  markov  gradient-descent  papers  liner-notes  experiment  hi-order-bits  generalization  expert-experience  explanans  org:inst  speedometer 
september 2017 by nhaliday
Superintelligence Risk Project Update II
For example, I asked him what he thought of the idea that we could get AGI with current techniques, primarily deep neural nets and reinforcement learning, without learning anything new about how intelligence works or how to implement it ("Prosaic AGI" [1]). He didn't think this was possible, and believes there are deep conceptual issues we still need to get a handle on. He's also less impressed with deep learning than he was before he started working in it: in his experience it's a much more brittle technology than he had been expecting. Specifically, when trying to replicate results, he's often found that they depend on a bunch of parameters being in just the right range, and without that the systems don't perform nearly as well.

The bottom line, to him, was that since we are still many breakthroughs away from getting to AGI, we can't productively work on reducing superintelligence risk now.

He told me that he worries that the AI risk community is not solving real problems: they're making deductions and inferences that are self-consistent but not being tested or verified in the world. Since we can't tell if that's progress, it probably isn't. I asked if he was referring to MIRI's work here, and he said their work was an example of the kind of approach he's skeptical about, though he wasn't trying to single them out. [2]
Earlier this week I had a conversation with an AI researcher [1] at one of the main industry labs as part of my project of assessing superintelligence risk. Here's what I got from them:

They see progress in ML as almost entirely constrained by hardware and data, to the point that if today's hardware and data had existed in the mid 1950s researchers would have gotten to approximately our current state within ten to twenty years. They gave the example of backprop: we saw how to train multi-layer neural nets decades before we had the computing power to actually train these nets to do useful things.
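The backprop point is easy to make concrete: the algorithm itself fits in a few dozen lines, and the classic demonstration is a two-layer net learning XOR, which a single-layer perceptron cannot represent. A toy NumPy sketch (hyperparameters and network size are illustrative, not from the conversation):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# XOR: not linearly separable, so a single-layer perceptron fails,
# but a two-layer net trained with backprop handles it.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

rng = np.random.default_rng(1)
W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)

lr = 1.0
for _ in range(10_000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: chain rule from squared error back through each layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

mse = float(np.mean((out - y) ** 2))
preds = (out > 0.5).astype(int).ravel()
print(preds, mse)  # typically [0 1 1 0] with small error once training converges
```

All of this was understood decades ago; what was missing, on the researcher's account, was the hardware to scale it to nets large enough to do useful things.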

Similarly, people talk about AlphaGo as a big jump, where Go went from being "ten years away" to "done" within a couple years, but they said it wasn't like that. If Go work had stayed in academia, with academia-level budgets and resources, it probably would have taken nearly that long. What changed was a company seeing promising results, realizing what could be done, and putting way more engineers and hardware on the project than anyone had previously done. AlphaGo couldn't have happened earlier because the hardware wasn't there yet, and could only be brought forward by a massive application of resources.
Summary: I'm not convinced that AI risk should be highly prioritized, but I'm also not convinced that it shouldn't be. Highly qualified researchers in a position to have a good sense of the field have massively different views on core questions like how capable ML systems are now, how capable they will be soon, and how we can influence their development. I do think it's possible to get a better handle on these questions, but that would require much deeper ML knowledge than I have.
ratty  core-rats  ai  risk  ai-control  prediction  expert  machine-learning  deep-learning  speedometer  links  research  research-program  frontier  multi  interview  deepgoog  games  hardware  performance  roots  impetus  chart  big-picture  state-of-art  reinforcement  futurism  🤖  🖥  expert-experience  singularity  miri-cfar  empirical  evidence-based  speculation  volo-avolo  clever-rats  acmtariat  robust  ideas  crux  atoms  detail-architecture  software  gradient-descent 
july 2017 by nhaliday
The Bridge: 数字化 – 网络化 – 智能化: China’s Quest for an AI Revolution in Warfare
The PLA’s organizational tendencies could render it more inclined to take full advantage of the disruptive potential of artificial intelligence, without constraints due to concerns about keeping humans ‘in the loop.’ In its command culture, the PLA has tended to consolidate and centralize authorities at higher levels, remaining reluctant to delegate decision-making downward. The introduction of information technology has exacerbated the tendency of PLA commanders to micromanage subordinates through a practice known as “skip-echelon command” (越级指挥) that enables the circumvention of command bureaucracy to influence units and weapons systems at even a tactical level.[xxviii] This practice can be symptomatic of a culture of distrust and bureaucratic immaturity. The PLA has confronted and started to progress in mitigating its underlying human resource challenges, recruiting increasingly educated officers and enlisted personnel, while seeking to modernize and enhance political and ideological work aimed to ensure loyalty to the Chinese Communist Party. However, the employment of artificial intelligence could appeal to the PLA as a way to circumvent and work around those persistent issues. In the long term, the intersection of the PLA’s focus on ‘scientific’ approaches to warfare with the preference to consolidate and centralize decision-making could cause the PLA’s leadership to rely more upon artificial intelligence, rather than human judgment.
news  org:mag  org:foreign  trends  china  asia  sinosphere  war  meta:war  military  defense  strategy  current-events  ai  automation  technology  foreign-policy  realpolitik  expansionism  innovation  individualism-collectivism  values  prediction  deepgoog  games  n-factor  human-ml  alien-character  risk  ai-control 
june 2017 by nhaliday
