jm + self-driving-cars   4

[1801.02780] Rogue Signs: Deceiving Traffic Sign Recognition with Malicious Ads and Logos
Well, so much for that idea.
We propose a new real-world attack against the computer vision based systems of autonomous vehicles (AVs). Our novel Sign Embedding attack exploits the concept of adversarial examples to modify innocuous signs and advertisements in the environment such that they are classified as the adversary's desired traffic sign with high confidence. Our attack greatly expands the scope of the threat posed to AVs since adversaries are no longer restricted to just modifying existing traffic signs as in previous work. Our attack pipeline generates adversarial samples which are robust to the environmental conditions and noisy image transformations present in the physical world. We ensure this by including a variety of possible image transformations in the optimization problem used to generate adversarial samples. We verify the robustness of the adversarial samples by printing them out and carrying out drive-by tests simulating the conditions under which image capture would occur in a real-world scenario. We experimented with physical attack samples for different distances, lighting conditions, and camera angles. In addition, extensive evaluations were carried out in the virtual setting for a variety of image transformations. The adversarial samples generated using our method have adversarial success rates in excess of 95% in the physical as well as virtual settings.
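The core trick the abstract describes, averaging the attack objective over a pool of image transformations so the perturbation survives real-world capture, can be sketched with a toy linear "classifier". Everything here (the model, the transform pool, the target class) is an illustrative placeholder, not the paper's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a traffic-sign classifier: a linear model over a
# flattened 8x8 "image". Weights and classes are placeholders.
n_classes, dim = 4, 64
W = rng.normal(size=(n_classes, dim))

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def random_transform(x):
    # Placeholder for the paper's physical-world transforms
    # (perspective, lighting, sensor noise): brightness jitter + noise.
    a = rng.uniform(0.7, 1.3)
    n = rng.normal(0.0, 0.05, size=x.shape)
    return np.clip(a * x + n, 0.0, 1.0), a

def attack(x0, target, steps=400, lr=0.1, n_samples=8):
    """Descend cross-entropy toward `target`, averaging gradients over
    random transforms so the perturbation is robust to them."""
    x = x0.copy()
    y = np.zeros(n_classes)
    y[target] = 1.0
    for _ in range(steps):
        grad = np.zeros_like(x)
        for _ in range(n_samples):
            t, a = random_transform(x)
            p = softmax(W @ t)
            grad += a * (W.T @ (p - y))  # analytic gradient for the linear toy
        x = np.clip(x - lr * grad / n_samples, 0.0, 1.0)
    return x

x0 = rng.uniform(0.2, 0.8, size=dim)   # an "innocuous" starting image
adv = attack(x0, target=2)

# Measure how often the adversarial image classifies as the target
# class under fresh random transforms (the paper's robustness check).
hits = sum(np.argmax(W @ random_transform(adv)[0]) == 2 for _ in range(200))
print(hits / 200)
```

A real implementation would backpropagate through a deep network (e.g. with PyTorch autograd) and use the paper's perspective/lighting transforms; the averaging-over-transforms structure is the part this sketch illustrates.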
signs  road-safety  roads  traffic  self-driving-cars  cars  avs  security  machine-learning  computer-vision  ai 
9 weeks ago by jm
Anthony Levandowski has founded an AI religion
In September 2015, the multi-millionaire engineer at the heart of the patent and trade secrets lawsuit between Uber and Waymo, Google’s self-driving car company, founded a religious organization called Way of the Future. Its purpose, according to previously unreported state filings, is nothing less than to “develop and promote the realization of a Godhead based on Artificial Intelligence.”

this article is full of bananas.
google  crazy  uber  waymo  self-driving-cars  cars  religion  way-of-the-future  ai  god 
september 2017 by jm
Control theory meets machine learning
'DB: Is there a difference between how control theorists and machine learning researchers think about robustness and error?

BR: In machine learning, we almost always model our errors as being random rather than worst-case. In some sense, random errors are actually much more benign than worst-case errors. [...] In machine learning, by assuming average-case performance, rather than worst-case, we can design predictive algorithms by averaging out the errors over large data sets. We want to be robust to fluctuations in the data, but only on average. This is much less restrictive than the worst-case restrictions in controls.
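The average-case vs worst-case distinction Recht draws can be made concrete with a toy fit: choose a single constant to summarize noisy observations, once minimizing the mean error and once the worst error. The numbers are made up for illustration:

```python
import numpy as np

# Five noisy observations, one of them a near-worst-case outlier.
obs = np.array([0.9, 1.0, 1.1, 1.2, 4.0])

avg_fit = obs.mean()                          # minimizes mean squared error
minimax_fit = (obs.min() + obs.max()) / 2.0   # minimizes max absolute error

mean_err = lambda c: np.mean((obs - c) ** 2)
worst_err = lambda c: np.max(np.abs(obs - c))

# The average-case fit wins on average error...
assert mean_err(avg_fit) <= mean_err(minimax_fit)
# ...but the minimax fit wins on the single worst error.
assert worst_err(minimax_fit) <= worst_err(avg_fit)
print(avg_fit, minimax_fit)
```

The two objectives pick genuinely different answers (1.64 vs 2.45 here), which is the gap between the machine-learning and control-theoretic notions of robustness in miniature.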

DB: So control theory is model-based and concerned with worst case. Machine learning is data based and concerned with average case. Is there a middle ground?

BR: I think there is! And I think there's an exciting opportunity here to understand how to combine robust control and reinforcement learning. Being able to build systems from data alone simplifies the engineering process, and has had several recent promising results. Guaranteeing that these systems won't behave catastrophically will enable us to actually deploy machine learning systems in a variety of applications with major impacts on our lives. It might enable safe autonomous vehicles that can navigate complex terrains. Or could assist us in diagnostics and treatments in health care. There are a lot of exciting possibilities, and that's why I'm excited about how to find a bridge between these two viewpoints.'
control-theory  interviews  machine-learning  ml  worst-case  self-driving-cars  cs 
november 2015 by jm
Volvo says horrible 'self-parking car accident' happened because driver didn't have 'pedestrian detection'
Grim meathook future, courtesy of Volvo:
“The Volvo XC60 comes with City Safety as a standard feature however this does not include the Pedestrian detection functionality [...] The pedestrian detection feature [...] costs approximately $3,000.”

However, there's another lesson here, in crappy car UX and the risks thereof:
But even if it did have the feature, Larsson says the driver would have interfered with it by the way they were driving and “accelerating heavily towards the people in the video.” “The pedestrian detection would likely have been inactivated due to the driver inactivating it by intentionally and actively accelerating,” said Larsson. “Hence, the auto braking function is overrided by the driver and deactivated.” Meanwhile, the people in the video seem to ignore their instincts and trust that the car assumed to be endowed with artificial intelligence knows not to hurt them. It is a sign of our incredible faith in the power of technology, but also, it’s a reminder that companies making AI-assisted vehicles need to make safety features standard and communicate clearly when they aren’t.
self-driving-cars  cars  ai  pedestrian  computer-vision  volvo  fail  accidents  grim-meathook-future 
may 2015 by jm