jm + self-driving   5

NTSB: Autopilot steered Tesla car toward traffic barrier before deadly crash
This is the Tesla self-crashing car in action. Remember how it works. It visually recognizes the rear ends of cars using a black-and-white camera and Mobileye vision software (at least in early models). It also recognizes lane lines and tries to center between them. It has a low-resolution radar system which ranges moving metallic objects like cars but ignores stationary obstacles. And there are some side-mounted sonars for detecting vehicles a few meters away on the side, which are not relevant here.

The system performed as designed. The white lines of the gore (the painted wedge) leading to this very shallow off ramp become far enough apart that they look like a lane.[1] If the vehicle ever got into the gore area, it would track as if in a lane, right into the crash barrier. It won't stop for the crash barrier, because it doesn't detect stationary obstacles. Here, it sped up, because there was no longer a car ahead. Then it lane-followed right into the crash barrier.

That's the fundamental problem here. These vehicles will run into stationary obstacles at full speed with no warning or emergency braking at all. That is by design. This is not an implementation bug or sensor failure. It follows directly from the decision to ship "Autopilot" with that sensor suite and set of capabilities.
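The design trade-off described above can be sketched in a few lines. This is a toy illustration, not Tesla's actual code: automotive radars commonly reject returns whose ground speed is near zero (bridges, signs, barriers) to avoid constant false braking, which is exactly why a crash barrier never becomes a tracked obstacle.

```python
# Toy sketch (assumed logic, not any vendor's implementation) of why a
# radar that filters stationary returns never "sees" a crash barrier.

EGO_SPEED = 30.0  # m/s, our own vehicle's speed

def ground_speed(rel_speed):
    """Convert the radar's relative (closing) speed to the target's ground speed."""
    return EGO_SPEED + rel_speed

def is_tracked(radar_return, min_ground_speed=2.0):
    # Stationary clutter is rejected so it doesn't trigger false braking
    # on every overhead sign and parked object -- the trade-off that also
    # discards crash barriers.
    return abs(ground_speed(radar_return["rel_speed"])) > min_ground_speed

returns = [
    {"name": "lead car",      "rel_speed": -5.0},   # closing at 5 m/s: moving
    {"name": "crash barrier", "rel_speed": -30.0},  # closing at ego speed: stationary
]
tracked = [r["name"] for r in returns if is_tracked(r)]
# the barrier is filtered out; only the lead car remains
```

Once the lead car leaves the tracked set, nothing ahead registers at all, and the lane-follower is free to accelerate.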
tesla  fail  safety  self-driving  autopilot  cars  driving  sonar  radar  sensors  ai 
6 weeks ago by jm
These stickers make AI hallucinate things that aren’t there - The Verge
The sticker “allows attackers to create a physical-world attack without prior knowledge of the lighting conditions, camera angle, type of classifier being attacked, or even the other items within the scene.” So, after such an image is generated, it could be “distributed across the Internet for other attackers to print out and use.”

This is why many AI researchers are worried about how these methods might be used to attack systems like self-driving cars. Imagine a little patch you can stick onto the side of the motorway that makes your sedan think it sees a stop sign, or a sticker that stops you from being picked up by AI surveillance systems. “Even if humans are able to notice these patches, they may not understand the intent [and] instead view it as a form of art,” the researchers write.
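The core trick behind these attacks is easy to show on a toy model. This is a hypothetical illustration, not the paper's method: real adversarial patches are optimised over many images, angles, and lighting conditions, but the underlying move is the same gradient ascent toward the attacker's chosen class.

```python
import numpy as np

# Toy adversarial example against a fixed linear "classifier" (all names
# and sizes here are illustrative assumptions).
rng = np.random.default_rng(0)
W = rng.normal(size=(2, 16))   # 2 classes, 16 input features
x = rng.normal(size=16)        # a clean input
target = 1                     # the class the attacker wants to force

def margin(v):
    """How strongly the classifier prefers the target class for input v."""
    logits = W @ v
    return logits[target] - logits[1 - target]

# For a linear model the gradient of the margin w.r.t. the input is constant:
grad = W[target] - W[1 - target]

x_adv = x.copy()
while margin(x_adv) <= 0:      # each step strictly increases the margin
    x_adv += 0.1 * grad

# x_adv is now classified as `target`, however it started out
```

Against a deep network the gradient is recomputed each step rather than constant, but the attacker's loop looks much the same.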
self-driving  cars  ai  adversarial-classification  security  stickers  hacks  vision  surveillance  classification 
january 2018 by jm
V2V and the challenge of cooperating technology
A great deal of effort and attention has gone into a mobile data technology that you may not be aware of. This is "Vehicle to Vehicle" (V2V) communication, designed so that cars can send data to other cars. There is special spectrum allocated at 5.9 GHz, and a protocol named DSRC, derived from wifi, exists for communications from car to car and also between cars and roadside transmitters in the infrastructure, known as V2I.
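To make the protocol concrete: the message DSRC stacks broadcast most often is a periodic "basic safety message" carrying position and motion. The sketch below is a simplified stand-in, assuming plain-text encoding and invented field names; the real format is defined in ASN.1 by SAE J2735.

```python
from dataclasses import dataclass

@dataclass
class BasicSafetyMessage:
    # Simplified, assumed fields -- the real SAE J2735 BSM has many more.
    vehicle_id: int      # temporary rotating ID (for privacy)
    latitude: float      # degrees
    longitude: float     # degrees
    speed: float         # m/s
    heading: float       # degrees clockwise from north

    def encode(self) -> bytes:
        # Real DSRC uses compact ASN.1 UPER encoding; CSV stands in here.
        return (f"{self.vehicle_id},{self.latitude:.6f},"
                f"{self.longitude:.6f},{self.speed:.1f},"
                f"{self.heading:.1f}").encode()

# Each equipped car would broadcast something like this ~10 times a second:
msg = BasicSafetyMessage(0x1A2B, 53.3498, -6.2603, 13.9, 92.0)
wire = msg.encode()
```

The security and privacy questions in the article follow directly from this design: every car constantly broadcasting its position to anyone listening.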

This effort has been going on for some time, but those involved have had trouble finding a compelling application which users would pay for. Unable to find one, advocates hope that various national governments will mandate V2V radios in cars in the coming years for safety reasons. In December 2016, the U.S. Dept. of Transportation proposed just such a mandate. [....] "Connected Autonomous Vehicles -- Pick 2."
cars  self-driving  autonomous-vehicles  v2v  wireless  connectivity  networking  security 
may 2017 by jm
Toyota's Gill Pratt: "No one is close to achieving true level 5 [self-driving cars]"
The most important thing to understand is that not all miles are the same. Most miles that we drive are very easy, and we can drive them while daydreaming or thinking about something else or having a conversation. But some miles are really, really hard, and so it’s those difficult miles that we should be looking at: How often do those show up, and can you ensure on a given route that the car will actually be able to handle the whole route without any problem at all? Level 5 autonomy says all miles will be handled by the car in an autonomous mode without any need for human intervention at all, ever.

So if we’re talking to a company that says, “We can do full autonomy in this pre-mapped area and we’ve mapped almost every area,” that’s not Level 5. That’s Level 4. And I wouldn’t even stop there: I would ask, “Is that at all times of the day, is it in all weather, is it in all traffic?” And then what you’ll usually find is a little bit of hedging on that too. The trouble with this Level 4 thing, or the “full autonomy” phrase, is that it covers a very wide spectrum of possible competencies. It covers “my car can run fully autonomously in a dedicated lane that has no other traffic,” which isn’t very different from a train on a set of rails, to “I can drive in Rome in the middle of the worst traffic they ever have there, while it’s raining,” which is quite hard.

Because the “full autonomy” phrase can mean such a wide range of things, you really have to ask the question, “What do you really mean, what are the actual circumstances?” And usually you’ll find that it’s geofenced for area, it may be restricted by how much traffic it can handle, for the weather, the time of day, things like that. So that’s the elaboration of why we’re not even close.
autonomy  driving  self-driving  cars  ai  robots  toyota  weather 
january 2017 by jm
Self-driving cars: overlooking data privacy is a car crash waiting to happen
Interesting point -- self-driving cars are likely to be awash in telemetry data, "phoned home"
self-driving  cars  vehicles  law  data  privacy  data-privacy  surveillance 
july 2016 by jm