jm + vision   6

These stickers make AI hallucinate things that aren’t there - The Verge
The sticker “allows attackers to create a physical-world attack without prior knowledge of the lighting conditions, camera angle, type of classifier being attacked, or even the other items within the scene.” So, after such an image is generated, it could be “distributed across the Internet for other attackers to print out and use.”

This is why many AI researchers are worried about how these methods might be used to attack systems like self-driving cars. Imagine a little patch you can stick onto the side of the motorway that makes your sedan think it sees a stop sign, or a sticker that stops you from being identified by AI surveillance systems. “Even if humans are able to notice these patches, they may not understand the intent [and] instead view it as a form of art,” the researchers write.
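The core idea behind such patches is gradient ascent on the classifier's score for an attacker-chosen class, with the pixels clipped to the valid image range. A minimal toy sketch, using a made-up linear "classifier" in place of the deep networks the actual research attacks (all numbers and names here are illustrative assumptions, not the paper's method):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for an image classifier: a fixed linear scoring layer.
# Real patch attacks optimise through a deep network; the loop has the
# same shape, with backprop supplying the gradient.
n_pixels, n_classes = 64, 3
W = rng.normal(size=(n_classes, n_pixels))

def scores(x):
    return W @ x

image = rng.uniform(0.0, 1.0, n_pixels)
target = 2  # the class the attacker wants the model to predict

x = image.copy()
for _ in range(100):
    # For a linear model the gradient of the target score w.r.t. the
    # input is just W[target]; step toward it, clip to valid pixels.
    x = np.clip(x + 0.05 * W[target], 0.0, 1.0)

# The target-class score has been driven up relative to the clean image.
print(scores(image)[target], scores(x)[target])
```

Because the perturbation is optimised over the model rather than a specific photo, the same printed sticker can keep working across lighting, angle, and scene, which is what makes the attack distributable.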
self-driving  cars  ai  adversarial-classification  security  stickers  hacks  vision  surveillance  classification 
17 days ago by jm
Great comment on the "realism" of space photos
In short, the answer to the question “is this what it would look like if I was there?” is almost always no, but that is true of every photograph. The photos taken from space cameras are no more fake or false than the photos taken from any camera. Like all photos they are a visual interpretation using color to display data. Most space photos have information online about how they were created, what filters were used, and all kinds of interesting details about processing. The discussion about whether a space photo is real or fake is meaningless. There's no distinction between photoshopped and not. It's a nuanced view but the nature of the situation demands it.
photography  photos  space  cassini  probes  cameras  light  wavelengths  science  vision  realism  real 
november 2016 by jm
"Meta-Perceptual Helmets For The Dead Zoo"
with Neil McKenzie, Nov 9-16 2014, in the Natural History Museum in Dublin:

'These six helmets/viewing devices start off by exploring physical conditions of viewing: if we have two eyes, then why is our vision so limited? Why do we have so little perception of depth? Why don’t our two eyes offer us two different, complementary views of the world around us? Why can’t they extend from our body so we can see over or around things? Why don’t they allow us to look behind and in front at the same time, or sideways in both directions? Why can’t our two eyes simultaneously focus on two different tasks?

Looking through Michael Land’s defining work Animal Eyes, we see that nature has indeed explored all of these possibilities: a Hammerhead Shark has hyper-stereo vision; a horse sees 350° around itself; a chameleon has separately rotatable eyes…

The series of Meta-Perceptual Helmets do indeed explore these zoological typologies: proposing to humans the hyper-stereo vision of the hammerhead shark; or the wide peripheral vision of the horse; or the backward/forward vision of the chameleon… but they also take us into the unnatural world of mythology and literature: the Cheshire Cat Helmet is so called because of the strange lingering effect of dominating visual information such as a smile or the eyes; the Cyclops allows one large central eye to take in the world around while a second tiny hidden eye focuses on a close up task (why has the creature never evolved that can focus on denitting without constantly having to glance around?).'

(via Emma)
perception  helmets  dublin  ireland  museums  dead-zoo  sharks  eyes  vision  art 
october 2014 by jm
What an RAF pilot can teach us about being safe on the road
Good article on road safety and visual perception, for both cyclists and drivers.
vision  driving  cycling  tips  cognitive-psychology  safety  hi-viz 
december 2013 by jm
#AltDevBlogADay » Latency Mitigation Strategies
John Carmack on the low-latency coding techniques used to support head mounted display devices.

Virtual reality (VR) is one of the most demanding human-in-the-loop applications from a latency standpoint. The latency between the physical movement of a user’s head and updated photons from a head mounted display reaching their eyes is one of the most critical factors in providing a high quality experience.

Human sensory systems can detect very small relative delays in parts of the visual or, especially, audio fields, but when absolute delays are below approximately 20 milliseconds they are generally imperceptible. Interactive 3D systems today typically have latencies that are several times that figure, but alternate configurations of the same hardware components can allow that target to be reached.

A discussion of the sources of latency throughout a system follows, along with techniques for reducing the latency in the processing done on the host system.
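The "several times that figure" claim is easy to see with a back-of-the-envelope budget: at 60 Hz, simulation, rendering, and display scan-out can each cost up to a full frame. A rough sketch (the stage numbers below are illustrative assumptions, not measurements from the article):

```python
# Rough motion-to-photon budget for a naive 60 Hz HMD pipeline.
# Each pipelined stage can add up to one 16.7 ms frame of delay.
FRAME_MS = 1000.0 / 60.0

stages_ms = {
    "sensor sampling":  2.0,       # assumed tracker latency
    "game simulation":  FRAME_MS,  # one frame behind
    "render + GPU":     FRAME_MS,  # another frame
    "display scan-out": FRAME_MS,  # panel refresh
}

total = sum(stages_ms.values())
print(f"motion-to-photon: {total:.1f} ms")  # roughly 52 ms, far above ~20 ms
```

Carmack's techniques (late-latched head pose, view bending, time warping) attack exactly these per-stage frame delays to pull the total under the roughly 20 ms perceptibility threshold the article cites.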
head-mounted-display  display  ui  latency  vision  coding  john-carmack 
february 2013 by jm