ai/ml   289


Review: Topaz Sharpen AI is Amazing
I got an email notifying me of the release of Topaz Sharpen AI, a program that enhances details and fixes out-of-focus/blurred shots. I initially expected it to be something similar to Adobe Enhance Details, which slightly improved the details of certain shots and didn’t work for many other images. Topaz provided a demo that was fully functional for 30 days, so I decided to give it a try.
Honestly speaking, I didn’t expect much. AI is the buzzword these days. Every company claims its products feature wonderful AI, but those AIs usually fall short of my expectations.
I’ll give you the conclusion first so that you don’t waste your time: I was very, very impressed with Topaz Labs’ technology. It doesn’t work perfectly with every image and it has some drawbacks, but the overall technology is really amazing.
Let me show you some images I processed using this software.
photography  AI/ML  photo  editing 
march 2019 by rgl7194
Introducing Spectre – Halide
We’re excited to announce Spectre, our second app for iPhone.
Spectre is a computational shutter for iPhone that allows everyone to take brilliant long exposures.
A regular photo captures only a fraction of a second. Taking a photo over several seconds — a long exposure — unlocks all sorts of practical and artistic effects.
Make a bridge at rush hour look perfectly empty...
You no longer have to get up at the crack of dawn to photograph busy tourist spots...
Walk past colored lights to create a beautiful flow of color...
And much more: Play with light sticks and fireworks to create magical images. Turn highways into rivers of light, and water into a dreamlike painting.
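Spectre’s actual pipeline isn’t public, but the core idea of a computational shutter, approximating one long exposure by stacking many short frames, can be sketched in a few lines. A minimal illustration (the burst files are hypothetical placeholders, and a real app would also align the frames first):

```python
# Toy sketch of a computational shutter: approximate one long exposure
# by averaging a burst of short frames. Not Spectre's actual algorithm;
# the input files are hypothetical placeholders.
import glob
import numpy as np
from PIL import Image

frames = [np.asarray(Image.open(p), dtype=np.float64)
          for p in sorted(glob.glob("burst/*.jpg"))]

# Mean blend: static scenery stays sharp while moving lights and water
# smear into trails, the classic long-exposure look.
blended = np.mean(frames, axis=0).astype(np.uint8)
Image.fromarray(blended).save("long_exposure.jpg")
```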
iphone  ios  camera  apps  photography  AI/ML 
march 2019 by rgl7194
Spectre is an AI Long Exposure Camera for iPhone by the Makers of Halide
Developer Benjamin Sandofsky and designer Sebastiaan de With, the duo behind the popular iPhone camera app Halide, have announced a brand new camera app called Spectre. It’s an AI-powered camera that helps you shoot long exposure photos with your iPhone.
Shooting long exposure photos traditionally requires stabilizing your camera and figuring out the right amount of light. Spectre takes care of those things for you, allowing you to focus entirely on the scene you’re capturing.
Want to remove people or moving objects from a crowded scene? Simply point your camera at it and capture a medium or long duration shot — the crowd or objects will be magically removed from the resulting photo.
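That removal trick is consistent with a classic stacking technique: if the static background occupies each pixel in most frames, the per-pixel median discards anything passing through. A sketch of the general idea, not necessarily Spectre’s implementation:

```python
# Remove transient objects (people, cars) by taking the per-pixel median
# of a stack of aligned frames; each pixel keeps whatever value dominates
# the stack, i.e. the static background. Input files are placeholders.
import glob
import numpy as np
from PIL import Image

stack = np.stack([np.asarray(Image.open(p))
                  for p in sorted(glob.glob("burst/*.jpg"))])
background = np.median(stack, axis=0).astype(np.uint8)
Image.fromarray(background).save("crowd_removed.jpg")
```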
iphone  ios  camera  apps  photography  AI/ML 
march 2019 by rgl7194
Spectre: A Computational Approach to Long-Exposure iPhone Photography – MacStories
Spectre is a new specialized camera app from the team that created Halide, one of our favorite camera apps on iOS. The Halide team describes Spectre as a computational shutter for the iPhone, which allows the app to do things like remove people from a crowded scene, create artistic images of rushing water, and produce light trails at night. The same sort of images can be created using traditional cameras, but getting the exposure right, holding the camera absolutely still, and accounting for other factors make them difficult to pull off. With Spectre, artificial intelligence is used to simplify the process and make long-exposure photography accessible to anyone with an iPhone.
If you’ve used Halide, you’ll be at home in Spectre, which shares a similar interface. Overall though, Spectre is simpler than Halide if for no other reason than that it’s tailored for a very specific type of photography.
iphone  ios  camera  apps  photography  AI/ML 
march 2019 by rgl7194
Yes, “algorithms” can be biased. Here’s why | Ars Technica
Op-ed: a computer scientist weighs in on the downsides of AI.
Dr. Steve Bellovin is professor of computer science at Columbia University, where he researches "networks, security, and why the two don't get along." He is the author of Thinking Security and the co-author of Firewalls and Internet Security: Repelling the Wily Hacker. The opinions expressed in this piece do not necessarily represent those of Ars Technica.
Newly elected Rep. Alexandria Ocasio-Cortez (D-NY) recently stated that facial recognition "algorithms" (and by extension all "algorithms") "always have these racial inequities that get translated" and that "those algorithms are still pegged to basic human assumptions. They're just automated assumptions. And if you don't fix the bias, then you are just automating the bias."
She was mocked for this claim on the grounds that "algorithms" are "driven by math" and thus can't be biased—but she's basically right. Let's take a look at why.
First, some notes on terminology—and in particular a clarification for why I keep putting scare quotes around the word "algorithm." As anyone who has ever taken an introductory programming class knows, algorithms are at the heart of computer programming and computer science. (No, those two are not the same, but I won't go into that today.) In popular discourse, however, the word is widely misused.
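The op-ed’s core claim, that an unexamined model simply automates historical bias, is easy to demonstrate with a toy experiment. In the sketch below (all data synthetic and hypothetical), two groups have identical skill distributions, but the historical labels held one group to a stricter bar; a model fit to those labels reproduces the disparity even though the fitting itself is "just math":

```python
# A model trained on biased labels faithfully automates the bias.
# All data here is synthetic and hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)      # e.g. inferred from a zip-code proxy
skill = rng.normal(0.0, 1.0, n)    # true qualification, identical by group

# Historical decisions: group 1 was held to a stricter bar by past humans.
hired = (skill > np.where(group == 1, 1.0, 0.0)).astype(int)

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)
pred = model.predict(X)
for g in (0, 1):
    print(f"group {g}: predicted hire rate {pred[group == g].mean():.2f}")
# Equal skill, unequal predictions: the bias has been automated.
```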
op-ed  bias  algorithm  math  AI/ML 
january 2019 by rgl7194
Introduction to Natural Language Processing
Published on Jan 16, 2019
This is an update of a talk I originally gave in 2010. I had intended to make a wholesale update to all the slides, but noticed that one of them was worth keeping verbatim: a snapshot of the state of the art back then (see slide 38). Less than a decade has passed since then but there are some interesting and noticeable changes. For example, there was no word2vec, GloVe or fastText, or any of the neurally-inspired distributed representations and frameworks that are now so popular. Also no mention of sentiment analysis (maybe that was an oversight on my part, but I rather think that what we perceive as a commodity technology now was just not sufficiently mainstream back then).
Also if you compare with Jurafsky and Martin's current take on the state of the art (see slide 39), you could argue that POS tagging, NER, IE and MT have all made significant progress too (which I would agree with). I am not sure I share their view that summarisation is in the 'still really hard' category; but like many things, it depends on how & where you set the quality bar.
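As a marker of how commoditized those distributed representations have become, training word vectors is now a few lines with off-the-shelf tooling. A minimal gensim sketch (the three-sentence corpus is a stand-in; meaningful embeddings need real corpora):

```python
# Train toy word2vec embeddings; the corpus is a stand-in for real text,
# so the resulting neighbours are illustrative only.
from gensim.models import Word2Vec

corpus = [
    ["the", "cat", "sat", "on", "the", "mat"],
    ["the", "dog", "sat", "on", "the", "rug"],
    ["cats", "and", "dogs", "are", "pets"],
]
model = Word2Vec(corpus, vector_size=50, window=3, min_count=1, epochs=50)
print(model.wv.most_similar("cat", topn=3))  # nearest words in vector space
```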
search  AI/ML  presentation 
january 2019 by rgl7194
Introduction to Natural Language Processing (slideshow) | Information Interaction
Earlier this week I gave a talk called “Introduction to NLP” as part of a class I am currently teaching at the University of Notre Dame. This is an update of a talk I originally gave in 2010, whilst working for Endeca. I had intended to make a wholesale update to all the slides, but noticed that one of them was worth keeping verbatim: a snapshot of the state of the art back then (see slide 38). Less than a decade has passed since then (that’s a short time to me), but there are some interesting and noticeable changes. For example, there is no word2vec, GloVe or fastText, or any of the neurally-inspired distributed representations and frameworks that are now so popular (let alone BERT, ELMo & the latest wave). Also no mention of sentiment analysis: maybe that was an oversight on my part, but I rather think that what we perceive as a commodity technology now was just not sufficiently mainstream back then.
Also if you compare with Jurafsky and Martin’s current take on the state of the art (see slide 39), you could argue that POS tagging, NER, IE and MT have all made significant progress too since then (which I would agree with). I am not sure I share their view that summarisation is in the ‘still really hard’ category; but like many things, it depends on how & where you set the quality bar. Anyway, I’ve appended the slides below. I’ll aim to post further materials as we work through the course (watch this space).
search  AI/ML  presentation 
january 2019 by rgl7194
Best Maker YouTube Channels/Temi/Listen Notes | Cool Tools
Automatic transcripts
When doing interviews, I like to have a transcript of the conversation. This is useful for fans of podcasts, and for journalism. The best transcripts are done by humans, but I can get very cheap, very fast transcripts that are 90-95% accurate done by AI. (Accuracy depends on the quality of the recording and on accents.) Temi will give me a transcript for 10 cents per minute of audio ($6/hr), delivered with about an hour’s turnaround. The Word doc or PDF output will have time stamps on it, making it easy to go back and find the actual audio for correction if needed. The Temi transcript is accurate enough to find key passages; with one listen-through I can quickly clean it up for public consumption. — KK

Podcast search engine
One way to find new podcasts is a website called Listen Notes — a search engine for almost all podcasts around the world. You can search for topics or a specific person and find related episodes. Or set alerts for keyword mentions. I’m not a daily podcast listener but every once in a while I’ll want to hear what people are saying about a certain news story or random topic on my mind, and in those cases Listen Notes is very useful. — CD
cool_tools  transcript  AI/ML  podcast  search 
january 2019 by rgl7194
Daring Fireball: John Giannandrea Named to Apple’s Executive Team
Apple Newsroom:
Apple today announced that John Giannandrea has been named to the company’s executive team as senior vice president of Machine Learning and Artificial Intelligence Strategy. He joined Apple in April 2018.
Giannandrea oversees the strategy for AI and Machine Learning across all Apple products and services, as well as the development of Core ML and Siri technologies. His team’s focus on advancing and tightly integrating machine learning into Apple products is delivering more personal, intelligent and natural interactions for customers while protecting user privacy.
Giannandrea, you will recall, came to Apple from Google, where he was in charge of AI and search. It is quite possible that he is the best person in the world Apple could have hired to head up artificial intelligence and machine learning. Apple’s goal, obviously, is to meet or exceed Google in these areas — which is to say to lead the industry.
apple  AI/ML  google  daring_fireball  press_release 
december 2018 by rgl7194
Optimizing Siri on HomePod in Far‑Field Settings - Apple
The typical audio environment for HomePod has many challenges — echo, reverberation, and noise. Unlike Siri on iPhone, which operates close to the user’s mouth, Siri on HomePod must work well in a far-field setting. Users want to invoke Siri from many locations, like the couch or the kitchen, without regard to where HomePod sits. A complete online system, which addresses all of the environmental issues that HomePod can experience, requires a tight integration of various multichannel signal processing technologies. Accordingly, the Audio Software Engineering and Siri Speech teams built a system that integrates both supervised deep learning models and unsupervised online learning algorithms and that leverages multiple microphone signals. The system selects the optimal audio stream for the speech recognizer by using top-down knowledge from the “Hey Siri” trigger phrase detectors. In this article, we discuss the machine learning techniques we use for online signal processing, as well as the challenges we faced and our solutions for achieving environmental and algorithmic robustness while ensuring energy efficiency.
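The trigger-phrase-based stream selection described above can be caricatured in a few lines: score each candidate (e.g. beamformed) audio stream with the “Hey Siri” detector and hand the best one to the recognizer. A toy sketch; the scoring function is a hypothetical stand-in for Apple’s trained model:

```python
# Toy sketch of trigger-phrase based stream selection. The scoring
# function is a hypothetical stand-in for a trained "Hey Siri" detector.
import numpy as np

def trigger_score(stream: np.ndarray) -> float:
    """Stand-in for the detector's confidence; here, energy as a proxy."""
    return float(np.mean(stream ** 2))

def select_stream(streams: list[np.ndarray]) -> np.ndarray:
    """Pick the stream the detector likes best; feed it to the recognizer."""
    return streams[int(np.argmax([trigger_score(s) for s in streams]))]

# e.g. candidate streams from different beamformer look directions
streams = [np.random.randn(16000) * g for g in (0.2, 1.0, 0.5)]
best = select_stream(streams)
```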
apple  AI/ML  research  report  homepod  siri  audio 
december 2018 by rgl7194
Apple research paper outlines how Apple has optimized Siri to work on HomePod | iLounge News
Apple has published a new entry in its Machine Learning Journal providing in-depth technical information on how Apple designed Siri on the HomePod to deal with hearing and understanding a user’s voice in the larger spaces in which HomePod is intended to operate. Titled Optimizing Siri on HomePod in Far‑Field Settings, the paper explains how Siri on HomePod had to be designed to work in “challenging usage scenarios” such as dealing with users standing much farther away from the HomePod than they typically would be from their iPhone, as well as dealing with loud music playback from the HomePod itself, and making out the user speaking despite other sound sources in a room like a TV or household appliances. In the article, Apple goes on to outline how the HomePod’s six microphones and multichannel signal processing system built into its A8 chip work together to adapt to a variety of changing conditions while still making sure that Siri can hear the person speaking and respond appropriately. Machine learning algorithms are employed as part of the signal processing to create advanced algorithms for common features like echo cancellation and noise reduction, improving Siri’s reliability across a wide variety of frequently changing environments.
apple  AI/ML  research  report  homepod  siri  audio 
december 2018 by rgl7194
Apple published a surprising amount of detail about how the HomePod works | Ars Technica
Machine learning is a big focus at Apple right now—a blog post explains why.
Today, Apple published a long and informative blog post by its audio software engineering and speech teams about how they use machine learning to make Siri responsive on the HomePod, and it reveals a lot about why Apple has made machine learning such a focus of late.
The post discusses working in a far-field setting, where users may call on Siri from any number of locations around the room relative to the HomePod’s position. The premise is essentially that this makes Siri harder to get right on the HomePod than on the iPhone; the device must also compete with loud music playback from itself.
Apple addresses these issues with multiple microphones along with machine learning methods—specifically:
Mask-based multichannel filtering using deep learning to remove echo and background noise
Unsupervised learning to separate simultaneous sound sources and trigger-phrase based stream selection to eliminate interfering speech
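Mask-based filtering of the kind listed above generally means predicting a time-frequency mask and multiplying it onto the noisy spectrogram. A minimal sketch, with a crude heuristic standing in for Apple’s deep network:

```python
# Sketch of mask-based enhancement: a [0, 1] time-frequency mask is
# applied to the noisy magnitude spectrogram. The mask "model" below is
# a stand-in heuristic, not Apple's trained network.
import numpy as np

def predict_mask(mag: np.ndarray) -> np.ndarray:
    """Stand-in for a DNN mask estimator: suppress bins near the noise floor."""
    noise_floor = np.percentile(mag, 20, axis=1, keepdims=True)
    return np.clip(1.0 - noise_floor / np.maximum(mag, 1e-8), 0.0, 1.0)

# mag: magnitude spectrogram, shape (freq_bins, time_frames)
mag = np.abs(np.random.randn(257, 100)) + 1.0
enhanced = predict_mask(mag) * mag  # cleaner input for "Hey Siri" and ASR
```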
apple  AI/ML  research  report  homepod  siri  audio 
december 2018 by rgl7194
Apple's small Silk Labs purchase pushes AI to the edge | Computerworld
Apple’s AI push into on-device machine learning continues with news of its acquisition of Silk Labs breaking just as the U.S. heads into its annual holiday season.
The Information states Apple quietly acquired Silk Labs earlier this year.
Apple’s new purchase seems a good one.
The acquisition closely matches Apple’s feelings about the need to put AI/machine intelligence at the edge. Devices must be smart enough to function when they are offline and secure enough not to damage the privacy of customers.
apple  M&A  business  AI/ML  privacy 
november 2018 by rgl7194
How Driverless Cars Could Make Us Better Drivers - WhoWhatWhy
Algorithms to the Rescue!
Scientists are constantly striving to solve all the world’s problems — from global warming to oceans choking on plastics. Now robotics experts are turning their attention to traffic, using computer modeling to determine whether self-driving cars could ease the stressful, dangerous problem of congestion.
The results suggest that Artificial Intelligence (AI) could make driving safer and more pleasant, using autonomous vehicles as role models for other drivers on the roads.
Of course, driverless technology is still in its infancy, and there have been tragic consequences during trials of self-driving cars. But this new research was virtual — its results resemble a computer game more than a real-life experiment with actual traffic flow, as shown in the video below.
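The study itself isn’t reproduced here, but the flavor of such virtual experiments, a ring road where one smoothly driven “role model” vehicle mixes with twitchier human-like drivers, can be sketched as a toy simulation (not the researchers’ model; all parameters invented):

```python
# Toy ring-road car-following simulation. Car 0 plays the smooth "role
# model"; the rest are noisy, twitchy human-like drivers. Illustrative
# only; not the model used in the research described above.
import numpy as np

N, L, DT, STEPS = 20, 200.0, 0.1, 5000
rng = np.random.default_rng(1)
pos = np.linspace(0.0, L, N, endpoint=False)   # evenly spaced around the ring
vel = np.full(N, 5.0)

for _ in range(STEPS):
    gap = (np.roll(pos, -1) - pos) % L         # distance to the car ahead
    target = np.clip(gap - 2.0, 0.0, 10.0)     # headway-based desired speed
    gain = np.full(N, 0.8)                     # quick human-like reactions
    gain[0] = 0.1                              # the AV responds more gently
    accel = gain * (target - vel)
    accel[1:] += rng.normal(0.0, 0.5, N - 1)   # human noise; the AV has none
    vel = np.clip(vel + accel * DT, 0.0, None)
    pos = (pos + vel * DT) % L

print(f"speed spread around the ring: {vel.std():.2f} m/s")
```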
cars  driving  technology  self-driving  AI/ML 
november 2018 by rgl7194
Tech is coming for the weed industry at MJBizCon - The Verge
Now that cannabis is big business, entrepreneurs are eager to bring tech to the weed industry, which means that cannabis is on the blockchain and machine learning has come for marijuana cultivation.
Both technologies were on display at this year’s Marijuana Business Daily Conference, or MJBizCon, in Las Vegas. Six years ago, MJBizCon started with 17 exhibitors and 400 attendees in Denver. Since then, more states have legalized marijuana, which means more demand for cannabis and more opportunities to make money. For example, analysts expect that the cannabis market in Michigan, which recently became the first Midwestern state to approve recreational marijuana, will reach nearly $2 billion annually in a few years. Globally, the cannabis market is expected to expand from $13 billion this year to $32 billion in five years, experts say. That growth is why this year’s MJBizCon has over 1,000 companies exhibiting and 25,000 attendees from 63 countries.
marijuana  business  technology  blockchain  AI/ML 
november 2018 by rgl7194
How The Wall Street Journal is preparing its journalists to detect deepfakes
“We have seen this rapid rise in deep learning technology and the question is: Is that going to keep going, or is it plateauing? What’s going to happen next?”
Artificial intelligence is fueling the next phase of misinformation. The new type of synthetic media known as deepfakes poses major challenges for newsrooms when it comes to verification. This content is indeed difficult to track: Can you tell which of the images below is a fake?
(Check the bottom of this story for the answer.)
We at The Wall Street Journal are taking this threat seriously and have launched an internal deepfakes task force led by the Ethics & Standards and the Research & Development teams. This group, the WSJ Media Forensics Committee, comprises video, photo, visuals, research, platform, and news editors who have been trained in deepfake detection. Beyond this core effort, we’re hosting training seminars with reporters, developing newsroom guides, and collaborating with academic institutions such as Cornell Tech to identify ways technology can be used to combat this problem.
“Raising awareness in the newsroom about the latest technology is critical,” said Christine Glancey, a deputy editor on the Ethics & Standards team who spearheaded the forensics committee. “We don’t know where future deepfakes might surface so we want all eyes watching out for disinformation.”
Here’s an overview for journalists of the insights we’ve gained and the practices we’re using around deepfakes.
news  factcheck  AI/ML  fake 
november 2018 by rgl7194
NY Times Using Google AI to Digitize 5M+ Photos and Find 'Untold Stories'
The New York Times has teamed up with Google Cloud to digitize five to seven million old photos in its archive. Google’s AI will also be tasked with unearthing “untold stories” in the massive trove of historical images.
“For over 100 years, The Times has archived approximately five to seven million of its old photos in hundreds of file cabinets three stories below street level near their Times Square offices in a location called the ‘morgue’,” Google writes. “Many of the photos have been stored in folders and not seen in years. Although a card catalog provides an overview of the archive’s contents, there are many details in the photos that are not captured in an indexed form.”
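Google hasn’t published the Times pipeline in detail, but the kind of annotation involved, pulling the text off a print and tagging what the photo shows, looks roughly like this with the Cloud Vision Python client (the file path is a hypothetical placeholder):

```python
# Sketch of the kind of annotation described above using the Google
# Cloud Vision client; not the Times' actual pipeline. The scan path
# is a hypothetical placeholder.
from google.cloud import vision

client = vision.ImageAnnotatorClient()
with open("morgue/scan_0001.jpg", "rb") as f:
    image = vision.Image(content=f.read())

# OCR the notes typed or stamped on the print...
text = client.text_detection(image=image).text_annotations
# ...and label the subject matter so the photo becomes searchable.
labels = client.label_detection(image=image).label_annotations

print(text[0].description if text else "(no text found)")
print([lab.description for lab in labels[:5]])
```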
google  photography  nytimes  AI/ML  digital  scanning 
november 2018 by rgl7194


related tags

ai  algorithm  apple  apps  audio  bayesian  bias  blockchain  book  business  camera  cars  cellphones  cool_tools  daring_fireball  datascience  datasets  digital  driving  editing  ethics  factcheck  fairness  fake  fake_news  favorites  google  homepod  ifttt  ios  iphone  m&a  machine-learning  marijuana  math  ml  modeling  news  nytimes  online_learning  op-ed  optimization  parental_controls  photo  photography  pocket  podcast  presentation  press_release  privacy  propaganda  read  report  research  scanning  search  self-driving  siri  social_media  spatial  technical_articles  technology  transcript  video  visualization 
