
Brave uncovers widespread surveillance of UK citizens by private companies embedded on UK council websites • Brave
Johnny Ryan is chief policy officer at Brave, an independent browser:
<p>“<a href="https://brave.com/wp-content/uploads/2020/02/Surveillance-on-UK-council-websites_compressed_version.pdf">Surveillance on UK council websites</a>”, a new report from Brave, reveals the extent of private companies’ surveillance of UK citizens when they seek help for addiction, disability, and poverty from their local government authorities.

None of the data collecting companies recorded in this study had received consent from the website visitor to lawfully process data. 

• Nearly all councils in the UK permit at least one company to learn about the behaviour of people visiting their websites<br />• People seeking information about disability, poverty, drugs and alcoholism services are profiled by data brokers on some council websites<br />• 198 council websites in the UK use the “real-time bidding” (RTB) form of advertising. Real-time bidding is the biggest data breach ever recorded in the UK. Though illegality is not in dispute, the UK Information Commissioner (ICO) has failed to act<br />• Google owns all five of the top embedded elements loaded by UK council websites, giving it the power to know what virtually anyone in the UK views on council sites<br />• Over a quarter of the UK population is served by councils that embed Twitter, Facebook, and others on their websites, leaking data about what sensitive issues people read about to these companies.</p>


Hard to believe that none of the companies had consent from the visitor; isn't that why we're always clicking cookie settings?
advertising  privacy  government 
2 days ago by charlesarthur
Chinese hacking is alarming. So are data brokers • The New York Times
Charlie Warzel:
<p>Mr. Begor, Equifax’s chief executive, noted that “cybercrime is one of the greatest threats facing our nation today.” But what he ignored was his own company’s role in creating a glaring vulnerability in the system. If we’re to think of cybercrime like an analog counterpart, then Equifax is a bank on Main Street that forgot to lock its vault.

Why rob a bank? Because that’s where the money is. Why hack a data broker? Because that’s where the information is.

The analogy isn’t quite apt, though, because Equifax, like other data brokers, doesn’t fill its vaults with deposits from willing customers. Equifax amasses personal data on millions of Americans whether we want it to or not, creating valuable profiles that can be used to approve or deny loans or insurance claims. That data, which can help dictate the outcome of major events in our lives (where we live, our finances, even potentially our health), then becomes a target.

From this vantage, it’s unclear why data brokers should continue to collect such sensitive information at scale. Setting aside Equifax’s long, sordid history of privacy concerns and its refusal to let Americans opt out of collection, the very existence of such information, stored by private companies with little oversight, is a systemic risk.</p>


It was difficult to pick a section to extract: this is a terrific article. But who's going to rein in the data brokers?
china  privacy  cybercrime 
3 days ago by charlesarthur
Watching you watch: the tracking system of over-the-top TV streaming devices • the morning paper
Adrian Colyer:
<p>The <a href="https://dl.acm.org/doi/10.1145/3319535.3354198">results from this paper</a> are all too predictable: channels on Over-The-Top (OTT) streaming devices are insecure and riddled with privacy leaks. The authors quantify the scale of the problem, and note that users have even less viable defence mechanisms than they do on web and mobile platforms. When you watch TV, the TV is watching you.
<p>In this paper, we examine the advertising and tracking ecosystems of Over-The-Top ("OTT") streaming devices, which deliver Internet-based video content to traditional TVs/display devices. OTT devices refer to a family of services and devices that either directly connect to a TV (e.g., streaming sticks and boxes) or enable functionality within a TV (e.g. smart TVs) to facilitate the delivery of Internet-based video content.</p>


The study focuses on Roku and Amazon Fire TV, which together account for between 59% and 65% of the global market…

…Trackers are everywhere! On Roku TV, the most prevalent tracker is for Google’s doubleclick.net (975/1000 channels). On Amazon Fire TV it is amazon-adsystem.com (687/1000). Facebook is notably less present on TV than it is in mobile and web channels…

…Nine of the top 100 channels on Roku and 14 of the top 100 channels on Amazon Fire TV leak the title of each video watched to a tracking domain. The Roku channels leaked this information over unencrypted connections.

79% of Roku channels send at least one request in cleartext, and 76% of Fire TV channels.</p>


Amazing how pretty much every platform has to rediscover security as a follow-up, but is fantastically good at implementing whatever the advertising world wants. A little reminder that you, the customer, mean far less than them, the advertisers.
advertising  privacy  hardware 
6 days ago by charlesarthur
Website data leaks pose greater risks than most people realize • Harvard School of Engineering and Applied Sciences
Adam Zewe:
<p>The students [Kian Attari and Dasha Metropolitansky] found a dataset from a breach of credit reporting company Experian, which didn’t get much news coverage when it occurred in 2015. It contained personal information on six million individuals. The dataset was divided by state, so Metropolitansky and Attari decided to focus on Washington D.C. The data included 69 variables—everything from a person’s home address and phone number to their credit score, history of political donations, and even how many children they have.

But this was data from just one leak in isolation. Metropolitansky and Attari wondered if they could identify an individual across all other leaks that have occurred, combining stolen personal information from perhaps hundreds of sources.

There are sites on the dark web that archive data leaks, allowing an individual to enter an email and view all leaks in which the email appears. Attari built a tool that performs this look-up at scale.

“The program takes in a list of personally identifiable information, such as a list of emails or usernames, and searches across the leaks for all the credential data it can find for each person,” he said.

The Experian Washington dataset found by Metropolitansky and Attari contained more than 40,000 unique email addresses. Attari extracted these unique emails and entered them into the tool, which searched for all data leaks in which the emails appear as well as leaked credentials, such as passwords and usernames.

The tool output a dataset of the leaks and credentials associated with the Experian email addresses. Metropolitansky then joined this data with the complete 69-variable Experian dataset, linking users’ cyber identities with their real-world identities.

“What we were able to do is alarming because we can now find vulnerabilities in people’s online presence very quickly,” Metropolitansky said. “For instance, if I can aggregate all the leaked credentials associated with you in one place, then I can see the passwords and usernames that you use over and over again.”

Of the 96,000 passwords contained in the dataset the students used, only 26,000 were unique.

“We also showed that a cyber criminal doesn’t have to have a specific victim in mind. They can now search for victims who meet a certain set of criteria,” Metropolitansky said.</p>
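
The aggregation the students describe is straightforward to picture. Here is a minimal Python sketch of that kind of cross-leak join — all records, field names, and functions are hypothetical illustrations, not the students' actual tool:

```python
from collections import defaultdict

# Hypothetical leak records: (leak source, email, username, password)
leaks = [
    ("siteA-2016", "alice@example.com", "alice",   "hunter2"),
    ("siteB-2018", "alice@example.com", "alice99", "hunter2"),
    ("siteC-2019", "bob@example.com",   "bob",     "letmein"),
]

def credentials_by_email(records):
    """Group every leaked credential under the email it belongs to."""
    profile = defaultdict(list)
    for source, email, username, password in records:
        profile[email].append(
            {"source": source, "username": username, "password": password})
    return profile

def reused_passwords(profile):
    """Emails whose password recurs across more than one leak —
    the reuse pattern Metropolitansky describes."""
    reused = {}
    for email, creds in profile.items():
        passwords = [c["password"] for c in creds]
        if len(passwords) > len(set(passwords)):
            reused[email] = sorted(set(passwords))
    return reused

profile = credentials_by_email(leaks)
print(reused_passwords(profile))  # alice reuses "hunter2" across two leaks
```

Join the output against a 69-variable demographic dataset on the email column and you have exactly the linkage of cyber and real-world identities the article warns about.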
privacy  data  hacking 
10 days ago by charlesarthur
FCC confirms carriers ‘apparently’ broke the law by selling customer location data • The Verge
Chris Welch:
<p>The controversy originated with a Motherboard report that made clear just how negligent carriers including T-Mobile, Sprint, and AT&T had gotten with selling the real-time location of their wireless subscribers. That information could trickle down to bounty hunters and complete strangers for a worryingly small amount of money — without the wireless customers ever having a clue.

Carriers tried to ease the resulting blowback by saying either they would stop their location sales practices or had already done so. AT&T even went so far as to argue it wasn’t violating any laws. But US lawmakers still wanted a better understanding of how such sensitive data was getting around so freely, which led Energy and Commerce Committee Chair Rep. Frank Pallone (D-NJ) to summon Pai to an “emergency briefing” that the FCC chairman ended up skipping.

Now, after <a href="https://www.fcc.gov/document/chairman-pai-letter-congress-wireless-location-investigation">what Pai says</a> was an “extensive investigation,” the question turns to just how severely the FCC will penalize the mobile providers involved. Will it be something substantial or merely a wrist slap that leaves no lasting reminder for the companies that gave away some of the most sensitive data your phone can produce?</p>


Let's see if Pai is going to do anything material, or if he's just in place to Do Things That Are The Opposite Of What Obama Did. It seems to me that location data is more valuable than facial recognition systems.
location  privacy 
13 days ago by charlesarthur
Avast Antivirus is shutting down its data collection arm, effective immediately • VICE
Jason Koebler:
<p>[Motherboard/PC Mag's] investigation found that Avast, through a subsidiary called Jumpshot, made millions of dollars following its users around the internet. Jumpshot told its clients, which include Microsoft, Google, McKinsey, Pepsi, Home Depot, Yelp, and many others, that it could track “every search. Every click. Every buy. On every site.”

Avast CEO Ondrej Vlcek wrote in a public letter Thursday morning that he and the company’s board of directors have decided to “terminate the Jumpshot data collection and wind down Jumpshot’s operations, with immediate effect.”

Earlier Thursday, the company announced that it had agreed to buy back a 35% stake in Jumpshot that it sold to the data analytics and marketing company Ascential last year. In July, Avast said that the 35% stake in Jumpshot was worth $60.76m.

Vlcek, who became CEO of Avast seven months ago, said he has spent the first few months of his job “re-evaluating every portion of our business,” and that the Jumpshot revelations had eroded trust in the company: "I feel personally responsible and I would like to apologize to all concerned."

“I came to the conclusion that the data collection business is not in line with our privacy priorities as a company in 2020 and beyond,” he wrote. “It is key to me that Avast’s sole purpose is to make the world a safer place, and I knew that ultimately, everything in the company would have to become aligned with that North Star of ours.”

Vlcek said that the decision to shut down Jumpshot “will regrettably impact hundreds of loyal Jumpshot employees and dozens of its customers [but] it is absolutely the right thing to do.”</p>


They had <em>hundreds</em> of people working on Jumpshot? I'm guessing not a huge number of them were engineers.
jumpshot  data  privacy 
17 days ago by charlesarthur
Ring doorbell app packed with third-party trackers • Electronic Frontier Foundation
Bill Budington:
<p>An investigation by EFF of the Ring doorbell app for Android found it to be packed with third-party trackers sending out a plethora of customers’ personally identifiable information (PII). Four main analytics and marketing companies were discovered to be receiving information such as the names, private IP addresses, mobile network carriers, persistent identifiers, and sensor data on the devices of paying customers.

The danger in sending even small bits of information is that analytics and tracking companies are able to combine these bits together to form a unique picture of the user’s device…

…Ring has exhibited a pattern of behavior that attempts to mitigate exposure to criticism and scrutiny while benefiting from the wide array of customer data available to them…

…Our testing, using Ring for Android version 3.21.1, revealed PII delivery to branch.io, mixpanel.com, appsflyer.com and facebook.com. Facebook, via its Graph API, is alerted when the app is opened and upon device actions such as app deactivation after screen lock due to inactivity. Information delivered to Facebook (even if you don’t have a Facebook account) includes time zone, device model, language preferences, screen resolution, and a unique identifier (anon_id), which persists even when you reset the OS-level advertiser ID.

Branch, which describes itself as a “deep linking” platform, receives a number of unique identifiers (device_fingerprint_id, hardware_id, identity_id) as well as your device’s local IP address, model, screen resolution, and DPI…

…Ring gives MixPanel the most information by far. Users’ full names, email addresses, device information such as OS version and model, whether bluetooth is enabled, and app settings such as the number of locations a user has Ring devices installed in, are all collected and reported to MixPanel. </p>
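
The EFF's point about "small bits" combining is the essence of device fingerprinting. A generic sketch of the idea — this is not Ring's or any tracker's actual code, and every attribute name here is a hypothetical example:

```python
import hashlib

def device_fingerprint(attrs: dict) -> str:
    """Combine individually innocuous attributes into one stable ID.

    Each field alone identifies almost nothing; hashed together in a
    canonical order they form a near-unique, persistent identifier."""
    canonical = "|".join(f"{k}={attrs[k]}" for k in sorted(attrs))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

phone = {
    "model": "Pixel 4",
    "os": "Android 10",
    "screen": "1080x2280",
    "dpi": 444,
    "timezone": "Europe/London",
    "carrier": "EE",
}
print(device_fingerprint(phone))  # same device -> same ID, in every app
```

Because the ID is derived from hardware and settings rather than stored, resetting the OS-level advertiser ID doesn't change it — which is why identifiers like the `anon_id` described above can persist.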
ring  amazon  privacy  surveillance 
17 days ago by charlesarthur
Facebook's 'Clear History' tool doesn't clear worth a damn • Gizmodo
Shoshana Wodinsky:
<p>“To help shed more light on these practices that are common yet not always well understood, today we’re introducing a new way to view and control your off-Facebook activity,” Zuckerberg said in the post. “<a href="https://www.facebook.com/off-facebook-activity">Off-Facebook Activity</a> lets you see a summary of the apps and websites that send us information about your activity, and clear this information from your account if you want to.”

Zuck’s use of the phrases “control your off-Facebook activity” and “clear this information from your account” is kinda misleading—you’re not really controlling or clearing much of anything. By using this tool, you’re just telling Facebook to put the data it has on you into two separate buckets that are otherwise mixed together. Put another way, Facebook is offering a one-stop-shop to opt-out of any ties between the sites and services you peruse daily that have some sort of Facebook software installed and your own-platform activity on Facebook or Instagram.

The only thing you’re clearing is a connection Facebook made between its data and the data it gets from third parties, not the data itself.

As an ad-tech reporter, my bread and butter involves downloading shit that does god-knows-what with your data, which is why I shouldn’t’ve been surprised that Facebook hoovered data from more than 520 partners across the internet—either sites I’d visited or apps I’d downloaded. For Gizmodo alone, Facebook tracked “252 interactions” drawn from the handful of plug-ins our blog has installed. (To be clear, you’re going to run into these kinds of trackers e.v.e.r.y.w.h.e.r.e.—not just on our site.)</p>


It shows six months' data, though it's not clear whether that's all it keeps. Mine has three: a company website I ordered pet food from, a small news site, and a small crowdfunding site. I certainly didn't consent to any of them handing data to Facebook.

However, it says "this list doesn't show all of the activity that we've received. Activity that is not shown includes information that we've received when you're not logged in to Facebook, or when we can't confirm that you've previously used Facebook on that device." Surely the very thing we want to know is <em>what it sees when we're not logged in</em>. Clearing your history isn't much help either: "We'll continue to receive your activity from the businesses and organisations that you visit in the future."

Useless, Facebook. Useless.
facebook  data  privacy 
18 days ago by charlesarthur
Leaked documents expose the secretive market for your web browsing data • VICE
Joseph Cox (VICE) and Michael Kan (PC Mag):
<p>An antivirus program used by hundreds of millions of people around the world is selling highly sensitive web browsing data to many of the world's biggest companies, a joint investigation by Motherboard and PCMag has found. Our report relies on leaked user data, contracts, and other company documents that show the sale of this data is both highly sensitive and is in many cases supposed to remain confidential between the company selling the data and the clients purchasing it.

The documents, from a subsidiary of the antivirus giant Avast called Jumpshot, shine new light on the secretive sale and supply chain of peoples' internet browsing histories. They show that the Avast antivirus program installed on a person's computer collects data, and that Jumpshot repackages it into various different products that are then sold to many of the largest companies in the world. Some past, present, and potential clients include Google, Yelp, Microsoft, McKinsey, Pepsi, Sephora, Home Depot, Condé Nast, Intuit, and many others. Some clients paid millions of dollars for products that include a so-called "All Clicks Feed," which can track user behavior, clicks, and movement across websites in highly precise detail.

Avast claims to have more than 435 million active users per month, and Jumpshot says it has data from 100 million devices. Avast collects data from users that opt-in and then provides that to Jumpshot, but multiple Avast users told Motherboard they were not aware Avast sold browsing data, raising questions about how informed that consent is.</p>


I'll go with… not very informed? Antivirus: the only thing worse is viruses.
security  advertising  privacy  antivirus 
19 days ago by charlesarthur
DNA collection at the border threatens the privacy of all Americans • The New York Times
Daniel I. Morales, Natalie Ram and Jessica L. Roberts:
<p>How we treat the people that cross our borders speaks to our identity as a nation. Immigrants are Americans of the future and the criteria we use to select or bar immigrants reflect our aspirations for the society we wish to become. The new DNA collection program may yet revive darker, eugenic impulses in immigration history. Modern, quota-based immigration law was born of a desire to improve the “quality” of America’s racial stock by drastically limiting immigration from peoples “scientifically” believed to be less intelligent than other groups. Italians and other southern European immigrants, for example, were granted fewer visas based on this false science.

It is a small leap from requiring immigrants to submit their DNA to verify familial relationships, or to mitigate future criminal risk (the pretexts the government has cited to justify its recent policy change) to requiring DNA screening of immigrants for health, disability, intelligence or disease. These screens for “fitness”— likely based on questionable science — could ultimately be used to deny entry into the United States or, if discovered later, as a basis for expulsion. Regardless of reliability we would not support genetic screening for fitness. Courts have usually failed to protect immigrants from such impulses, so it is up to citizens to learn from this history and decide that building a society this way is unacceptable.</p>


The point that DNA could be used to deny entry, and then might be expanded to the general population, is a good one. If you think that it couldn't possibly happen, look at the utter inability of the American system to rein in Trump (or his mini-me, Stephen Miller), and cast that forward a few years.
privacy  surveillance  dna 
19 days ago by charlesarthur
23andMe lays off 100 people: CEO Anne Wojcicki explains why • CNBC
Christina Farr:
<p>Home DNA-testing company 23andMe is laying off about 100 people, or 14% of its staff, on Thursday, <a href="https://www.cnbc.com/2019/08/25/dna-tests-from-companies-like-23andme-ancestry-see-sales-slowdown.html">in the wake of declining sales</a>.

The layoffs include the operations teams, which were focused on the company’s growth and scaling efforts, as well as other teams. In the coming months, the company plans to tighten its focus on the direct-to-consumer business and its therapeutics arm while scaling back its clinical studies arm.

CEO Anne Wojcicki told CNBC she’s been “surprised” to see the market starting to turn.

Wojcicki has theories, but she doesn’t have clear proof for why consumers are shying away from getting tests that reveal their percentage of Irish heritage, propensity for a favorite ice cream flavor, or whether they have a limited set of variants that are associated with breast cancer. Either way, she notes, she’s downsizing because it’s “what the market is ready for.”

“This has been slow and painful for us,” she said.</p>


The reality is she doesn't know why it's slowing down; maybe privacy, maybe economic concerns. Or maybe once you get past the early adopters, people don't care about their genetic ancestry, and don't really want to know their genetic future. That puts a very definite ceiling on sales.
privacy  dna  23andme  genetics 
20 days ago by charlesarthur
New 'transformational' code to protect children's privacy online • BBC News
<p>The code includes a list of 15 standards that companies behind online services are expected to comply with to protect children's privacy.

Examples of online services which are included are toys which are connected to the internet, apps, social media platforms, online games, educational websites and streaming services.

Firms who design, develop or run such products must provide a "baseline" of data protection for children, the code says.

The standards also include:<br />• Location settings that would allow a child's location to be shared should be switched off by default<br />• Privacy settings to be set to high by default and nudge techniques to encourage children to weaken their settings should not be used

"I believe that it will be transformational," Ms Denham told the Press Association. "I think in a generation from now when my grandchildren have children they will be astonished to think that we ever didn't protect kids online. I think it will be as ordinary as keeping children safe by putting on a seat belt."

Ms Denham said the move was widely supported by firms, although added that the gaming industry and some other tech companies expressed concern about their business model.

She added: "We have an existing law, GDPR, that requires special treatment of children and I think these 15 standards will bring about greater consistency and a base level of protection in the design and implementation of games and apps and websites and social media."</p>


As Denham also points out, 20% of internet users in Britain are children. The hope is that this code will come into force in autumn of 2021. Fingers crossed.
internet  privacy  children 
25 days ago by charlesarthur
Regarding Reuters’s report that Apple dropped plan for encrypting iCloud backups • Daring Fireball
John Gruber:
<p>[the Reuters journalist who wrote the scoop, Joseph] Menn is a solid reporter and I have no reason to doubt what he is reporting. What I suspect though, based on (a) everything we all know about Apple, and (b) my own private conversations over the last several years, with rank-and-file Apple sources who’ve been directly involved with the company’s security engineering, is that Menn’s sources for the “Apple told the FBI that it planned to offer users end-to-end encryption when storing their phone data on iCloud” bit were the FBI sources, not the Apple sources, and that it is not accurate.

It simply is not in Apple’s nature to tell anyone outside the company about any of its future product plans. I’m not sure how I could make that more clear. It is not in Apple’s DNA to ask permission for anything. (Cf. the theory that a company’s culture is permanently shaped by the personality of its founders.)

Encrypting iCloud backups would be perfectly legal. There would be no legal requirement for Apple to brief the FBI ahead of time. Nor would there be any reason to brief the FBI ahead of time just to get the FBI’s opinion on the idea. We all know what the FBI thinks about strong encryption…

…Surely there are hundreds, maybe thousands, of people every day who need to access their iCloud backups who do not remember their password. The fact that Apple can help them is a benefit to those users. That’s why I would endorse following the way local iTunes device backups work: make encryption an option, with a clear warning that if you lose your backup password, no one, including Apple, will be able to restore your data. I would be surprised if Apple’s plan for encrypted iCloud backups were not exactly that.</p>
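
Gruber's "no one, including Apple, can restore your data" warning follows directly from how password-based encryption works: if the key is derived only from the user's password, the provider never holds it. A minimal sketch of the principle, assuming nothing about Apple's actual implementation:

```python
import hashlib
import secrets

def derive_backup_key(password: str, salt: bytes) -> bytes:
    """Derive an encryption key from the user's password alone.

    The server can store the salt and the encrypted backup, but without
    the password the key cannot be reconstructed — by anyone."""
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)

salt = secrets.token_bytes(16)
key = derive_backup_key("correct horse battery staple", salt)
# Same password + salt -> same key; forget the password and the
# backup is permanently opaque, which is the trade-off Gruber describes.
```

That one-way property is exactly the "clear warning" scenario: convenience of recovery and immunity from subpoena can't both hold.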


Gruber has been mulling over this, and <a href="https://daringfireball.net/linked/2020/01/21/android-encrypted-backups">points out</a> that Google offers (optional?) encryption of backups of Android phones. And also that Tim Cook <a href="https://daringfireball.net/linked/2020/01/21/cook-der-spiegel">hinted in October 2018</a> that iCloud might move to encrypted backups.

Save people from their own mistakes, or save people from the FBI? It's quite the balance. Ironic too that Google's backups are the encrypted ones - but we don't hear the FBI gnashing its teeth over those.
apple  android  encryption  privacy  backups 
25 days ago by charlesarthur
The secretive company that might end privacy as we know it • The New York Times
Kashmir Hill:
<p>Until now, technology that readily identifies everyone based on his or her face has been taboo because of its radical erosion of privacy. Tech companies <a href="https://www.cnet.com/news/facebook-built-a-facial-recognition-app-for-employees/">capable of releasing such a tool</a> have refrained from doing so; in 2011, Google’s chairman at the time said it was the one technology the company had held back because it could be used “in a very bad way.” Some large cities, including San Francisco, have barred police from using facial recognition technology.

But without public scrutiny, more than 600 law enforcement agencies have started using Clearview in the past year, according to the company, which declined to provide a list. The computer code underlying its app, analyzed by The New York Times, includes programming language to pair it with augmented-reality glasses; users would potentially be able to identify every person they saw. The tool could identify activists at a protest or an attractive stranger on the subway, revealing not just their names but where they lived, what they did and whom they knew.

And it’s not just law enforcement: Clearview has also licensed the app to at least a handful of companies for security purposes.

“The weaponization possibilities of this are endless,” said Eric Goldman, co-director of the High Tech Law Institute at Santa Clara University. “Imagine a rogue law enforcement officer who wants to stalk potential romantic partners, or a foreign government using this to dig up secrets about people to blackmail them or throw them in jail.”

…“With Clearview, you can use photos that aren’t perfect,” Sergeant Ferrara said. “A person can be wearing a hat or glasses, or it can be a profile shot or partial view of their face.”

He uploaded his own photo to the system, and it brought up his Venmo page. He ran photos from old, dead-end cases and identified more than 30 suspects. In September, the Gainesville Police Department paid $10,000 for an annual Clearview license.</p>


The dam has broken, the toothpaste is out of the tube. Essentially, it's a viral hit - for police forces. But if it gets into citizens' hands, things will really get wild.
privacy  facialrecognition 
29 days ago by charlesarthur
Now stores must tell you how they're tracking your every move • WIRED
Tom Simonite:
<p>To anyone with eyes in their kneecaps, the notice outside gadget retailer B8ta’s glossy store next to San Francisco’s new NBA arena is obvious. “We care about your privacy,” the small plaque proclaims, offering a web address and QR code.

Anyone curious and limber enough to bend down and follow these pointers is taken to the retailer’s online privacy policy, which discloses that stepping inside the store puts you in range of technology that automatically collects personal information. That includes “smartphone detectors” and Wi-Fi routers that note the location and unique identifiers of your phone, and cameras equipped with software that estimates your age and gender.

B8ta added the signage to its six California stores and expanded its online privacy policy late last year as it prepared to comply with a new state law that took effect this month called the California Consumer Privacy Act. The law requires businesses to disclose what personal information they collect from consumers at or before the time it is collected. It gives state residents the right to request data collected about them be deleted and to forbid a business from selling it.

CCPA’s most visible effect has been a plague of website popups on California residents. But the law also applies to offline data collection.</p>


The annoyance is felt directly, and lawmakers get the blame - because the surveillance is silent, but pervasive.
privacy  data 
4 weeks ago by charlesarthur
New study: the advertising industry is systematically breaking the law • Forbrukerrådet
<p>The online advertising industry is behind comprehensive illegal collection and indiscriminate use of personal data, research from the Norwegian Consumer Council shows.

Based on the findings, more than 20 consumer and civil society organisations in Europe and from different parts of the world are urging their authorities to investigate the practices of the online advertising industry.

The <a href="https://www.forbrukerradet.no/undersokelse/no-undersokelsekategori/report-out-of-control/">report uncovers how</a> every time we use apps, hundreds of shadowy entities are receiving personal data about our interests, habits, and behaviour. This information is used to profile consumers, which can be used for targeted advertising, but may also lead to discrimination, manipulation and exploitation.

“These practices are out of control and in breach of European data protection legislation. The extent of tracking makes it impossible for us to make informed choices about how our personal data is collected, shared and used,” says Finn Myrstad, director of digital policy in the Norwegian Consumer Council.

The Norwegian Consumer Council is now filing formal complaints against Grindr, a dating app for gay, bi, trans, and queer people, and the companies that were receiving personal data through the app: Twitter’s MoPub, AT&T’s AppNexus, OpenX, AdColony and Smaato. The complaints are directed to the Norwegian Data Protection Authority for breaches of the General Data Protection Regulation.</p>


Could be some fun fines if the complaints are upheld.
advertising  adtech  privacy  norway 
4 weeks ago by charlesarthur
Special sunglasses, license-plate dresses, Juggalo face paint: how to be anonymous in the age of surveillance • The Seattle Times
Melissa Hellmann:
<p>Daniel Castro, the vice president of nonprofit think tank Information Technology and Innovation Foundation, believes the error rates could be reduced by comparing images to a wider range of databases that are more diverse.

Facial recognition systems have proved effective in pursuing criminal investigation leads, he said, and are more accurate than humans at verifying people’s identities at border crossings. The development of policies and practices around the retention and usage of data could avoid government misuse, he said.

“The general use of this technology in the United States is very reasonable,” said Castro. “They’re being undertaken by police agencies that are trying to balance communities’ public safety interests with individual privacy.”

Still, in Doctorow’s eyes, the glasses serve as a conversation starter about the perils of granting governments and companies unbridled access to our personal data.

The motivation to seek out antidotes to an over-powerful force has political and symbolic significance for Doctorow, an L.A.-based science-fiction author and privacy advocate. His father’s family fled the Soviet Union, which used surveillance to control the masses.

“We are entirely too sanguine about the idea that surveillance technologies will be built by people we agree with for goals we are happy to support,” he said. “For this technology to be developed and for there to be no countermeasures is a road map to tyranny.”</p>
privacy  surveillance  technology  anonymity 
4 weeks ago by charlesarthur
Apple's new privacy features have further rattled the location-based ad market • Digiday
Seb Joseph:
<p>Right now opt-in rates to share [location] data with apps when they’re not in use are often below 50%, said Benoit Grouchko, who runs the ad tech business Teemo that creates software for apps to collect location data. Three years ago those opt-in rates were closer to 100%, he said. Higher opt-in rates prevailed when people weren’t aware that they even had a choice. Once installed on a phone, many apps would automatically start sharing a person’s location data.

Apple’s latest privacy protection move, however, is making people more aware that they do have a choice about which data is shared. Seven in 10 of the iPhone users tracked by location-verification business Location Sciences downloaded iOS 13 in the six weeks after it first became available, and 80% of those users stopped all background tracking across their devices.

“People have decided to stop their phones’ sharing location data at a universal level,” said Jason Smith, chief business officer at Location Sciences.

All the background location data that previously had been made available for targeted advertising is lost to marketers when people decide they don’t want their apps to share it with other companies.

“This also impacts the ability to tie users that research online and purchase in store or driving, and measuring footfall for clients becomes far more opaque,” said Paul Kasamias, managing partner at Publicis Media agency Starcom. “The drop in spend is also likely to come via small- to medium-sized advertisers, where cost efficiency is paramount and there is a physical footprint, as targeting the right user at the right time will become more difficult.”

Other media buyers say they are starting to feel the ripple effects of Apple’s move when they work with certain ad tech vendors.

“We have seen a drop in sales pitches from providers on location-data solutions, and there is a rise in ensuring that the data-exchange piece is addressed transparently up front as part of bigger deals,” said Sargi Mann, evp of digital strategy at Havas Media.</p>


"Once installed, many apps would automatically start sharing." Essentially we had cars without seatbelts, and the hospitals recommended not using them.
apple  location  privacy 
4 weeks ago by charlesarthur
Ring fired employees for watching customer videos • VICE
Joseph Cox:
<p>Amazon-owned home security camera company Ring has fired employees for improperly accessing Ring users' video data, according to a <a href="https://www.documentcloud.org/documents/6603161-Ring-Response-Letter.html">letter</a> the company wrote to Senators and obtained by Motherboard.

The news highlights a risk across many different tech companies: employees may abuse access granted as part of their jobs to look at customer data or information. In Ring's case this data can be particularly sensitive though, as customers often put the cameras inside their home.

"We are aware of incidents discussed below where employees violated our policies," the letter from Ring, dated January 6, reads. "Over the last four years, Ring has received four complaints or inquiries regarding a team member's access to Ring video data," it continues. Ring explains that although each of these people were authorized to view video data, their attempted access went beyond what they needed to access for their job.

"In each instance, once Ring was made aware of the alleged conduct, Ring promptly investigated the incident, and after determining that the individual violated company policy, terminated the individual," the letter adds.</p>


"Once Ring was made aware" is suitably vague. Someone told on the staff? And there's still the problem that it uses a simple email/password combination to log in to something intentionally accessible across the whole internet.
privacy  amazon  ring 
5 weeks ago by charlesarthur
At CES, Apple, Facebook and Amazon are preaching privacy. Don’t believe the hype • The Washington Post
Geoffrey Fowler:
<p>It’s a big deal that techies are even talking about privacy; CES has long been the epicenter of cheerleading for connecting everything to the Internet. But this isn’t the solution we need. Call it privacy-washing: when tech companies market control and transparency over data but continue gobbling it up.

Apple may, in fact, be one of the lesser offenders. Facebook’s privacy chief Erin Egan was also on that CES panel and said, with a straight face, “I think privacy is protected today for people on Facebook.” A few months ago, the social networking giant agreed to pay a $5 billion fine to the Federal Trade Commission for privacy violations.

As part of its privacy-champion marketing, Facebook introduced in time for CES a new version of its “privacy checkup” page, which simplifies some of its many privacy knobs and controls but doesn’t give us new powers to stop the social network from surveilling us.

Elsewhere at CES, Google pitched its always-listening voice Assistant as designed for privacy because you can now tell it, “Hey, Google, that wasn’t for you,” when you notice it randomly recording your family’s intimate conversations. Cool, thanks.

And Amazon’s Ring video doorbell company introduced a privacy and security dashboard that also doesn’t change most of its (insufficient) default privacy and security settings. (Amazon chief executive Jeff Bezos owns The Washington Post, but I review all tech with the same critical eye.)

Fortunately, one other panelist at Tuesday’s CES privacy panel — FTC Commissioner Rebecca Slaughter — was there for a reality check. Shortly after Facebook’s Egan made her pronouncement, Slaughter said: “I don’t want to talk about specific services or products, but as a general matter, no, I don’t think privacy is generally protected.” (Slaughter began her remarks by clarifying she was speaking only for herself and not the FTC.)</p>
privacy  tech 
5 weeks ago by charlesarthur
It seemed like a popular chat app. It’s secretly a spy tool • The New York Times
Mark Mazzetti, Nicole Perlroth and Ronen Bergman:
<p>It is billed as an easy and secure way to chat by video or text message with friends and family, even in a country that has restricted popular messaging services like WhatsApp and Skype.

But the service, ToTok, is actually a spying tool, according to American officials familiar with a classified intelligence assessment and a New York Times investigation into the app and its developers. It is used by the government of the United Arab Emirates to try to track every conversation, movement, relationship, appointment, sound and image of those who install it on their phones.

ToTok, introduced only months ago, was downloaded millions of times from the Apple and Google app stores by users throughout the Middle East, Europe, Asia, Africa and North America. While the majority of its users are in the Emirates, ToTok surged to become one of the most downloaded social apps in the United States last week, according to app rankings and App Annie, a research firm.

ToTok amounts to the latest escalation in a digital arms race among wealthy authoritarian governments, interviews with current and former American foreign officials and a forensic investigation showed. The governments are pursuing more effective and convenient methods to spy on foreign adversaries, criminal and terrorist networks, journalists and critics — efforts that have ensnared people all over the world in their surveillance nets.</p>


Apple and Google both banned ToTok from their app stores - and then Google <a href="https://www.theverge.com/platform/amp/2020/1/6/21051977/to-tok-app-google-play-store-uae-spying-privacy">reinstated it on Monday</a>. ToTok meanwhile has been <a href="https://twitter.com/KimZetter/status/1213591797663879168">trying to encourage "influencers"</a> to say nice things about it.
privacy  surveillance  adtech  emirates 
5 weeks ago by charlesarthur
Twelve million phones, one dataset, zero privacy • The New York Times
Stuart Thompson and Charlie Warzel:
<p>Every minute of every day, everywhere on the planet, dozens of companies — largely unregulated, little scrutinized — are logging the movements of tens of millions of people with mobile phones and storing the information in gigantic data files. The Times Privacy Project obtained one such file, by far the largest and most sensitive ever to be reviewed by journalists. It holds more than 50 billion location pings from the phones of more than 12 million Americans as they moved through several major cities, including Washington, New York, San Francisco and Los Angeles.

Each piece of information in this file represents the precise location of a single smartphone over a period of several months in 2016 and 2017. The data was provided to Times Opinion by sources who asked to remain anonymous because they were not authorized to share it and could face severe penalties for doing so. The sources of the information said they had grown alarmed about how it might be abused and urgently wanted to inform the public and lawmakers.

After spending months sifting through the data, tracking the movements of people across the country and speaking with dozens of data companies, technologists, lawyers and academics who study this field, we feel the same sense of alarm. In the cities that the data file covers, it tracks people from nearly every neighborhood and block, whether they live in mobile homes in Alexandria, Va., or luxury towers in Manhattan.

…or giant tech company, nor did it come from a governmental surveillance operation. It originated from a location data company, one of dozens quietly collecting precise movements using software slipped onto mobile phone apps. You’ve probably never heard of most of the companies — and yet to anyone who has access to this data, your life is an open book.

… Our privacy is only as secure as the least secure app on our device.</p>


Which isn't very. Is America ever going to discover privacy?
privacy  surveillance  location  america 
8 weeks ago by charlesarthur
A data leak exposed the personal information of over 3,000 Ring users • Buzzfeed News
Caroline Haskins:
<p>The log-in credentials for 3,672 Ring camera owners were compromised this week, exposing log-in emails, passwords, time zones, and the names people give to specific Ring cameras, which are often the same as camera locations, such as “bedroom” or “front door.”

Using the log-in email and password, an intruder could access a Ring customer’s home address, telephone number, and payment information, including the kind of card they have, and its last four digits and security code. An intruder could also access live camera footage from all active Ring cameras associated with an account, as well as a 30- to 60-day video history, depending on the user’s cloud storage plan.

We don’t know how this tranche of customer information was leaked. Ring denies any claims that the data was compromised as a part of a breach of Ring’s systems. A Ring spokesperson declined to tell BuzzFeed News when it became aware of the leak or whether it affected a third party that Ring uses to provide its services.

“Ring has not had a data breach. Our security team has investigated these incidents and we have no evidence of an unauthorized intrusion or compromise of Ring’s systems or network,” the spokesperson said. “It is not uncommon for bad actors to harvest data from other companies' data breaches and create lists like this so that other bad actors can attempt to gain access to other services.”

It is not clear what “other companies' data breaches” the spokesperson was referring to.</p>


Come on, there are tons of them - and if you use the same password as on Ring (lots of people do; password overload is everywhere) then you're vulnerable. Side note: Wirecutter, which recommends stuff, has <a href="https://twitter.com/wirecutter/status/1207730874860609536">suspended its recommendation</a> of Ring.
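The password-reuse point is also something services can defend against: a sign-up form can check a candidate password against known breach corpora before accepting it. A minimal sketch, using a hypothetical in-memory set of SHA-1 hashes of leaked passwords (the real-world version of this idea, such as Have I Been Pwned's Pwned Passwords service, queries by hash prefix so the password itself never leaves the client):

```python
import hashlib

# Hypothetical mini-corpus of breached passwords, stored only as SHA-1
# hashes (never as plaintext). In practice this would be a large dataset
# or an external lookup service, not a hard-coded set.
BREACHED_HASHES = {
    hashlib.sha1(p.encode("utf-8")).hexdigest()
    for p in ["password", "123456", "letmein"]
}

def is_breached(password: str) -> bool:
    """Return True if the password's SHA-1 hash appears in the breach corpus."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest()
    return digest in BREACHED_HASHES

print(is_breached("letmein"))                        # True: a known leaked password
print(is_breached("correct horse battery staple"))   # False: not in this corpus
```

A service that rejected sign-ups like these would blunt exactly the credential-stuffing attack Ring's spokesperson describes.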
amazon  privacy  hacking 
8 weeks ago by charlesarthur
India proposes new rules to access its citizens’ data – TechCrunch
Manish Singh:
<p>India has proposed groundbreaking rules, akin to Europe’s GDPR, that would require technology companies to garner consent from citizens before collecting and processing their personal data.

But at the same time, the new rules also state that companies would have to hand over “non-personal” data of their users to the government, and New Delhi would also hold the power to collect any data of its citizens without consent to serve sovereignty and larger public interest.

The new rules, proposed in the nation's first major data protection law dubbed “Personal Data Protection Bill 2019,” a copy of which leaked on Tuesday, would permit New Delhi to “exempt any agency of government from application of Act in the interest of sovereignty and integrity of India, the security of the state, friendly relations with foreign states, public order.”

If the bill passes — and it is expected to be discussed in the Parliament in the coming weeks — select controversial laws drafted more than a decade ago would remain unchanged. The bill might also change how global technology companies that have invested billions of dollars in India, thanks in part to the lax laws, see the nation of more than 600 million internet users.</p>


Give with one hand, take with the other. India's government shows worrying signs of really overt authoritarianism.
privacy  data  india  gdpr 
9 weeks ago by charlesarthur
I ditched Google for DuckDuckGo. Here's why you should too • WIRED
James Temperton:
<p>It all started with a realization: Most of the things I search for are easy to find. Did I really need the all-seeing, all-knowing algorithms of Google to assist me? Probably not. So I made a simple change: I opened up Firefox on my Android phone and switched Google search for DuckDuckGo. As a result, I’ve had a fairly tedious but important revelation: I search for really obvious stuff. Google’s own data backs this up. Its annual round-up of the most searched-for terms is basically a list of names and events: World Cup, Avicii, Mac Miller, Stan Lee, Black Panther, Megan Markle. The list goes on. And I don’t need to buy into Google’s leviathan network of privacy-invading trackers to find out what Black Panther is and when I can go and see it at my local cinema.

While I continue to use Google at work (more out of necessity, as my employer runs on G-Suite), on my phone I’m all about DuckDuckGo. I had, based on zero evidence, convinced myself that finding things on the internet was hard and, inevitably, involved a fair amount of tracking. After two years of not being tracked and targeted, I have slowly come to realize that this is nonsense.

DuckDuckGo works in broadly the same way as any other search engine, Google included. It combines data from hundreds of sources, including Wolfram Alpha, Wikipedia and Bing, with its own web crawler to surface the most relevant results. Google does exactly the same, albeit on a somewhat larger scale. The key difference: DuckDuckGo does not store IP addresses or user information.

Billed as the search engine that doesn’t track you, DuckDuckGo processes around 1.5 billion searches every month. Google, for contrast, processes around 3.5 billion searches per day. It’s hardly a fair fight, but DuckDuckGo is growing. In 2012 it averaged just 45 million searches per month. </p>


You can see <a href="https://duckduckgo.com/traffic">the growth in DuckDuckGo's traffic directly</a>. Though I don't get why he says he continues to use Google at work "more out of necessity". If search isn't special on his phone, why on his desktop? Also: DDG's traffic graph seems to be fractal - the same at every magnification.
google  search  privacy  duckduckgo 
10 weeks ago by charlesarthur
Most Americans think they’re being constantly tracked—and that there’s nothing they can do • MIT Technology Review
Angela Chen:
<p>More than 60% of Americans think it’s impossible to go through daily life without being tracked by companies or the government, according to a new Pew Research study. The results provide important context on the long-running question of how much Americans really care about privacy. 

It’s not just that Americans (correctly) think companies are collecting their data. They don’t like it. About 69% of Americans are skeptical that companies will use their private information in a way they’re comfortable with, while 79% don’t believe that companies will come clean if they misuse the information. 

When it comes to who they trust, there are differences by race. About 73% of black Americans, for instance, are at least a little worried about what law enforcement knows about them, compared with 56% of white Americans. But among all respondents, more than 80% were concerned about what social-media sites and advertisers might know. 

Despite these concerns, more than 80% of Americans feel they have no control over how their information is collected. 

Very few people read privacy policies, the survey shows. That’s understandable. A review of 150 policies from major websites found that the average one takes about 18 minutes to read and requires at least a college-level reading ability. Few people have time for that—and even if they did, most people are forced to agree anyway if they really need the service.</p>
surveillance  privacy 
november 2019 by charlesarthur
Undercover reporter reveals life in a Polish troll farm • The Guardian
Christian Davies:
<p>It is as common an occurrence on Polish Twitter as you are likely to get: a pair of conservative activists pouring scorn on the country’s divided liberal opposition.

“I burst out laughing!” writes Girl from Żoliborz, a self-described “traditionalist” commenting on a newspaper story about a former campaign adviser to Barack Obama and Emmanuel Macron coming to Warsaw to address a group of liberal activists.

“The opposition has nothing to offer. That’s why they use nonsense to pull the wool over people’s eyes,” replies Magda Rostocka, whose profile tells her almost 4,400 followers she is “left-handed with her heart on the right”.

In reality, neither woman existed. Both accounts were run by the paid employees of a small marketing company based in the city of Wrocław in southwest Poland.

But what the employee pretending to be Magda Rostocka did not know is that the colleague pretending to be Girl from Żoliborz was an undercover reporter who had infiltrated the company, giving rare insight into the means by which fake social media accounts are being used by private firms to influence unsuspecting voters and consumers…

…The accounts produced both leftwing and rightwing content, attracting attention, credibility and support from other social media users, who could then be rallied in support of the company’s clients.

“The aim is to build credibility with people from both sides of the political divide. Once you have won someone’s trust by reflecting their own views back at them, you are in a position to influence them,” said Wojciech Cieśla, who oversaw the investigation in collaboration with Investigate Europe, a consortium of European investigative reporters.

“Reading these communications, you can see how the leftwing and rightwing accounts would receive their daily instructions, how they would be marshalled and directed like two flanks of the same army on a battlefield.”</p>


Which sort of explains why Twitter doesn't need to turn down political ads: they're already there, earning it money from fake accounts run by real people which attract other ad money. But here's a neat twist to this story: "A majority of Cat@Net’s employees are understood to be disabled, allowing the company to derive substantial public subsidies from Poland’s National Disabled Rehabilitation Fund."
socialwarming  poland  privacy  twitter  fake  troll 
november 2019 by charlesarthur
Google chief: I'd disclose smart speakers before guests enter my home • BBC News
Leo Kelion comes up with a question Rick Osterloh hadn't expected:
<p>It's an admission that appears to have caught Google's devices chief by surprise.

After being challenged as to whether homeowners should tell guests smart devices - such as a Google Nest speaker or Amazon Echo display - are in use before they enter the building, he concludes that the answer is indeed yes.

"Gosh, I haven't thought about this before in quite this way," Rick Osterloh begins.

"It's quite important for all these technologies to think about all users... we have to consider all stakeholders that might be in proximity."

And then he commits.

"Does the owner of a home need to disclose to a guest? I would and do when someone enters into my home, and it's probably something that the products themselves should try to indicate."</p>
smartspeaker  privacy  google 
october 2019 by charlesarthur
Cheap Android smartphones have a disturbing secret • Fast Company
Michael Grothaus:
<p>Seventeen dollars for a smartphone sounds like a great deal, especially for people living in poverty who can barely afford rent.

But there’s a problem: low-cost smartphones are privacy nightmares.

According to an analysis by the advocacy group Privacy International, a $17 Android smartphone called MYA2 MyPhone, which was launched in December 2017, has a host of privacy problems that make its owner vulnerable to hackers and to data-hungry tech companies.

First, it comes with an outdated version of Android with known security vulnerabilities that can’t be updated or patched. The MYA2 also has apps that can’t be updated or deleted, and those apps contain multiple security and privacy flaws. One of those pre-installed apps that can’t be removed, Facebook Lite, gets default permission to track everywhere you go, upload all your contacts, and read your phone’s calendar. The fact that Facebook Lite can’t be removed is especially worrying because the app suffered a major privacy snafu earlier this year when hundreds of millions of Facebook Lite users had their passwords exposed. (Facebook did not respond to request for comment.)

Philippines-based MyPhone said the specs of the MYA2 limited it to shipping the phone with Android 6.0, and since then it says it has “lost access and support to update the apps we have pre-installed” with the device. Given that the MYA2 phone, like many low-cost Android smartphones, runs outdated versions of the Android OS and can’t be updated due to their hardware limitations, users of such phones are limited to relatively light privacy protections compared to what modern OSes, like Android 10, offer today.

The MYA2 is just one example of how cheap smartphones leak personal information, provide few if any privacy protections, and are incredibly easy to hack compared to their more expensive counterparts.</p>
mobile  android  privacy  hacking 
october 2019 by charlesarthur
Looking back at the Snowden revelations • A Few Thoughts on Cryptographic Engineering
Matthew Green (a highly respected cryptographer):
<p>Have things improved?

This is the $250 million question.

Some of the top-level indicators are surprisingly healthy. HTTPS adoption has taken off like a rocket, driven in part by Google’s willingness to use it as a signal for search rankings — and the rise of free Certificate Authorities like LetsEncrypt. It’s possible that these things would have happened eventually without Snowden, but it’s less likely.

End-to-end encrypted messaging has also taken off, largely due to adoption by WhatsApp and a host of relatively new apps. It’s reached the point where law enforcement agencies have begun to freak out, as the slide below illustrates.

<img src="https://matthewdgreen.files.wordpress.com/2019/09/e2e.png" width="100%" />
<em>Slightly dated numbers, source: CSIS (or this article)</em>

Does Snowden deserve credit for this? Maybe not directly, but it’s almost certain that concerns over the surveillance he revealed did play a role. (It’s worth noting that this adoption is not evenly distributed across the globe.)

It’s also worth pointing out that at least in the open source community the quality of our encryption software has improved enormously, largely due to the fact that major companies made well-funded efforts to harden their systems, in part as a result of serious flaws like Heartbleed — and in part as a response to the company’s own concerns about surveillance.

It might very well be that the NSA has lost a significant portion of its capability since Snowden.

The future isn’t American.

I’ve said this before, as have many others: even if you support the NSA’s mission, and believe that the U.S. is doing everything right, it doesn’t matter. Unfortunately, the future of surveillance has very little to do with what happens in Ft. Meade, Maryland. In fact, the world that Snowden brought to our attention isn’t necessarily a world that Americans have much say in.</p>
security  encryption  privacy  government  cryptography  snowden 
september 2019 by charlesarthur
Period tracker apps: Maya and MIA Fem are sharing deeply personal data with Facebook • Buzzfeed News
Megha Rajagopalan:
<p>UK-based advocacy group Privacy International, sharing its <a href="https://www.privacyinternational.org/long-read/3196/no-bodys-business-mine-how-menstruations-apps-are-sharing-your-data">findings</a> exclusively with BuzzFeed News, discovered period-tracking apps including MIA Fem and Maya sent women’s use of contraception, the timings of their monthly periods, symptoms like swelling and cramps, and more, directly to Facebook.

Women use such apps for a range of purposes, from tracking their period cycles to maximizing their chances of conceiving a child. On the Google Play store, Maya, owned by India-based Plackal Tech, has more than 5 million downloads. Period Tracker MIA Fem: Ovulation Calculator, owned by Cyprus-based Mobapp Development Limited, says it has more than 2 million users around the world. They are also available on the App Store.

The data sharing with Facebook happens via Facebook’s Software Development Kit (SDK), which helps app developers incorporate particular features and collect user data so Facebook can show them targeted ads, among other functions. When a user puts personal information into an app, that information may also be sent by the SDK to Facebook.

Asked about the report, Facebook told BuzzFeed News it had gotten in touch with the apps Privacy International identified to discuss possible violations of its terms of service, including sending prohibited types of sensitive information.

Maya informs Facebook whenever you open the app and starts sharing some data with Facebook even before the user agrees to the app’s privacy policy, Privacy International found.</p>
app  privacy  menstruation  facebook 
september 2019 by charlesarthur
Face recognition, bad people and bad data • Benedict Evans
Evans on fine form again:
<p>what exactly is in the training data - in your examples of X and Not-X? Are you sure? What ELSE is in those example sets?

My favourite example of what can go wrong here comes from a project for recognising cancer in photos of skin. The obvious problem is that you might not have an appropriate distribution of samples of skin in different tones. But another problem that can arise is that dermatologists tend to put rulers in the photo of cancer, for scale - so if all the examples of ‘cancer’ have a ruler and all the examples of ‘not-cancer’ do not, that might be a lot more statistically prominent than those small blemishes. You inadvertently built a ruler-recogniser instead of a cancer-recogniser.

The structural thing to understand here is that the system has no understanding of what it’s looking at - it has no concept of skin or cancer or colour or gender or people or even images. It doesn’t know what these things are any more than a washing machine knows what clothes are. It’s just doing a statistical comparison of data sets. So, again - what is your data set? How is it selected? What might be in it that you don’t notice - even if you’re looking? How might different human groups be represented in misleading ways? And what might be in your data that has nothing to do with people and no predictive value, yet affects the result? Are all your ‘healthy’ photos taken under incandescent light and all your ‘unhealthy’ pictures taken under LED light? You might not be able to tell, but the computer will be using that as a signal.</p>


A very astringent look at a lot of the hoopla about machine learning.
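Evans's ruler-recogniser is easy to reproduce on toy data. A minimal sketch (entirely synthetic numbers and a bare-bones NumPy logistic regression, not any real dermatology dataset): when a spurious feature perfectly tracks the label, the model leans on it and largely ignores the genuine but noisy signal.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Synthetic stand-in for the dermatology example: every 'cancer' photo
# contains a ruler, no 'not-cancer' photo does, and the genuine lesion
# signal is weak and noisy.
cancer = rng.integers(0, 2, n)                 # ground-truth labels (0/1)
ruler = cancer.astype(float)                   # confound: perfectly correlated
lesion = 0.6 * cancer + rng.normal(0, 1, n)    # real but noisy signal

X = np.column_stack([ruler, lesion])

# Minimal logistic regression trained by plain gradient descent.
w = np.zeros(2)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.1 * (X.T @ (p - cancer)) / n

# The 'ruler' weight dominates: we built a ruler-recogniser.
print("ruler weight:", w[0], "lesion weight:", w[1])
```

The point is Evans's exactly: the model has no concept of skin or cancer, only of which columns of numbers best separate the two data sets.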
machinelearning  privacy  database 
september 2019 by charlesarthur
Just Delete Me : A directory of direct links to delete your account from web services.
<p>Many companies use dark pattern techniques to make it difficult to find how to delete your account. JustDelete.me aims to be a directory of urls to enable you to easily delete your account from web services.</p>


A service, apparently, from Backgroundchecks.org. Turns out that Facebook is only "medium" difficult to delete yourself from; some services (lookin' at you, Animal Crossing) are "impossible".
internet  privacy  security 
september 2019 by charlesarthur
Ten years on, Foursquare is now checking in to you • NY Mag
James D. Walsh on the "I'm the mayor of..." company's pivot to a business-to-business model, which it made in 2014:
<p>It projected iPhone sales in 2015 based on traffic to Apple stores and, in 2016, the huge drop in Chipotle’s sales figures (thanks to E. coli) two weeks before the burrito-maker announced its quarterly earnings. (It also used its data to show that foot traffic to Trump properties began declining after he announced his presidential campaign, and that traffic to Nike stores increased after the Colin Kaepernick ad.)

Co-founder and executive chairman Dennis Crowley says the human check-ins gave Foursquare engineers and data scientists the ability to verify and adjust location readings from other sources, like GPS, Wi-Fi, and Bluetooth. As it turns out, the goofy badges for Uncle Tony that made Foursquare easy to dismiss as a late-2000s fad were an incredibly powerful tool. “Everyone was laughing at us, ‘Oh, what are you, just people checking in at coffee shops?’” Crowley says. “Yeah, and they checked in billions of times. So we had this corpus of data, an army of people, who every day were like, ‘I’m at Think Coffee.’ ‘I’m at Think Coffee.’ ‘I’m at Think Coffee.’” Because of the “corpus” of data generated by people like Uncle Tony, Foursquare knows when the dimensions of storefronts change and can tell the difference between an office on the eighth floor and one on the ninth floor.

In addition to all of those active check-ins, at some point Foursquare began collecting passive data using a “check-in button you never had to press.” It doesn’t track people 24/7 (in addition to creeping people out, doing so would burn through phones’ batteries), but instead, if users opt-in to allow the company to “always” track their locations, the app will register when someone stops and determine whether that person is at a red light or inside an Urban Outfitters. The Foursquare database now includes 105 million places and 14 billion check-ins. The result, experts say, is a map that is often more reliable and detailed than the ones generated by Google and Facebook.</p>
advertising  privacy  foursquare  location 
september 2019 by charlesarthur
Deconstructing Google’s excuses on tracking protection • Freedom To Tinker
Jonathan Mayer and Arvind Narayanan:
<p>Blocking cookies is bad for privacy. That’s the new disingenuous argument from Google, trying to justify why Chrome is so far behind Safari and Firefox in offering privacy protections. As researchers who have spent over a decade studying web tracking and online advertising, we want to set the record straight.<br />Our high-level points are:

1) Cookie blocking does not undermine web privacy. Google’s claim to the contrary is privacy gaslighting.

2) There is little trustworthy evidence on the comparative value of tracking-based advertising.

3) Google has not devised an innovative way to balance privacy and advertising; it is latching onto prior approaches that it previously disclaimed as impractical.

4) Google is attempting a punt to the web standardization process, which will at best result in years of delay.

What follows is a reproduction of excerpts from yesterday’s announcement, annotated with our comments.</p>


This is quite a takedown of Google's claims that it would really love to do what Safari and Firefox are doing in terms of cookie blocking, but, uh, it's <em>complicated</em>.
apple  google  firefox  browsing  privacy 
september 2019 by charlesarthur
Facebook paid contractors to transcribe user audio files • Bloomberg
Sarah Frier:
<p>Facebook has been paying hundreds of outside contractors to transcribe clips of audio from users of its services, according to people with knowledge of the work.

The work has rattled the contract employees, who are not told where the audio was recorded or how it was obtained - only to transcribe it, said the people, who requested anonymity for fear of losing their jobs. They’re hearing Facebook users’ conversations, sometimes with vulgar content, but do not know why Facebook needs them transcribed, the people said.

Facebook confirmed that it had been transcribing users’ audio and said it will no longer do so, following scrutiny of other companies. “Much like Apple and Google, we paused human review of audio more than a week ago,” the company said Tuesday. The company said the users who were affected chose the option in Facebook’s Messenger app to have their voice chats transcribed. The contractors were checking whether Facebook’s artificial intelligence correctly interpreted the messages, which were anonymized.</p>


But of COURSE Facebook was doing this, same as everyone else. Clearly this was an open secret within the voice assistant industry.
facebook  ai  privacy  voice 
august 2019 by charlesarthur
Operator of email management service Unroll.me settles FTC allegations that it deceived consumers • Federal Trade Commission
<p>An email management company will be required to delete personal information it collected from consumers as part of a settlement with the Federal Trade Commission over allegations that the company deceived some consumers about how it accesses and uses their personal emails.

In a complaint, the FTC alleges that Unrollme Inc. falsely told consumers that it would not “touch” their personal emails, when in fact it was sharing the users’ email receipts (e-receipts) with its parent company, Slice Technologies, Inc.

E-receipts are emails sent to consumers following a completed transaction and can include, among other things, the user’s name, billing and shipping addresses, and information about products or services purchased by the consumer. Slice uses anonymous purchase information from Unrollme users’ e-receipts in the market research analytics products it sells.

Unrollme helps users unsubscribe from unwanted subscription emails and consolidates wanted email subscriptions into one daily email called the Rollup. The service requires users to provide Unrollme with access to their email accounts.

“What companies say about privacy matters to consumers,” said Andrew Smith, Director of the FTC’s Bureau of Consumer Protection. “It is unacceptable for companies to make false statements about whether they collect information from personal emails.”</p>


Pity there isn't a fine too. Unroll.me "closed" to EU customers back in May 2018 because it couldn't comply with GDPR; and had been discovered in early 2017 selling its data to Uber and others. (The <a href="https://www.theverge.com/2017/4/24/15406408/unrollme-uber-data-brokerage-apology-letter">CEO's mea culpa</a> from April 2017, which I linked to here, has mysteriously vanished from the company blog, which is filled instead with <a href="https://blog.unroll.me/page/5/">utter pap</a>, and it doesn't seem to figure in the retrospective. I did some digging on the Wayback Machine: it was removed from the blog some time between mid-July and early August of 2018.)
unroll  ftc  privacy  email 
august 2019 by charlesarthur
Black Hat: GDPR privacy law exploited to reveal personal data • BBC News
Dave Lee:
<p>About one in four companies revealed personal information to a woman's partner, who had made a bogus demand for the data by citing an EU privacy law.
The security expert contacted dozens of UK and US-based firms to test how they would handle a "right of access" request made in someone else's name.

In each case, he asked for all the data that they held on his fiancée…

He declined to identify the organisations that had mishandled the requests, but said they had included:<br />• a UK hotel chain that shared a complete record of his partner's overnight stays<br />• two UK rail companies that provided records of all the journeys she had taken with them over several years<br />• a US-based educational company that handed over her high school grades, mother's maiden name and the results of a criminal background check survey

[University of Oxford-based researcher James] Pavur has, however, named some of the companies that he said had performed well. He said they included:<br />• the supermarket Tesco, which had demanded a photo ID<br />• the domestic retail chain Bed Bath and Beyond, which had insisted on a telephone interview<br />• American Airlines, which had spotted that he had uploaded a blank image to the passport field of its online form.</p>


Social engineering: still one of the best kinds of hacking.
dataprotection  privacy  gdpr  hacking 
august 2019 by charlesarthur
South Wales police to use facial recognition apps on phones • The Guardian
Ian Sample:
<p>Liberty, the campaign group, called the announcement “chilling”, adding that it was “shameful” that South Wales police had chosen to press ahead with handheld facial recognition systems even as it faced a court challenge over the technology.

In May, Liberty brought a legal case against the force for its recent use of automated facial recognition on city streets, at music festivals, and at football and rugby matches.

South Wales police said the technology would secure quicker arrests and enable officers to resolve cases of mistaken identity without the need for a trip to a station or custody suite. The officers testing the app would be under “careful supervision”, it said in a statement.

“This new app means that, with a single photo, officers can easily and quickly answer the question of ‘are you really the person we are looking for?’,” said deputy chief constable Richard Lewis. “When dealing with a person of interest during their patrols in our communities officers will be able to access instant, actionable data, allowing them to identify whether the person stopped is, or is not, the person they need to speak to, without having to return to a police station.”</p>


There is next to zero information about which company built this app, what its accuracy is, and a whole lot more. Is it basically an identikit system on a phone?
privacy  apps  facialrecognition  police 
august 2019 by charlesarthur
Apple halts practice of contractors listening in to users on Siri • The Guardian
Alex Hern:
<p>Contractors working for Apple in Ireland said they were not told about the decision when they arrived for work on Friday morning, but were sent home for the weekend after being told the system they used for the grading “was not working” globally. Only managers were asked to stay on site, the contractors said, adding that they had not been told what the suspension means for their future employment.

The suspension was prompted by a report in the Guardian last week that revealed the company’s contractors “regularly” hear confidential and private information while carrying out the grading process, including in-progress drug deals, medical details and people having sex.

The bulk of that confidential information was recorded through accidental triggers of the Siri digital assistant, a whistleblower told the Guardian. The Apple Watch was particularly susceptible to such accidental triggers, they said. “The regularity of accidental triggers on the watch is incredibly high … The watch can record some snippets that will be 30 seconds – not that long, but you can gather a good idea of what’s going on.</p>

One week from the original report to this change. That's impressive - more so given that Bloomberg had a weaker form of this report much earlier this year but didn't get anything like the detail. The power of newsprint: it makes a difference having something you can put on a chief executive's desk (even if you have to fly it out there).

Apple has indicated that it's eventually going to restart this, but on an opt-in basis.
apple  privacy  data  siri 
august 2019 by charlesarthur
Apple contractors 'regularly hear confidential details' on Siri recordings • The Guardian
Alex Hern:
<p>Apple contractors regularly hear confidential medical information, drug deals, and recordings of couples having sex, as part of their job providing quality control, or “grading”, the company’s Siri voice assistant, the Guardian has learned.

Although Apple does not explicitly disclose it in its consumer-facing privacy documentation, a small proportion of Siri recordings are passed on to contractors working for the company around the world. They are tasked with grading the responses on a variety of factors, including whether the activation of the voice assistant was deliberate or accidental, whether the query was something Siri could be expected to help with and whether Siri’s response was appropriate.

Apple says the data “is used to help Siri and dictation … understand you better and recognise what you say”.

But the company does not explicitly state that that work is undertaken by humans who listen to the pseudonymised recordings.</p>


So there's the trifecta: Amazon, Google and Apple all send some audio to humans to listen. In its way, rather like the revelation that <a href="https://www.theguardian.com/technology/2011/apr/20/iphone-tracking-prompts-privacy-fears">your smartphone maps where you go and stores it</a>, which we didn't intuitively know in 2011 - but turns out everyone did that too.
apple  siri  privacy 
july 2019 by charlesarthur
My browser, the spy: how extensions slurped up browsing histories from 4m users • Ars Technica
Dan Goodin:
<p>DataSpii begins with browser extensions—available mostly for Chrome but in more limited cases for Firefox as well—that, by Google's account, had as many as 4.1 million users. These extensions collected the URLs, webpage titles, and in some cases the embedded hyperlinks of every page that the browser user visited. Most of these collected Web histories were then published by a fee-based service called Nacho Analytics, which markets itself as “God mode for the Internet” and uses the tag line “See Anyone’s Analytics Account.”

Web histories may not sound especially sensitive, but a subset of the published links led to pages that are not protected by passwords—but only by a hard-to-guess sequence of characters (called tokens) included in the URL. Thus, the published links could allow viewers to access the content at these pages. (Security practitioners have long discouraged the publishing of sensitive information on pages that aren't password protected, but the practice remains widespread.)

According to the researcher who <a href="https://securitywithsam.com/">discovered and extensively documented the problem</a>, this non-stop flow of sensitive data over the past seven months has resulted in the publication of links to:

• Home and business surveillance videos hosted on Nest and other security services<br />• Tax returns, billing invoices, business documents, and presentation slides posted to, or hosted on, Microsoft OneDrive, Intuit.com, and other online services<br />• Vehicle identification numbers of recently bought automobiles, along with the names and addresses of the buyers<br />• Patient names, the doctors they visited, and other details listed by DrChrono, a patient care cloud platform that contracts with medical services<br />• Travel itineraries hosted on Priceline, Booking.com, and airline websites<br />• Facebook Messenger attachments and Facebook photos, even when the photos were set to be private.
</p>


Nacho Analytics turns out to have been grabbing data from tons of extensions, listed in the story.
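For anyone wondering how "unlisted" pages end up exposed: the pattern the article describes is a link whose only protection is an unguessable token in the URL itself. A minimal sketch (hypothetical names; not any particular service's implementation) - the moment anything, such as a data-slurping extension, records the full URL, the "secret" is gone:

```python
import secrets

def make_share_link(base_url: str) -> str:
    """Build an 'unlisted' link protected only by an unguessable token.

    Anyone who learns the full URL - for instance via a browser extension
    that reports every visited page - gains access, because the token in
    the path is the only secret. There is no password check behind it.
    """
    token = secrets.token_urlsafe(32)  # 32 random bytes -> ~256 bits of entropy
    return f"{base_url}/share/{token}"

link = make_share_link("https://example.com")
```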
browser  privacy 
july 2019 by charlesarthur
FaceApp responds to privacy concerns • TechCrunch
Natasha Lomas:
<p>The tl;dr here is that concerns had been raised that FaceApp, a Russian startup, uploads users’ photos to the cloud — without making it clear to them that processing is not going on locally on their device.

Another issue raised by FaceApp users was that the iOS app appears to be overriding settings if a user had denied access to their camera roll, after people reported they could still select and upload a photo — i.e. despite the app not having permission to access their photos.

As we reported earlier, the latter is actually allowed behavior in iOS — which gives users the power to choose to block an app from full camera roll access but select individual photos to upload if they so wish.

This isn’t a conspiracy, though Apple could probably come up with a better way of describing the permission, as we suggested earlier.

On the wider matter of cloud processing of what is, after all, facial data, FaceApp confirms that most of the processing needed to power its app’s beautifying/gender-bending/age-accelerating/-defying effects is done in the cloud.

Though it claims it only uploads photos users have specifically selected for editing. Security tests have also not found evidence the app uploads a user’s entire camera roll.</p>


The app <a href="https://techcrunch.com/2017/02/08/faceapp-uses-neural-networks-for-photorealistic-selfie-tweaks/">first surfaced two years ago</a>, so that's a pretty tenacious startup.
faceapp  russia  privacy 
july 2019 by charlesarthur
The US, China, and case 311/18 on Standard Contractual Clauses • European Law Blog
Peter Swire:
<p>In the aftermath of the 2015 case [on Facebook transferring data to the US, which found against Facebook and invalidated those transfers], most companies that transfer data from the EU were left to rely on contract standards promulgated by the European Commission, called Standard Contractual Clauses (SCC).  The SCCs set strict requirements for handling personal data by the company that transfers the data.

The legality of SCCs is now before the CJEU, with a similar challenge to Privacy Shield awaiting the outcome of the first case.

A CJEU decision that invalidates SCCs would result in the prohibition of most transfers of personal data from the EU to the US. The case primarily concerns the quality of legal safeguards in the United States for government surveillance, especially by the NSA. (Note – I was selected to provide independent expert testimony on US law by Facebook; under Irish law, I was prohibited from contact with Facebook while serving as an expert, and I have played no further role in the litigation.)

A decision invalidating SCCs, however, would pose a terrible dilemma to EU courts and decisionmakers.

At a minimum, the CJEU might “merely” prohibit data flows to the US due to a finding of lack of sufficient safeguards, notably an insufficient remedy for an EU data subject who makes a subject access request to the NSA. The EU on this approach would continue to authorize the transfer of personal data to countries not directly covered by the Court decision, such as, for example, China.  This approach would be completely unjustified: it would prohibit transfers of data to the US, which has numerous legal safeguards characteristic of a state under the rule of law, while allowing such transfers toward China, where the protection of personal data vis-à-vis the government is essentially non-existent.</p>
data  privacy  europe  china 
july 2019 by charlesarthur
German privacy watchdog: Microsoft’s Office 365 cannot be used in public schools • WinBuzzer
Luke Jones:
<p>A data authority in the German State of Hesse has warned Microsoft’s Office 365 cannot be used in schools. Michael Ronellenfitsch, Hesse’s data protection commissioner, says the standard Office 365 configuration creates privacy issues.

He warned this week that data stored in the cloud by the productivity suite could be accessed in the United States. Specifically, personal information from teachers and students would be in the cloud. Ronellenfitsch says even if the data was held in centers in Europe, it is still “exposed to possible access by US authorities”.

The commissioner says public institutions in Hesse and across Germany “have a special responsibility with regard to the permissibility and traceability of the processing of personal data."…

…It is worth noting that Ronellenfitsch previously endorsed the use of Office 365 in schools. Back in 2017, he said schools can use the suite under certain conditions that match Germany’s data protection compliancy laws. At the time, Microsoft was partnered with Deutsche Telekom and offering the “Germany Cloud” initiative that is now deprecated.</p>


This isn't an opportunity for Google or Apple: they don't meet the authority's criteria on privacy and data either.
privacy  data  microsoft 
july 2019 by charlesarthur
Yep, human workers are listening to recordings from Google Assistant, too • The Verge
<p>In the story by VRT NWS, which focuses on Dutch and Flemish speaking Google Assistant users, the broadcaster reviewed a thousand or so recordings, 153 of which had been captured accidentally. A contractor told the publication that he transcribes around 1,000 audio clips from Google Assistant every week. In one of the clips he reviewed he heard a female voice in distress and said he felt that “physical violence” had been involved. “And then it becomes real people you’re listening to, not just voices,” said the contractor.

Tech companies say that sending audio clips to humans to be transcribed is an essential process for improving their speech recognition technology. They also stress that only a small percentage of recordings are shared in this way. A spokesperson for Google told Wired that just 0.2 percent of all recordings are transcribed by humans, and that these audio clips are never presented with identifying information about the user.

However, that doesn’t stop individuals revealing sensitive information in the recording themselves. And companies are certainly not upfront about this transcription process. The privacy policy page for Google Home, for example, does not mention the company’s use of human contractors, or the possibility that Home might mistakenly record users.

These obfuscations could cause legal trouble for the company, says Michael Veale, a technology privacy researcher at the Alan Turing Institute in London. He told Wired that this level of disclosure might not meet the standards set by the EU’s GDPR regulations. “You have to be very specific on what you’re implementing and how,” said Veale. “I think Google hasn’t done that because it would look creepy.”</p>

Guess it's time for Apple to say yes or no to this question, just for completeness. But this certainly backs up why I don't activate any Google Assistant or Alexa devices. Google <a href="https://www.blog.google/products/assistant/more-information-about-our-processes-safeguard-speech-data/">has a blogpost about this</a>, complaining about the worker "leaking confidential Dutch audio data". Sure, but if the data hadn't been there in the first place...
google  ai  privacy  speech 
july 2019 by charlesarthur
Google’s 4,000-word privacy policy is a secret history of the internet • The New York Times
Charlie Warzel:
<p>The late 1990s was a simpler time for Google. The nascent company was merely a search engine, and Gmail, Android and YouTube were but glimmers in the startup’s eye. Google’s first privacy policy reflected that simplicity. It was short and earnest, a quaint artifact of a different time in Silicon Valley, when Google offered 600 words to explain how it was collecting and using personal information.

That version of the internet (and Google) is gone. Over the past 20 years, that same privacy policy has been rewritten into a sprawling 4,000-word explanation of the company’s data practices.

This evolution, across two decades and 30 versions, is the story of the internet’s transformation through the eyes of one of its most crucial entities. The web is now terribly complex, and Google has a privacy policy to match.</p>


The visuals for this - because it is done through visuals - are lovely, but also telling. The longer the privacy policy, the less private you are to the company.
google  internet  privacy  gdpr 
july 2019 by charlesarthur
Is Firefox better than Chrome? It comes down to privacy • The Washington Post
Geoffrey Fowler:
<p>Seen from the inside, [Google's] Chrome browser looks a lot like surveillance software.

Lately I’ve been investigating the secret life of my data, running experiments to see what technology really gets up to under the cover of privacy policies that nobody reads. It turns out, having the world’s biggest advertising company make the most popular Web browser was about as smart as letting kids run a candy shop.

It made me decide to ditch Chrome for a new version of nonprofit Mozilla’s Firefox, which has default privacy protections. Switching involved less inconvenience than you might imagine.

My tests of Chrome vs. Firefox unearthed a personal data caper of absurd proportions. In a week of Web surfing on my desktop, I discovered 11,189 requests for tracker “cookies” that Chrome would have ushered right onto my computer but were automatically blocked by Firefox. These little files are the hooks that data firms, including Google itself, use to follow what websites you visit so they can build profiles of your interests, income and personality.

Chrome welcomed trackers even at websites you would think would be private. I watched Aetna and the Federal Student Aid website set cookies for Facebook and Google. They surreptitiously told the data giants every time I pulled up the insurance and loan service’s login pages.</p>
google  chrome  privacy  firefox 
july 2019 by charlesarthur
Kids’ apps are filled with manipulative ads, according to a new study • Vox
Chavie Lieber:
<p>Suddenly, the game is interrupted. A bubble pops up with a new mini game idea, and when a child clicks on the bubble, they are invited to purchase it for $1.99, or unlock all new games for $3.99. There’s a red X button to cancel the pop-up, but if the child clicks on it, the character on the screen shakes its head, looks sad, and even begins to cry.

The game, developed by the Slovenian software company Bubadu and intended for kids as young as 6, is marketed as “educational” because it teaches kids about different types of medical treatments.

But it’s structured so that the decision to not buy anything from the game is wrong; the child is shamed into thinking they’ve done something wrong. Pulling such a move on a young gamer raises troubling ethical questions, especially as children’s gaming apps — and advertising within them — have become increasingly popular.

On Tuesday, a group of 22 consumer and public health advocacy groups sent a letter to the Federal Trade Commission calling on the organization to look into the questionable practices of the children’s app market. The <a href="https://www.commercialfreechildhood.org/sites/default/files/devel-generate/piw/apps_FTC_letter.pdf?eType=EmailBlastContent&eId=2d7ad26c-de09-49f3-8b60-6327024f43fb">letter asks the FTC to investigate apps</a> that “routinely lure young children to make purchases and watch ads” and hold the developers of these games accountable.</p>
ftc  advertising  regulation  privacy  children 
july 2019 by charlesarthur
Mozilla: No plans to enable DNS-over-HTTPS by default in the UK • ZDNet
Catalin Cimpanu:
<p>After the UK's leading industry group of internet service providers named Mozilla an "Internet Villain" because of its intentions to support a new DNS security protocol named DNS-over-HTTPS (DoH) inside Firefox, the browser maker told ZDNet that such plans don't currently exist.

"We have no current plans to enable DoH by default in the UK," a spokesperson told ZDNet last night.

The browser maker's decision comes after both ISPs and the UK government, through MPs and GCHQ, have criticized Mozilla and fellow browser maker Google during the last two months for their plans to support DNS-over-HTTPS.

The technology, if enabled, would thwart the ability of some internet service providers to sniff customer traffic in order to block users from accessing bad sites, such as those hosting copyright-infringing materials, child abuse images, and extremist material.

UK ISPs block websites at the government's request; they also block other sites voluntarily at the request of various child protection groups, and they block adult sites as part of parental controls options they provide to their customers.

Not all UK ISPs will be impacted by Mozilla and Google supporting DNS-over-HTTPS, as some use different technologies to filter customers' traffic…</p>


This is the story which <a href="https://www.thetimes.co.uk/article/warning-over-google-chrome-browsers-new-threat-to-children-vm09w9jpr">came out horrendously confused in the Sunday Times</a> about three months ago, talking about "plans to encrypt Chrome", which left everyone who understands what the words actually mean puzzled.
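For reference, what DoH actually does: the client builds an ordinary RFC 1035 DNS query and sends it over HTTPS to a resolver (with Content-Type: application/dns-message), so the ISP sees only encrypted traffic to the resolver rather than the DNS question - which is exactly what defeats ISP-level filtering. A minimal sketch of the wire-format query a DoH client would POST (the resolver URL in the comment is illustrative):

```python
import struct

def build_dns_query(hostname: str, qtype: int = 1) -> bytes:
    """Encode a minimal RFC 1035 DNS query (qtype 1 = A record).

    A DoH client POSTs these bytes to a resolver endpoint such as
    https://mozilla.cloudflare-dns.com/dns-query with
    Content-Type: application/dns-message; on the wire the ISP sees
    only TLS to the resolver, not the name being looked up.
    """
    # Header: ID=0, flags=0x0100 (recursion desired), one question,
    # zero answer/authority/additional records.
    header = struct.pack(">HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    # QNAME: each label prefixed by its length, terminated by a zero byte.
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in hostname.split(".")
    ) + b"\x00"
    question = qname + struct.pack(">HH", qtype, 1)  # QTYPE, QCLASS=IN
    return header + question

query = build_dns_query("example.com")
```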
privacy  isp  uk  dns  https 
july 2019 by charlesarthur
Over 1,300 Android apps scrape personal data regardless of permissions • TechRadar
David Lumb:
<p>Researchers at the International Computer Science Institute (ICSI) created a controlled environment to test 88,000 apps downloaded from the US Google Play Store. They peeked at what data the apps were sending back, compared it to what users were permitting and - surprise - <a href="https://www.ftc.gov/system/files/documents/public_events/1415032/privacycon2019_serge_egelman.pdf">1,325 apps were forking over specific user data they shouldn’t have</a>.

Among the test pool were “popular apps from all categories,” according to ICSI’s report. 

The researchers disclosed their findings to both the US Federal Trade Commission and Google (receiving a bug bounty for their efforts), though the latter stated a fix would only be coming in the full release of Android Q, according to CNET.

Before you get annoyed at yet another unforeseen loophole, those 1,325 apps didn’t exploit a lone security vulnerability - they used a variety of angles to circumvent permissions and get access to user data, including geolocation, emails, phone numbers, and device-identifying IMEI numbers.

One way apps determined user locations was to get the MAC addresses of connected WiFi base stations from the ARP cache, while another used picture metadata to discover specific location info even if a user didn’t grant the app location permissions. The latter is what the ICSI researchers described as a “side channel” - using a circuitous method to get data.

They also noticed apps using “covert channels” to snag info: third-party code libraries developed by a pair of Chinese companies secretly used the SD card as a storage point for the user’s IMEI number. If a user allowed a single app using either of those libraries access to the IMEI, it was automatically shared with other apps.</p>


Android Q isn't going to be universally adopted by any means. Data leaks are going to go on.
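To make the ARP-cache side channel concrete: on affected Android versions /proc/net/arp was world-readable, so an app denied location permission could still read the Wi-Fi gateway's MAC address and look it up in a wardriving database to geolocate the user. A sketch of the parsing step, with an illustrative table (not from a real device):

```python
def parse_arp_cache(arp_text: str) -> list[str]:
    """Extract hardware (MAC) addresses from a Linux /proc/net/arp dump.

    The gateway's MAC address is effectively a location beacon: public
    databases map Wi-Fi base-station MACs to street-level coordinates,
    so no location permission is needed to infer where the user is.
    """
    macs = []
    for line in arp_text.strip().splitlines()[1:]:  # skip the header row
        fields = line.split()
        if len(fields) >= 4:
            macs.append(fields[3])  # the "HW address" column
    return macs

# Illustrative /proc/net/arp contents.
sample = """IP address       HW type     Flags       HW address            Mask     Device
192.168.1.1      0x1         0x2         aa:bb:cc:dd:ee:ff     *        wlan0"""
macs = parse_arp_cache(sample)
```

The EXIF trick the researchers describe is the same idea via a different door: photos the app can read may carry GPS coordinates in their metadata.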
android  data  privacy  security 
july 2019 by charlesarthur
Superhuman is Spying on You » Mike Industries
Mike Davidson has been using Superhuman - you know, the $30 per month email service that does it all for you - for a while:
<p>when I see great design, I proactively try to spread it as far and wide as possible.

What I see in Superhuman though is a company that has mistaken taking advantage of people for good design. They’ve identified a feature that provides value to some of their customers (i.e. seeing if someone has opened your email yet) and they’ve trampled the privacy of every single person they send email to in order to achieve that. Superhuman never asks the person on the other end if they are OK with sending a read receipt (complete with timestamp and geolocation). Superhuman never offers a way to opt out. Just as troublingly, Superhuman teaches its user to surveil by default. I imagine many users sign up for this, see the feature, and say to themselves “Cool! Read receipts! I guess that’s one of the things my $30 a month buys me.”

When products are introduced into the market with behaviors like this, customers are trained to think they are not just legal but also ethical. They don’t always take the next step and ask themselves “wait, should I be doing this?” It’s kind of like if you walked by someone’s window at night and saw them naked. You could do one of two things: a) look away and get out of there, realizing you saw something that person wouldn’t want you to see, or b) keep staring, because if they really didn’t want anyone to see them, they should have closed their blinds. It’s two ways of looking at the world, and Superhuman is not just allowing for option B but <em>actively causing it to happen</em>.</p>


Tracking pixels like that aren't unique to Superhuman; PR companies use them all the time, and others too. But that's different, as Davidson explains. He deals with people's responses in his blogpost (including one from an investor in Superhuman), and its legal boilerplate. In short: Superhuman has been <a href="https://en.wikipedia.org/wiki/Milkshake_Duck">milkshake ducked</a>.
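For readers who haven't met one: a read-receipt pixel is just a 1x1 image with a per-message token baked into its URL. When the recipient's mail client fetches remote images, the sender's server logs the request - timestamp, and a rough location from the IP address. A generic sketch (hypothetical names; not Superhuman's actual implementation):

```python
import uuid

def add_tracking_pixel(html_body: str, tracker_base: str) -> tuple[str, str]:
    """Append a 1x1 tracking image with a per-message token to an HTML email.

    When the recipient's client loads remote images, it requests this URL;
    the sender's server records the hit as a silent read receipt, with a
    timestamp and whatever the request's IP address reveals about location.
    """
    token = uuid.uuid4().hex  # unique per message, ties the open to one recipient
    pixel = f'<img src="{tracker_base}/open/{token}.gif" width="1" height="1" alt="">'
    return html_body + pixel, token

body, token = add_tracking_pixel("<p>Hi!</p>", "https://tracker.example")
```

This is also why "block remote images" settings in mail clients defeat read receipts: no image fetch, no log entry.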
superhuman  email  surveillance  privacy 
july 2019 by charlesarthur
The Pentagon has a laser that can identify people from a distance—by their heartbeat • MIT Technology Review
David Hambling:
<p>A new device, developed for the Pentagon after US Special Forces requested it, can identify people without seeing their face: instead it detects their unique cardiac signature with an infrared laser. While it works at 200 meters (219 yards), longer distances could be possible with a better laser. “I don’t want to say you could do it from space,” says Steward Remaly, of the Pentagon’s Combatting Terrorism Technical Support Office, “but longer ranges should be possible.”

Contact infrared sensors are often used to automatically record a patient’s pulse. They work by detecting the changes in reflection of infrared light caused by blood flow. By contrast, the new device, called Jetson, uses a technique known as laser vibrometry to detect the surface movement caused by the heartbeat. This works though typical clothing like a shirt and a jacket (though not thicker clothing such as a winter coat)…

…Cardiac signatures are already used for security identification. The Canadian company Nymi has developed a wrist-worn pulse sensor as an alternative to fingerprint identification. The technology has been trialed by the Halifax building society in the UK.</p>
privacy  biometrics  technology  heart 
july 2019 by charlesarthur
Google's new reCaptcha has a dark side • Fast Company
Katharine Schwab:
<p>According to two security researchers who’ve studied reCaptcha, one of the ways that Google determines whether you’re a malicious user or not is whether you already have a Google cookie installed on your browser. It’s the same cookie that allows you to open new tabs in your browser and not have to re-log in to your Google account every time. But according to Mohamed Akrout, a computer science PhD student at the University of Toronto who has studied reCaptcha, it appears that Google is also using its cookies to determine whether someone is a human in reCaptcha v3 tests. Akrout wrote in an April paper about how reCaptcha v3 simulations that ran on a browser with a connected Google account received lower risk scores than browsers without a connected Google account. “If you have a Google account it’s more likely you are human,” he says. Google did not respond to questions about the role that Google cookies play in reCaptcha.

With reCaptcha v3, technology consultant Marcos Perona and Akrout’s tests both found that their reCaptcha scores were always low risk when they visited a test website on a browser where they were already logged into a Google account. Alternatively, if they went to the test website from a private browser like Tor or a VPN, their scores were high risk.

To make this risk-score system work accurately, website administrators are supposed to embed reCaptcha v3 code on all of the pages of their website, not just on forms or log-in pages. Then, reCaptcha learns over time how their website’s users typically act, helping the machine learning algorithm underlying it to generate more accurate risk scores.</p>
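Behind the scenes, sites verify each token server-side against Google's documented siteverify endpoint, which returns a score between 0.0 (likely a bot) and 1.0 (likely human). A rough sketch in Python; the 0.5 threshold is a common default, not something Google mandates:

```python
import json
import urllib.parse
import urllib.request

SITEVERIFY = "https://www.google.com/recaptcha/api/siteverify"

def verify_token(secret: str, token: str) -> dict:
    """Ask Google to score a reCaptcha v3 token (makes a network call)."""
    data = urllib.parse.urlencode({"secret": secret, "response": token}).encode()
    with urllib.request.urlopen(SITEVERIFY, data=data) as resp:
        return json.load(resp)

def is_probably_human(result: dict, threshold: float = 0.5) -> bool:
    """Interpret the siteverify response: a score near 1.0 means
    'probably human', near 0.0 means 'probably a bot'."""
    return result.get("success", False) and result.get("score", 0.0) >= threshold
```

The site never sees why it got a given score - the signals (including, apparently, those Google cookies) stay on Google's side.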


But that also means Google is seeing everything you do. Okayyy but.. it does anyway?
google  privacy  captcha 
june 2019 by charlesarthur
Before you use a password manager • Medium
Stuart Schechter:
<p>In this article, I’ll start by examining the benefits and risks of using a password manager. It’s hard to overstate the importance of protecting the data in your password manager, and having a recovery strategy for that data, so I’ll cover that next. I’ll then present a low-risk approach to experimenting with using a password manager, which will help you understand the tough choices you’ll need to make before using it for your most-important passwords. I’ll close with a handy list of the most important decisions you’ll need to make when using a password manager.

There are a lot of password managers to choose from. There’s a password manager built into every major web browser today, and many stand-alone password managers that work across browsers. In addition to remembering your passwords, most password managers will type your password into login forms. The better ones will create randomly-generated passwords for you, ensuring that you’re not using easily-guessed passwords or re-using passwords between sites. Some will even identify passwords you’ve re-used between sites and help you replace them.
</p>
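That random generation is easy to sketch; here's the sort of thing a password manager does under the hood (a minimal illustration, not any particular manager's code):

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Generate a password the way a password manager might:
    cryptographically secure randomness over a large alphabet,
    so the result can't be guessed or reused across sites."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))
```

Note the `secrets` module rather than `random`: the latter is predictable and unsuitable for anything security-related.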


The low-risk approach seems like a good plan. It's the idea of jumping in that many people find problematic.
security  software  privacy  password 
june 2019 by charlesarthur
The new wilderness • Idle Words
Maciej Cegłowski on the erosion of what he calls "ambient privacy" - the expectation that your interactions aren't monitored or remembered:
<p>Ambient privacy is particularly hard to protect where it extends into social and public spaces outside the reach of privacy law. If I’m subjected to facial recognition at the airport, or tagged on social media at a little league game, or my public library installs an always-on Alexa microphone, no one is violating my legal rights. But a portion of my life has been brought under the magnifying glass of software. Even if the data harvested from me is anonymized in strict conformity with the most fashionable data protection laws, I’ve lost something by the fact of being monitored.

One can argue that ambient privacy is a relic of an older world, just like the ability to see the stars in the night sky was a pleasant but inessential feature of the world before electricity. This is the argument Mr. Zuckerberg made when he unilaterally removed privacy protections from every Facebook account back in 2010. Social norms had changed, he explained at the time, and Facebook was changing with them. Presumably now they have changed back.

My own suspicion is that ambient privacy plays an important role in civic life. When all discussion takes place under the eye of software, in a for-profit medium working to shape the participants’ behavior, it may not be possible to create the consensus and shared sense of reality that is a prerequisite for self-government. If that is true, then the move away from ambient privacy will be an irreversible change, because it will remove our ability to function as a democracy.

All of this leads me to see a parallel between privacy law and environmental law, another area where a technological shift forced us to protect a dwindling resource that earlier generations could take for granted.</p>


Always a must-read; easily comprehensible phrasing, but conveying deep meaning.
google  facebook  privacy  politics  democracy 
june 2019 by charlesarthur
LaLiga’s app listened in on fans to catch bars illegally streaming soccer • The Verge
Dami Lee:
<p>Spain’s data protection agency has fined the country’s soccer league, LaLiga, €250,000 (about $280,000) for allegedly violating EU data privacy and transparency laws. The app, which is used for keeping track of games and stats, was using the phone’s microphone and GPS to track bars illegally streaming soccer games, Spanish newspaper El País reported.

Using a Shazam-like technology, the app would record audio to identify soccer games, and use the geolocation of the phone to locate which bars were streaming without licenses. El Diario reports that fans have downloaded that app more than 10 million times, essentially turning them into undercover narcs. The league claims that the app asks for permission to access the phone’s microphone and location, and that the data — which is received as a code, not audio — is only used to detect LaLiga streams.</p>


You've got to admit: that is clever. Sneaky, but ever so clever. Of course people will be at bars with their smartphones. Of course.
privacy  hacking  smartphone 
june 2019 by charlesarthur
We read 150 privacy policies. They were an incomprehensible disaster • The New York Times
Kevin Litman-Navarro:
<p>For comparison, here are the scores for some classic texts. Only Immanuel Kant’s famously difficult “Critique of Pure Reason” registers a more challenging readability score than Facebook’s privacy policy. (To calculate their reading time, I measured the first chapter of each text.)

The vast majority of these privacy policies exceed the college reading level. And according to the most recent literacy survey conducted by the National Center for Education Statistics, over half of Americans may struggle to comprehend dense, lengthy texts. That means a significant chunk of the data collection economy is based on consenting to complicated documents that many Americans can’t understand.

The BBC has an unusually readable privacy policy. It’s written in short, declarative sentences, using plain language. Here’s how the policy outlines the BBC’s guidelines for collecting and using personal data:
<p>“We have to have a valid reason to use your personal information. It's called the ‘lawful basis for processing.’ Sometimes we might ask your permission to do things, like when you subscribe to an email. Other times, when you'd reasonably expect us to use your personal information, we don't ask your permission, but only when: the law says it's fine to use it, and it fits with the rights you have.”</p>


Airbnb’s privacy policy, on the other hand, is particularly inscrutable. It’s full of long, jargon-laden sentences that obscure Airbnb’s data practices and provide cover to use data in expansive ways…

“You’re confused into thinking these are there to inform users, as opposed to protect companies,” said Albert Gidari, the consulting director of privacy at the Stanford Center for Internet and Society.</p>


Amazing piece of work. Plaudits to the BBC, at least.
privacy  obscuration 
june 2019 by charlesarthur
Facebook turned off search features used to catch war criminals, child predators, and other bad actors • Buzzfeed News
Craig Silverman:
<p>In August 2017, the International Criminal Court issued a warrant for [Libyan military commander Mahmoud Mustafa Busayf al-Werfalli] for allegedly participating in or ordering the execution of 33 people in Benghazi, Libya. At the core of the evidence against him are seven videos, some of which were found on Facebook, that allegedly show Werfalli committing crimes. His case marked the first time the ICC issued a warrant based largely on material gathered from social media.

Now that kind of work is being put in jeopardy, according to Koenig, executive director of the Human Rights Center at the University of California, Berkeley. She said Facebook’s recent decision to turn off the features in its graph search product could be a “disaster” for human rights research.

“To make it even more difficult for human rights actors and war crimes investigators to search that site—right as they’re realizing the utility of the rich trove of information being shared online for documenting abuses—is a potential disaster for the human rights and war crimes community,” she said. “We need Facebook to be working with us and making access to such information easier, not more difficult.”

Simply put, Facebook graph search is a way to receive an answer to a specific query on Facebook, such as “people in Nebraska who like Metallica.” Using graph search, it’s possible to find public — and only public — content that’s not easily accessed via keyword searches.

Late last week, Facebook turned off several features that have long been accessible via graph search, such as the ability to find public videos that a specific Facebook user was tagged in. </p>
facebook  search  privacy 
june 2019 by charlesarthur
Apple launches 'Sign in with Apple' button for apps, ‘no tracking’ login • 9to5 Mac
Benjamin Mayo:
<p>Apple announced a new Sign in with Apple button as part of its iOS 13 announcements. The button offers Apple ID single-sign on functionality similar to sign-in buttons from Twitter, Facebook or Google.

Apple is marketing this as a privacy-secure sign-in option. Apple will mask user email addresses and other personal information, whilst still allowing the apps to contact users indirectly.

Users select what information to share with the destination app. You can share your real email address with the third-party app, or use the ‘hide my email’ option to forward email onwards. In the latter case, the app would only see a random anonymous email address.

Of course, apps must update to integrate the ‘Sign in with Apple’ button. A lot of apps may not want to add the Apple ID login because they cannot access customer data they want.</p>

Logical expectation is that Apple will push it on its devices, so apps and sites may feel they need to support it. But with the tech landscape as it is, there might be some reluctance to forgo the data that Google or Facebook sign-ins let you slurp up. Those sites and apps aren't on your side. They're on their own side.
Apple  data  privacy  signon 
june 2019 by charlesarthur
iPhone privacy is broken…and apps are to blame • WSJ
Joanna Stern:
<p>Congratulations! You’ve bought an iPhone! You made one of the best privacy-conscious decisions... until you download an app from Apple’s App Store. Most are littered with secret trackers, slurping up your personal data and sending it to more places than you can count.

Over the last few weeks, my colleague Mark Secada and I tested 80 apps, most of which are promoted in Apple’s App Store as “Apps We Love.” All but one used third-party trackers for marketing, ads or analytics. The apps averaged four trackers apiece.

Some apps send personal data without ever informing users in their privacy policies, others just use industry-accepted—though sometimes shady—ad-tracking methods. As my colleague Sam Schechner reported a few months ago (also with Mark’s assistance), many apps send info to Facebook, even if you’re not logged into its social networks. In our new testing, we found that many also send info to other companies, including Google and mobile marketers, for reasons that are not apparent to the end user.

We focused on the iPhone in our testing—largely because of Apple’s aggressive marketing of personal privacy. However, apps in Google’s Play Store for Android use the same techniques. In some cases, when it comes to providing on-device information to developers and trackers, Android is worse. Google recently updated its app permissions and says it is taking a deeper look at how apps access personal user information.</p>


Stern must be furious that her former colleague Geoff Fowler, now at the Washington Post, got ahead of her with the story - his appeared a day or two before hers - but it shows that we've become complacent about apps, and especially the third-party trackers they tend to incorporate.
apple  apps  data  privacy 
may 2019 by charlesarthur
Apple promises privacy, but iPhone apps share your data with trackers, ad companies and research firms • The Washington Post
Geoffrey Fowler:
<p>You might assume you can count on Apple to sweat all the privacy details. After all, it touted in a recent ad, “What happens on your iPhone stays on your iPhone.” My investigation suggests otherwise.

IPhone apps I discovered tracking me by passing information to third parties — just while I was asleep — include Microsoft OneDrive, Intuit’s Mint, Nike, Spotify, The Washington Post and IBM’s the Weather Channel. One app, the crime-alert service Citizen, shared personally identifiable information in violation of its published privacy policy.

And your iPhone doesn’t only feed data trackers while you sleep. In a single week, I encountered over 5,400 trackers, mostly in apps, not including the incessant Yelp traffic. According to privacy firm Disconnect, which helped test my iPhone, those unwanted trackers would have spewed out 1.5 gigabytes of data over the span of a month. That’s half of an entire basic wireless service plan from AT&T.

“This is your data. Why should it even leave your phone? Why should it be collected by someone when you don’t know what they’re going to do with it?” says Patrick Jackson, a former National Security Agency researcher who is chief technology officer for Disconnect. He hooked my iPhone into special software so we could examine the traffic. “I know the value of data, and I don’t want mine in any hands where it doesn’t need to be,” he told me.

In a world of data brokers, Jackson is the data breaker. He developed <a href="https://itunes.apple.com/us/app/disconnect-privacy-pro-entire/id1057771839?ls=1&mt=8">an app called Privacy Pro</a> that identifies and blocks many trackers. If you’re a little bit techie, I recommend trying the free iOS version to glimpse the secret life of your iPhone.</p>


Certainly worth a try. That's a dismaying lot of trackers (hellooo Washington Post, for which Fowler writes). Expect Apple to try to crack down on this in a future iOS release - though the US could try something like GDPR. I wonder what those apps do in Europe.
mobile  privacy  apps 
may 2019 by charlesarthur
DuckDuckGo CEO Gabe Weinberg talks “do not track” legislation on Kara Swisher podcast Recode Decode • Vox
Eric Johnson:
<p>People don’t realize just how much they’re being tracked online, says DuckDuckGo CEO Gabe Weinberg — but he’s confident that once they learn how much tech companies like Google and Facebook are quietly slurping up their private data, they will demand a change.

“They’re getting purchase history, location history, browsing history, search history,” Weinberg said on the latest episode of Recode Decode with Kara Swisher. “And then when you go to, now, a website that has advertising from one of these networks, there’s a real-time bidding against you, as a person. There’s an auction to sell you an ad based on all this creepy information you didn’t even realize people captured.”

DuckDuckGo offers a privacy-minded search engine that has about 1 percent of the search market share in the US (Google’s share is more than 88 percent), as well as a free browser extension for Firefox and Google Chrome that blocks ad networks from tracking you. But rather than waiting for a comprehensive privacy bill to lurch through Congress over many years, he’s proposed a small, simple tweak to US regulations that might help: Make not being tracked by those networks the default, rather than something you have to opt into.

“The fact that consumers have already adopted it and it’s in the browser is just an amazing legislative opportunity, just give it teeth,” he said. “It’s actually a better mechanism for privacy laws because once you have this setting and it works, you don’t have to deal with all the popups anymore. You just set it once, and then sites can’t track you.”</p>
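The browser setting Weinberg refers to is Do Not Track, transmitted as a simple HTTP header (`DNT: 1`). Honouring it server-side is trivial, which is his point; what's missing is the legal teeth. A sketch:

```python
def tracking_allowed(headers: dict) -> bool:
    """Honour the Do Not Track signal: browsers with the setting
    enabled send the header 'DNT: 1' with every request. A site
    taking the signal seriously would skip trackers when it's set."""
    return headers.get("DNT") != "1"
```

Today nothing compels a site to check that header at all; Weinberg's proposal is simply to make ignoring it illegal.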


Weinberg is always good value. Also: DuckDuckGo is profitable; it doesn't have a pile of VC funding that it must repay many times over.
search  duckduckgo  privacy 
may 2019 by charlesarthur
Inside Apple's top secret testing facilities where iPhone defences are forged in temperatures of -40C • The Independent
Andrew Griffin:
<p>The cost of those [Apple] products has led to some criticism from Apple's rivals, who have said that it is the price of privacy; that Apple is fine talking about how little data it collects, but it is only able to do so because of the substantial premiums they command. That was the argument recently made by Google boss Sundar Pichai, in just one of a range of recent broadsides between tech companies about privacy.

"Privacy cannot be a luxury good offered only to people who can afford to buy premium products and services," [Google chief Sundar] Pichai wrote in an op-ed in the New York Times. He didn't name Apple, but he didn't need to.

Pichai argued that the collection of data helps make technology affordable, echoing a sentiment often heard about Apple, that their commitment to privacy is only possible because their products are expensive and it can afford to take such a position. Having a more lax approach to privacy helps keep the products made by almost all of the biggest technology products in the world – from Google to Instagram – free, at least at the point of use.

"I don't buy into the luxury good dig," says Federighi, giving the impression he was genuinely surprised by the public attack.

"On the one hand gratifying that other companies in space over the last few months, seemed to be making a lot of positive noises about caring about privacy. I think it's a deeper issue than then, what a couple of months and a couple of press releases would make. I think you've got to look fundamentally at company cultures and values and business model. And those don't change overnight.

"But we certainly seek to both set a great example for the world to show what's possible to raise people's expectations about what they should expect the products, whether they get them from us or from other people. And of course, we love, ultimately, to sell Apple products to everyone we possibly could certainly not just a luxury, we think a great product experience is something everyone should have. So we aspire to develop those."</p>


Lots of other details in there, but this is the core.
apple  privacy  google 
may 2019 by charlesarthur
Google Gmail tracks your purchase history (not just from Google); here's how to delete it • CNBC
Todd Haselton and Megan Graham:
<p>Go here to see your own: http://myaccount.google.com/purchases.

“To help you easily view and keep track of your purchases, bookings and subscriptions in one place, we’ve created a private destination that can only be seen by you,” a Google spokesperson told CNBC. “You can delete this information at any time. We don’t use any information from your Gmail messages to serve you ads, and that includes the email receipts and confirmations shown on the Purchase page.”

But there isn’t an easy way to remove all of this. You can delete all the receipts in your Gmail inbox and archived messages. But, if you’re like me, you might save receipts in Gmail in case you need them later for returns. In order to remove them from Google Purchases and keep them in your Gmail inbox, you need to delete them one by one from the Purchases page. It would take forever to do that for years’ worth of purchase information.

Google’s privacy page says that only you can view your purchases. But it says “Information about your orders may also be saved with your activity in other Google services” and that you can see and delete this information on a separate “My Activity” page.

Except you can’t. Google’s activity controls page doesn’t give you any ability to manage the data it stores on Purchases.</p>


There's an even more interesting page: <a href="https://myaccount.google.com/payments-and-subscriptions">Purchases and Subscriptions</a>, which you reach by hitting the back button on the Purchases page. What is Google up to with this? It's tracking purchases and subscriptions from absolutely all over. It might say that it's not using this to serve you ads, but frankly it's hard to think what this is for except that - unless it's being fed to the AI systems, which then make some sort of conclusion about ads. Perhaps it's to *avoid* serving you ads about things you've already bought - in which case "we don't use the information to serve you ads" would just about be true.
google  privacy  purchases 
may 2019 by charlesarthur
Angry Birds, Candy Crush, and a history of mobile game data collection • Vox
Kaitlyn Tiffany:
<p>Something as vague and banal-sounding as “gameplay data” is not as obviously salacious as the types of personal data collection we know we should be scandalized by. Nobody’s getting your Social Security number from Angry Birds. Nobody’s getting your private messages.

“With Facebook, you’re putting a lot more clearly personal information out there, and with a game you’re not really sure what it’s getting from you,” says Chris Hazard, an engineer with experience in gaming and AI, currently the CTO of a startup called Diveplane. “It’s not as front and center.” Basically, it’s not obvious that data about how you play a mobile game can be as useful and as personal as your wedding photos or a rattled-off screed about the Democratic National Committee.

But people should be worried. The intricacies of gameplay data can tell you a lot about what makes people tick, and what’s going on with them — studies have shown that you play games differently when you’re depressed, or dieting. “Nobody gets too upset about games,” Nieborg says. “But the underlying technology is really powerful. These people are really pushing the technology to the limits where the potential for abuse is massive.”

Developers collect data on who was playing, for how long, how well, and how much money they were spending. It doesn’t seem like sensitive information, and it’s useful mostly because it helps developers target their Facebook ads to find more people who will “monetize well” on these games.</p>
advertising  privacy  games 
may 2019 by charlesarthur
Google Face Match brings privacy debate into the home • Financial Times
Tim Bradshaw:
<p>The “Google Nest” rebranding comes with a prompt for Nest customers to merge their user accounts with their Google profiles. “We want to make sure we are seamlessly integrating these devices,” said Rishi Chandra, vice-president and general manager of Google’s Home and Nest products.

For some customers, merging Nest data could include years of information on a family’s comings and goings, home energy usage and security camera video recordings. Google says it will not use that information for advertising.

“That data will never be used for ads personalisation,” said Mr Chandra, before being corrected by a member of Google’s public relations team. “We can never say never,” he added hastily, “but the commitment we are making is, it is not being used.”

Google is hoping to recapture some of the trust it lost this year when it emerged that its Nest security hub included a secret microphone. Mr Chandra conceded that it was a “mistake” not to inform customers when it went on sale.</p>
google  nest  privacy 
may 2019 by charlesarthur
Hey, Alexa: stop recording me • The Washington Post
Geoffrey Fowler:
<p>“Eavesdropping” is a sensitive word for Amazon, which has battled lots of consumer confusion about when, how and even who is listening to us when we use an Alexa device. But much of this problem is of its own making.

Alexa keeps a record of what it hears every time an Echo speaker activates. It’s supposed to record only with a “wake word” — “Alexa!” — but anyone with one of these devices knows they go rogue. I counted dozens of times when mine recorded without a legitimate prompt. (Amazon says it has improved the accuracy of “Alexa” as a wake word by 50 percent over the past year.)

What can you do to stop Alexa from recording? Amazon’s answer is straight out of the Facebook playbook: “Customers have control,” it says — but the product’s design clearly isn’t meeting our needs. You can manually delete past recordings if you know <a href="http://amazon.com/alexaprivacy">exactly where to look</a> and remember to keep going back. You cannot stop Amazon from making these recordings, aside from muting the Echo’s microphone (defeating its main purpose) or unplugging the darned thing.</p>


As he points out, this is true too about devices that hook into the Alexa system if they're activated (I haven't activated it on Sonos speakers with the capability). Google has changed its defaults: it now doesn't record. Nor does Apple.
privacy  alexa 
may 2019 by charlesarthur
America’s favorite door-locking app has a data privacy problem • OneZero
Sage Lazzaro:
<p>Latch is on a mission to digitize the front door, offering apartment entry systems that forgo traditional keys in favor of being able to unlock entries with a smartphone. The company touts convenience — who wants to fiddle with a metal key? — and has a partnership with UPS, so you can get packages delivered inside your lobby without a doorman. But while it may keep homes private and secure, the same can’t be said about tenants’ personal data.

Latch — which has raised $96m in venture capital funding since launching in 2014, including $70m in its Series B last year — offers three products. Two are entry systems for specific units, and one is for lobbies and other common areas like elevators and garages. The company claims one in 10 new apartment buildings in the U.S. is being built with its products, with leading real estate developers like Brookfield and Alliance Residential now installing them across the country.

Experts say they’re concerned about the app’s privacy policy, which allows Latch to collect, store, and share sensitive personally identifiable information (PII) with its partners and, in some cases, landlords. And while Latch is far from the only tech company with questionable data practices, it’s harder for a tenant to decouple from their building’s door than, say, Instagram: If your landlord installs a product like the keyhole-free Latch R, you’re stuck. The issue of tenant consent is currently coming to a head in New York City, where residents of a Manhattan building are suing their landlord in part over privacy concerns related to the app.</p>

Latch wouldn't be interviewed but said that it offers smartphone app unlocking, Bluetooth proximity, or keycard. But the problem is still about controlling where the information goes.
Door  security  privacy 
may 2019 by charlesarthur
Amazon’s facial-recognition technology is supercharging local police • Washington Post
Drew Harwell:
<p>A grainy picture of someone’s face — captured by a security camera, a social-media account or a deputy’s smartphone — can quickly become a link to their identity, including their name, family and address. More than 1,000 facial-recognition searches were logged last year, said deputies, who sometimes used the results to find a suspect’s Facebook page, visit their home or make an arrest.

But Washington County [where Amazon's system has been used since late 2017] also became ground zero for a high-stakes battle over the unregulated growth of policing by algorithm. Defense attorneys, artificial-intelligence researchers and civil rights experts argue that the technology could lead to the wrongful arrest of innocent people who bear only a resemblance to a video image. [Amazon's system] Rekognition’s accuracy is also hotly disputed, and some experts worry that a case of mistaken identity by armed deputies could have dangerous implications, threatening privacy and people’s lives.

Some police agencies have in recent years run facial-recognition searches against state or FBI databases using systems built by contractors such as Cognitec, IDEMIA and NEC. But the rollout by Amazon has marked perhaps the biggest step in making the controversial face-scanning technology mainstream. Rekognition is easy to activate, requires no major technical infrastructure, and is offered to virtually anyone at bargain-barrel prices. Washington County spent about $700 to upload its first big haul of photos, and now, for all its searches, pays about $7 a month.

It’s impossible to tell, though, just how accurate or effective the technology has been during its first 18 months of real-world tests.</p>

That last bit feels like it ought to have a lot more emphasis, doesn't it? But wow, that is cheap. $7, compared with all the shoe leather and time of hunting down and going through photos.
Amazon  facialrecognition  police  privacy 
may 2019 by charlesarthur
The terrifying potential of the 5G network • The New Yorker
Sue Halpern:
<p>A totally connected world will also be especially susceptible to cyberattacks. Even before the introduction of 5G networks, hackers have breached the control center of a municipal dam system, stopped an Internet-connected car as it travelled down an interstate, and sabotaged home appliances. Ransomware, malware, crypto-jacking, identity theft, and data breaches have become so common that more Americans are afraid of cybercrime than they are of becoming a victim of violent crime. Adding more devices to the online universe is destined to create more opportunities for disruption. “5G is not just for refrigerators,” Spalding said. “It’s farm implements, it’s airplanes, it’s all kinds of different things that can actually kill people or that allow someone to reach into the network and direct those things to do what they want them to do. It’s a completely different threat that we’ve never experienced before.”

Spalding’s solution, he told me, was to build the 5G network from scratch, incorporating cyber defenses into its design. Because this would be a massive undertaking, he initially suggested that one option would be for the federal government to pay for it and, essentially, rent it out to the telecom companies. But he had scrapped that idea. A later draft, he said, proposed that the major telecom companies—Verizon, AT&T, Sprint, and T-Mobile—form a separate company to build the network together and share it. “It was meant to be a nationwide network,” Spalding told me, not a nationalized one. “They could build this network and then sell bandwidth to their retail customers. That was one idea, but it was never that the government would own the network. It was always about, How do we get industry to actually secure the system?”</p>
mobile  privacy  data  5g 
april 2019 by charlesarthur
Facebook sets aside billions of dollars for a potential FTC fine • The Washington Post
Elizabeth Dwoskin and Tony Romm:
<p>Facebook on Wednesday said it would set aside $3bn to cover costs in its ongoing investigation with the US Federal Trade Commission over the social media company’s privacy practices, as its recent scandals take a toll on its balance sheet in a big way.

That number, which the company said could ultimately range between $3bn and $5bn, correlates with the size of the fine the agency is expected to levy against the tech giant and would represent the largest the FTC has ever imposed.

Facebook’s decision to set aside billions of dollars comes as the company continues negotiating with the FTC on a settlement that would end its investigation. As part of those talks, federal officials have sought to force Facebook to pay a fine into the billions of dollars, sources previously told the Post. That would set a new record for the largest fine imposed by the FTC for a repeat privacy violation, after Google had to pay $22.5m a few years ago.

The FTC came to determine that violations could result in a multi-billion dollar fine after computing the number of times Facebook breached a 2011 order with the government to improve its privacy practices.</p>


This is going to be quite a thing to watch. Will Facebook, like Google, be able to shrug it off and move on? If the FTC hands down a fine that size it's going to lead a lot of news bulletins. That will get a lot of people's attention.
facebook  privacy  fine 
april 2019 by charlesarthur
Facebook uploaded 1.5 million users' email contacts without permission • Business Insider
Rob Price:
<p>Facebook harvested the email contacts of 1.5 million users without their knowledge or consent when they opened their accounts.

Business Insider has learned that since May 2016, the social networking company has collected the contact lists of 1.5 million users new to the social network. The Silicon Valley company says they were "unintentionally uploaded to Facebook," and it is now deleting them. You can read Facebook's full statement below.

The revelation comes after a security researcher noticed that Facebook was asking some users to enter their email passwords when they signed up for new accounts to verify their identities, in a move widely condemned by security experts. Business Insider then discovered that if you did enter your email password, a message popped up saying it was "importing" your contacts, without asking for permission first.

At the time, it wasn't clear what was actually happening — but a Facebook spokesperson has now confirmed that 1.5 million people's contacts were collected this way, and fed into Facebook's systems, where they were used to build Facebook's web of social connections and recommend friends to add. It's not immediately clear if these contacts were also used for ad-targeting purposes. [Later: it did.]

Facebook says that prior to May 2016, it offered an option to verify a user's account and voluntarily upload their contacts at the same time. However, Facebook says, it changed the feature, and the text informing users that their contacts would be uploaded was deleted — but the underlying functionality was not. Facebook didn't access the content of users' emails, the spokesperson added.</p>


Notice how Facebook's errors always fall in favour of it getting more information, and using it to target ads? Never towards getting less information and reducing ad loads? At this point it looks sociopathic.
facebook  email  privacy 
april 2019 by charlesarthur
Tracking phones, Google is a dragnet for the police • The New York Times
Jennifer Valentino-DeVries:
<p>When detectives in a Phoenix suburb arrested a warehouse worker in a murder investigation last December, they credited a new technique with breaking open the case after other leads went cold.

The police told the suspect, Jorge Molina, they had data tracking his phone to the site where a man was shot nine months earlier. They had made the discovery after obtaining a search warrant that required Google to provide information on all devices it recorded near the killing, potentially capturing the whereabouts of anyone in the area.

Investigators also had other circumstantial evidence, including security video of someone firing a gun from a white Honda Civic, the same model that Mr. Molina owned, though they could not see the license plate or attacker.

But after he spent nearly a week in jail, the case against Mr. Molina fell apart as investigators learned new information and released him. Last month, the police arrested another man: his mother’s ex-boyfriend, who had sometimes used Mr. Molina’s car.

The warrants, which draw on an enormous Google database employees call Sensorvault, turn the business of tracking cellphone users’ locations into a digital dragnet for law enforcement. In an era of ubiquitous data gathering by tech companies, it is just the latest example of how personal information — where you go, who your friends are, what you read, eat and watch, and when you do it — is being used for purposes many people never expected. As privacy concerns have mounted among consumers, policymakers and regulators, tech companies have come under intensifying scrutiny over their data collection practices.</p>


Hello, Google's Location History feature - which <a href="https://www.nytimes.com/2019/04/13/technology/google-sensorvault-location-tracking.html">will collect data about your location all the time</a> (on Android) or when allowed (on iOS).

See yours: <a href="https://takeout.google.com/">https://takeout.google.com/</a>.
google  privacy  surveillance 
april 2019 by charlesarthur
Amazon workers are listening to what you tell Alexa • Bloomberg
Matt Day, Giles Turner, and Natalia Drozdiak:
<p>Amazon employs thousands of people around the world to help improve the Alexa digital assistant powering its line of Echo speakers. The team listens to voice recordings captured in Echo owners’ homes and offices. The recordings are transcribed, annotated and then fed back into the software as part of an effort to eliminate gaps in Alexa’s understanding of human speech and help it better respond to commands. 

The Alexa voice review process, described by seven people who have worked on the program, highlights the often-overlooked human role in training software algorithms. In marketing materials Amazon says Alexa “lives in the cloud and is always getting smarter.” But like many software tools built to learn from experience, humans are doing some of the teaching.

The team comprises a mix of contractors and full-time Amazon employees who work in outposts from Boston to Costa Rica, India and Romania, according to the people, who signed nondisclosure agreements barring them from speaking publicly about the program. They work nine hours a day, with each reviewer parsing as many as 1,000 audio clips per shift, according to two workers based at Amazon’s Bucharest office, which takes up the top three floors of the Globalworth building in the Romanian capital’s up-and-coming Pipera district.</p>
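For scale, a quick back-of-the-envelope on the quoted figures (assuming an unbroken nine-hour shift):

```python
# Throughput implied by the article: up to 1,000 clips per nine-hour
# shift means a new recording lands roughly every half-minute.
clips_per_shift = 1_000
shift_hours = 9
seconds_per_clip = shift_hours * 3600 / clips_per_shift
print(f"one clip every ~{seconds_per_clip:.0f} seconds")  # → one clip every ~32 seconds
```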


That is a LOT of listening. Is this another "not really AI" example?
amazon  privacy  surveillance  alexa 
april 2019 by charlesarthur
Does Google meet its users’ expectations around consumer privacy? This news industry research says no » Nieman Journalism Lab
Jason Kint:
<p>Digital Content Next surveyed a nationally representative sample to find out what people expect from Google — and, as with a similar study we conducted last year about Facebook, the results were unsettling.

Our findings show that many of Google’s data practices deviate from consumer expectations. We find it even more significant that consumers’ expectations are at an all-time low even after 2018, a year in which awareness around consumer privacy reached peak heights.

The results of the study are consistent with our Facebook study: People don’t want surveillance advertising. A majority of consumers indicated they don’t expect to be tracked across Google’s services, let alone be tracked across the web in order to make ads more targeted.

Q: Do you expect Google to collect data about a person’s activities on Google platforms (e.g. Android and Chrome) and apps (e.g. Search, YouTube, Maps, Waze)?<br />YES: 48%<br />NO: 52%

Q: Do you expect Google to track a person’s browsing across the web in order to make ads more targeted?<br />YES: 43%<br />NO: 57%

Nearly two out of three consumers don’t expect Google to track them across non-Google apps, offline activities from data brokers, or via their location history.</p>


Don't expect – or perhaps aren't aware that it's capable of doing.
google  privacy  surveillance 
april 2019 by charlesarthur
Microsoft, Facebook, trust and privacy • Benedict Evans
Evans finds strong parallels, 25-odd years apart:
<p>much like the [creators of the] Microsoft macro viruses, the ‘bad actors’ on Facebook did things that were in the manual. They didn’t prise open a locked window at the back of the building - they knocked on the front door and walked in. They did things that you were supposed to be able to do, but combined them in an order and with malign intent that hadn’t really been anticipated.

It’s also interesting to compare the public discussion of Microsoft and of Facebook before these events. In the 1990s, Microsoft was the ‘evil empire’, and a lot of the narrative within tech focused on how it should be more open, make it easier for people to develop software that worked with the Office monopoly, and make it easier to move information in and out of its products. Microsoft was ‘evil’ if it did anything to make life harder for developers. Unfortunately, whatever you thought of this narrative, it pointed in the wrong direction when it came to this use case. Here, Microsoft was too open, not too closed.

Equally, in the last 10 years many people have argued that Facebook is too much of a ‘walled garden’ - that it is too hard to get your information out and too hard for researchers to pull information from across the platform. People have argued that Facebook was too restrictive on how third party developers could use the platform. And people have objected to Facebook's attempts to enforce the single real identities of accounts. As for Microsoft, there may well have been justice in all of these arguments, but also as for Microsoft, they pointed in the wrong direction when it came to this particular scenario. For the Internet Research Agency, it was too easy to develop for Facebook, too easy to get data out, and too easy to change your identity. The walled garden wasn’t walled enough. </p>
security  facebook  microsoft  privacy 
april 2019 by charlesarthur
Guardian Mobile Firewall aims to block the apps that grab your data • Fast Company
Glenn Fleishman:
<p>A New York Times report in December focused on location data being shared with third-party organizations and tied to specific users; in February, a Wall Street Journal investigation reported that app makers were sharing events as intimate as ovulation cycles and weight with Facebook. But no matter how alarmed you are by such scenarios, there hasn’t been much you could do. Mobile operating systems don’t let you monitor your network connection and block specific bits of data from leaving your phone.

That led Strafach and his colleagues at Sudo Security Group to take practical action. “We are aware of almost every active tracker that is in the App Store,” he says. Building on years of research, Sudo is putting the finishing touches on an iPhone app called Guardian Mobile Firewall, a product that combines a virtual private network (VPN) connection with a sophisticated custom firewall managed by Sudo.

It looks like Guardian will be the first commercial entry into a fresh category of apps and services that look not just for malicious behavior, but also for what analysis shows could be data about you leaving your phone without your explicit permission. It will identify and variably block all kinds of leakage, based on Sudo’s unique analysis of App Store apps.

Sudo is <a href="https://itunes.apple.com/us/app/guardian-firewall/id1363796315?mt=8">taking preorders for the app in the Apple Store</a> and plans a full launch no later than June. It will debut on iOS, and required some lengthy conversations with Apple’s app reviewers as Sudo laid out precisely what part of its filtering happens in the app (none of it) and what happens at its cloud-based firewall (everything). The price will be in the range of a high-end, unlimited VPN—about $8 or $9 a month. Sudo plans an expanded beta program in April, followed by a production release that will be automatically delivered to preorder customers.</p>
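The architecture described here (all traffic goes up a VPN tunnel, and a cloud-side firewall drops connections to known tracker hosts) reduces, at its core, to a suffix match against a blocklist. A minimal sketch, with made-up blocklist entries and function names; this is not Sudo's actual ruleset or code:

```python
# Hypothetical tracker blocklist; a real service would curate thousands
# of entries from app analysis, as the article describes.
TRACKER_SUFFIXES = {
    "tracker.example",    # stand-in for an analytics SDK endpoint
    "ads.example.net",    # stand-in for an ad-tech host
}

def is_blocked(host: str) -> bool:
    """True if the host, or any parent domain of it, is on the blocklist."""
    parts = host.lower().split(".")
    # Check the host plus every parent suffix, e.g.
    # a.b.tracker.example -> b.tracker.example -> tracker.example
    return any(".".join(parts[i:]) in TRACKER_SUFFIXES for i in range(len(parts)))

def filter_request(host: str) -> str:
    """Firewall verdict for an outbound connection."""
    return "DROP" if is_blocked(host) else "ALLOW"
```

Doing the matching server-side rather than in the app is also why the blocklist can be updated continuously without shipping a new binary through App Review.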


You'd need to be pretty worried about data grabs to pay that amount, wouldn't you? That's nearly a music subscription. Is your data *that* valuable? Wouldn't an adblocker be a lot cheaper?
sudo  data  privacy 
march 2019 by charlesarthur
Android Q will kill clipboard manager apps in the name of privacy • Android Police
Ryan Whitwam:
<p>Privacy is a primary focus of Android Q for Google, and that may spell trouble for some of your favorite apps. In Android Q, Google has restricted access to clipboard data <a href="https://www.androidpolice.com/2019/01/27/android-q-may-prevent-background-apps-from-reading-your-clipboard/">as previously rumoured</a>, which means most apps that currently aim to manage that data won't work anymore.

Having an app that sits in the background and collects clipboard data can be a handy way to recall past snippets of data. However, that same mechanism could be used for malicious intent. Google's playing it safe by restricting access to clipboard data to input method editors (you might know those as keyboards). Foreground apps that have focus will also be able to access the clipboard, but background apps won't.</p>
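The new rule reduces to a simple access check: is the caller the default keyboard, or the app currently holding input focus? A toy model of the policy (names invented; this is not the Android framework API):

```python
from dataclasses import dataclass

@dataclass
class App:
    package: str
    is_default_ime: bool = False  # the user's current keyboard
    has_focus: bool = False       # the foreground app with input focus

def read_clipboard(app: App, clip: str):
    """Return the clip to the default IME or focused app; others get nothing."""
    if app.is_default_ime or app.has_focus:
        return clip
    return None  # background clipboard managers come away empty-handed
```

Which is exactly why a clipboard-history app, which by design sits in the background with no focus, can no longer do its job.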


iOS and Android are on a very slow collision course to having the same approach to security.
androidq  privacy  clipboard 
march 2019 by charlesarthur
Eero is now officially part of Amazon, pledges to keep network data private • The Verge
Nilay Patel:
<p>concerns that Amazon would somehow make expanded use of Eero network data have been growing ever since the deal was announced — obviously, your Wi-Fi router can see all your network traffic, and Eero’s system in particular relies on a cloud service for network optimization and other features. But Eero is committed to keeping that data private, said [Eero CEO Nick] Weaver, who also <a href="https://blog.eero.com/as-we-join-the-amazon-family-were-accelerating-our-mission-to-deliver-perfect-connectivity-in-every-home/">published a blog post</a> this morning that explicitly promises Eero will never read any actual network traffic.

“If anything, we’re just going to strengthen our commitment to both privacy and security,” Weaver told us. “We’ve got some pretty clear privacy principles that we’ve used for developing all of our products, that are the really the underpinnings of everything. Those aren’t going to change.”

Those three principles, as laid out in the blog post, are that customers have a “right to privacy” that includes transparency around what data is being collected and control over that data; that network diagnostic information will only be collected to improve performance, security, and reliability; and that Eero will “actively minimize” the amount of data it can access, while treating the data it does collect with “the utmost security.”</p>


Never is a long time; there was a time when Nest was never going to be integrated into Google. A more proximate worry for a smaller group of people is whether it's going to keep advertising on podcasts.
amazon  eero  data  privacy 
march 2019 by charlesarthur