Thursday 12 December 2019

2019 was the year of voice assistant privacy dumpster fires

2019 was the "I Told You So" year for privacy advocates and voice assistants: the year in which every company that wanted you to trust it to put an always-on mic in the most intimate places in your home was revealed to have allowed thousands of low-waged contractors to listen in on millions of clips, many of them accidentally recorded. First it was Amazon (and again!), then Google, then Apple, then Microsoft.

What's more, these arm's-length contractors who were getting your stolen audio were working under terrible conditions: sweatshops where they worked long hours, listened to potentially traumatizing audio, and were subjected to wage theft. And when the tech giants cut them off, they got shafted again. And despite the companies' protests that they're the only ones stealing your data, voice assistants have proven to be no more secure than any of Big Tech's other products (cue "Dumpster Fires R Us").

In a long, end-of-year wrapup of the state-of-the-leaky-smart-speaker, Bloomberg pieces together a coherent narrative from all of these fragmentary accounts, trying to assess how we got here. The story goes like this: true believers in voice computing, often inspired by science fiction (Jeff Bezos's enthusiasm for Alexa is attributed to his ardent Star Trek fandom), start to build voice assistants in full knowledge that these will not only be perceived as creepy, they will be creepy. They mislead the contractors who transcribe samples of commands into thinking that they're listening to fully informed beta-testers, when it's totally obvious that these are real customers who have no idea they're being listened in on. The companies lie to their customers, and even to Congress, about whether this is going on.

Meanwhile, they create teams that program canned responses to jokes, flirtations, and insults, in an effort to give their products "personalities," while privately building an internal consensus that users' continued reliance on their products means that everyone is more-or-less OK with the privacy implications. When they're confronted with instances in which voice assistants capture private, intimate, and unintentional data, they dismiss these as statistical anomalies. When they're asked about all the ways that the supposedly anonymized clips they send to low-waged transcription contractors can be reidentified, they put their fingers in their ears and insist that de-identification is an exact and robust science, rather than a faith-based initiative.

Whenever the cognitive dissonance gets too heavy, they retreat to a story about a beautiful future in which our collective sacrifice of privacy yields up speech recognition modules so robust that they can run inside the devices without any need to convey audio to a server farm for analysis -- if only they can keep running across the broad River Privacy on the backs of these data-breach alligators without losing a leg, everything will be fine on the far bank of Perfect Machine Learning Models.

In the meantime, the vendors are pushing partners to incorporate always-on mics into an ever-growing constellation of devices: microwaves, TVs, thermostats.

And as with every other stupid smartest-guy-in-the-room evil plot, the people involved just can't help tipping their hands. Otherwise, why would Facebook have named its secret program to listen in on you through its apps and send the audio to contractors "Prism" -- the name of the NSA's infamous, illegal mass-spying program revealed by the Edward Snowden leaks?

The difference between this system and a bug report on a MacBook, of course, is that macOS clearly asks users if they’d like to submit a report directly after a program crashes. It’s an opt-in prompt for each malfunction, as opposed to Siri’s blanket consent. Current and former contractors say most Siri requests are banal—“play a Justin Bieber song,” “where’s the nearest McDonald’s”—but they also recall hearing extremely graphic messages and lengthy racist or homophobic rants. A former data analyst who worked on Siri transcriptions for several years says workers in Cork swapped horror stories during smoke breaks. A current analyst, asked to recount the most outrageous clip to come through CrowdCollect, says it was akin to a scene from Fifty Shades of Grey.

Apple has said less than 0.2% of Siri requests undergo human analysis, and former managers dismiss the contractors’ accounts as overemphasizing mere rounding errors. “ ‘Oh, I heard someone having sex’ or whatever. You also hear people farting and sneezing—there’s all kinds of noise out there when you turn a microphone on,” says Tom Gruber, a Siri co-founder who led its advanced development group through 2018. “It’s not like the machine has an intention to record people making certain kinds of sounds. It’s like a statistical fluke.”

By 2019, after Apple brought Siri to products such as its wireless headphones and HomePod speaker, it was processing 15 billion voice commands a month; 0.2% of 15 billion is still 30 million potential flukes a month, or 360 million a year. The risks of inadvertent recording grew along with the use cases, says Mike Bastian, a former principal research scientist on the Siri team who left Apple earlier this year. He cites the Apple Watch’s “raise to speak” feature, which automatically activates Siri when it detects a wearer’s wrist being lifted, as especially dicey. “There was a high false positive rate,” he says.
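The scale here is easy to verify. Here's a minimal back-of-the-envelope sketch in Python, using only the figures Bloomberg reports (15 billion commands a month, and Apple's stated ceiling of "less than 0.2%" reviewed by humans):

```python
# Back-of-the-envelope check on the Bloomberg figures.
monthly_requests = 15_000_000_000   # Siri voice commands per month, per Bloomberg
human_review_rate = 0.002           # Apple's stated ceiling: "less than 0.2%"

reviewed_per_month = monthly_requests * human_review_rate
reviewed_per_year = reviewed_per_month * 12

print(f"Clips eligible for human review: {reviewed_per_month:,.0f}/month")  # 30,000,000
print(f"Clips eligible for human review: {reviewed_per_year:,.0f}/year")    # 360,000,000
```

Even at a "rounding error" rate, that upper bound works out to the 30 million clips a month (360 million a year) cited above.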

Silicon Valley Is Listening to Your Most Intimate Moments [Austin Carr, Matt Day, Sarah Frier and Mark Gurman/Bloomberg]

(Image: Bill Ward, CC BY, methodshop.com, CC BY-SA, modified)
