Illustration by Jeff Drew

Sure, you lock your home, and you probably don’t share your deepest secrets with random strangers. And if someone knocked on your door and asked to know when you last got your period, you’d tell them to get lost. 

Yet, as a smartphone user, you’re likely sharing highly personal information with total strangers every minute — strangers whose main focus is to convert every element of your personality into money. Click here. Vote for this candidate. Open this app again. Watch this ad. Buy this product. We’ve been giving out our private information to be able to use convenient, fun, and largely free apps, and we’re only now understanding the true costs.

Would you mind if an app that you specifically told not to use your location tracked your real-time movements anyway by pinging off nearby Bluetooth and Wi-Fi signals? What if the mobile therapy app you use for counseling told Facebook whenever you’re in a session and, without using your name, told an advertising firm the last time you felt suicidal? 


Or, what if there was a global pandemic and a company you’d never heard of revealed a map of cell phone locations showing that you hadn’t been doing your part to stay away from others and slow the spread of the deadly virus? Could that become enforceable? Could you be fined? Publicly shamed?

While most Americans say they’re concerned about how companies and the government use their data, Pew Research shows they also largely feel they have little to no control over the data that companies and the government collect about them.

Tech companies often defend data collection, noting they remove users’ names to “depersonalize” the information, but privacy experts say that’s pretty much bullshit: Location data without a name can easily be pinned to an individual when you see that pin travel between a workplace and a home address. And even if your internet activity is shared under a unique number instead of your name, the goal is to intimately understand exactly who you are, what you like, and what you’ll pay for. 
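
To see how thin that "anonymization" is, consider a minimal sketch of the analysis privacy researchers describe. The coordinates and schedule below are invented, but the logic is the point: the place an "anonymous" device sits overnight is almost certainly a home, and the place it sits on weekday afternoons is almost certainly a workplace.

```python
from collections import Counter

# Hypothetical pings for a single "anonymized" ad ID:
# (hour of day, rounded latitude, rounded longitude). No name attached.
pings = [
    (2, 47.659, -117.426), (3, 47.659, -117.426),    # overnight
    (23, 47.659, -117.426),                          # late evening
    (10, 47.657, -117.412), (14, 47.657, -117.412),  # business hours
]

def top_location(pings, hours):
    """Most frequent (lat, lon) seen during the given hours of the day."""
    spots = Counter((lat, lon) for hour, lat, lon in pings if hour in hours)
    return spots.most_common(1)[0][0] if spots else None

home = top_location(pings, hours=range(0, 6))   # where the phone sleeps
work = top_location(pings, hours=range(9, 17))  # where it spends weekdays

print("Likely home:", home)  # cross-reference property records...
print("Likely work:", work)  # ...and the "anonymous" ID has a name
```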

The good news, privacy advocates say, is that we can avoid a dystopian future in which nothing is private. But getting there will take understanding the many ways that data and technology are already used to violate privacy and civil rights. It will take lawmakers with the willpower to pass strong legislation that ensures actual consent to how our information is used and penalties for those who abuse our trust. And it will take each of us deciding whether the perks are worth the risks.

“People don’t like it — they don’t like being known unless they’ve asked to be known,” said Jennifer King, director of privacy at Stanford University’s Center for Internet and Society. “Companies are banking on the fact that if they keep pushing us toward that world, we’ll just say, ‘Yeah, it is really convenient.’ ”


First of All, We’re Being Tracked

At this point in the digital age, many Americans realize they’re being tracked in one way or another, whether by companies or governments, even if they don’t know just how detailed that tracking is. 

Seven years ago, whistleblower Edward Snowden revealed that the United States doesn’t just spy on the rest of the world. The federal government also tracks its own citizens through the National Security Agency, which maps cell phone locations, reads people’s emails, and monitors internet activities.

Then, about two years ago, former employees of tech company Cambridge Analytica revealed to lawmakers in the United States how the firm used Facebook surveys to secure thousands of data points about every American voter. Even voters who hadn’t signed up for the personality tests were captured in the scraped data, which was used to create highly targeted ads for “persuadable” voters to help Donald Trump’s 2016 presidential campaign. The company focused specifically on swaying persuadable voters in certain precincts, which helped flip a few key states in his favor, as detailed in the documentary The Great Hack.

Now, as contact-tracing efforts are becoming widespread for COVID-19, the world has received its latest reminder that many companies far less recognizable than Google, Apple, Amazon, Facebook, or Microsoft are purchasing and using our location data all the time. 

With much of the world sheltering in place for weeks to slow the spread of the deadly virus, people quickly turned their attention to places that weren’t taking aggressive measures. Florida, for instance, was playing host to spring break partiers in mid-March, and dozens who traveled to the beaches there later tested positive for COVID-19.

Just how far those travelers could have spread the virus became clear in late March, when location-data and mapping companies Tectonix GEO and X-Mode Social created a visualization showing how thousands of phone users who spent time on a single Florida beach traveled across much of the United States over the following two weeks.

Public reaction was mixed. Some found the map to be a helpful tool to show how easy it is for the virus to spread, underlining the importance of social-distancing measures. Others questioned how the companies obtained the data and called it terrifying.

The companies replied that they had received consent, noting that they comply with strict data protection policies in California and Europe. But many people don’t realize that when they allow an app to use their location for the service it provides, the app’s maker can also sell that location information to third parties, who use it in “anonymized” applications like the kind that enabled the mapping.

“We definitely understand the concern, but we take every effort to ensure privacy in the data we use,” Tectonix GEO responded to one Twitter user. “All device data is anonymized, and we only work with partners who share our commitment to privacy and security above all! It’s about using data to progress, not to invade!”

But users pointed out that if you can see all the stops a phone makes over the course of two weeks, it’s not truly anonymous.


Contact Tracing: Coming to a Phone Near You

To help public health officials start to reopen the economy, Google and Apple have both announced plans to create opt-in contact-tracing tools for Android and iPhone.

The tracing tools would use your phone’s Bluetooth signal to ping the devices of people around you at coffee shops, grocery stores, and other public spaces. Strangers’ phones would store a number that your phone broadcasts via Bluetooth, and your phone would store the numbers from theirs. The numbers, which phones could generate and rotate regularly, would not be shared with the tech companies; they would be stored on individuals’ phones for a few weeks. Someone who tests positive for COVID-19 could then send an alert that pings every phone that gathered their signal over the past two weeks, letting those people know they may have been exposed.
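
For the technically curious, here is a minimal sketch of that exchange. It is not the actual Apple-Google protocol — the real proposal derives its rotating numbers cryptographically from daily keys — but it shows the basic data flow the companies described, with all tokens invented:

```python
import secrets

class Phone:
    """Toy model of the rotating-number scheme described above. The real
    Apple/Google proposal derives identifiers from daily keys; random
    tokens are used here just to show the data flow."""

    def __init__(self):
        self.sent = []      # numbers this phone has broadcast recently
        self.heard = set()  # numbers received from nearby phones

    def broadcast(self):
        token = secrets.token_hex(16)  # rotates, so it can't track you long-term
        self.sent = (self.sent + [token])[-14:]  # keep roughly two weeks' worth
        return token

    def hear(self, token):
        self.heard.add(token)  # stored locally, never sent to Apple or Google

    def report_positive(self):
        return set(self.sent)  # a positive user publishes only their own numbers

    def exposed_to(self, published):
        return bool(self.heard & published)  # matching happens on the device

# Two phones cross paths at a grocery store and swap numbers:
alice, bob = Phone(), Phone()
bob.hear(alice.broadcast())
alice.hear(bob.broadcast())

# Alice tests positive and publishes her numbers; Bob's phone finds a match.
print(bob.exposed_to(alice.report_positive()))  # True
```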

Without that type of tool and more extensive testing, experts warn that the only other way to prevent deaths from spiking again until there is a vaccine is to extend the stay-at-home orders that plunged more than 22 million Americans into unemployment in March and April. 

While the tool could allow more people to return to their routines, the American Civil Liberties Union (ACLU) warns that cell phone location data aren’t perfect, and that if they were used to enforce quarantines for those who have come into contact with the virus, phones would essentially be turned into ankle monitors.

“The challenges posed by COVID-19 are extraordinary, and we should consider with an open mind any and all measures that might help contain the virus consistent with our fundamental principles,” the ACLU said in a statement in response to the proposals. “At the same time, location data contains an enormously invasive and personal set of information about each of us, with the potential to reveal such things as people’s social, sexual, religious, and political associations. The potential for invasions of privacy, abuse, and stigmatization is enormous.”

Mobile Healthcare

Currently, the United States lacks comprehensive legislation to protect the vast amounts of personal data created on our devices every day, from the type of pictures you like to the number of steps you walk. 

A patchwork of federal privacy protections outlines rules for things like sharing healthcare data, banking information, and credit reports and for collecting information on children under 13. Plus, the Federal Trade Commission enforces consumer protection cases against companies using unfair or deceptive practices. 

“But we don’t have what we think of as a comprehensive law, just a baseline law, that would apply to personal data, who collects it and why they collect it,” said Stacey Gray, senior counsel with the Future of Privacy Forum, a nonpartisan think tank that provides information on commercial privacy issues for policymakers. 

For example, while healthcare information collected by your doctor and other healthcare professionals is protected by HIPAA (the Health Insurance Portability and Accountability Act), the law doesn’t apply to many technologies you may use to track your health.

“People are realizing the same or similar information can be collected from your Apple Watch and other devices, which can see your health or mental state — that is not protected by HIPAA because it is not collected from a healthcare professional,” Gray said. “There are mobile apps that will let you track your pregnancy, your period, dieting.”

In 2019, advocacy group Privacy International published a report on period-tracker apps Mia Fem and Maya, showing that the apps were sharing information with Facebook and third parties. They shared things like whether users were keeping track of their menstruation or fertility, when they last had sex, whether they drank caffeine or alcohol, and when they last masturbated. Even users without a Facebook account had their data shared with the tech giant, the report found.

Similarly, Jezebel reported in February that the therapy app BetterHelp, which is heavily advertised on Facebook and offers therapy sessions with licensed healthcare professionals, tells Facebook when users are in the app, effectively sharing when they’re in therapy sessions. The app passed along users’ intake forms by assigning them a number instead of a name — a method that’s approved by HIPAA, Jezebel notes — giving a research and analytics firm called Mixpanel intimate details on a user’s self-reported sexuality, beliefs, and mental health.

“Mixpanel is the kind of startup that’s omnipresent yet mostly invisible to people who don’t work in tech; it’s used by everyone from Uber and Airbnb to BMW,” Jezebel reports. “Its basic conceit is producing monetizable data out of literally any human behavior: By tracking and cataloguing people’s habits and desires, the theory goes, companies can figure out how to best encourage their users to open an app again and again.”

The implications of health information-sharing could go far beyond the apparent desire to target highly personalized ads. Employer health plans continue to evolve, with some offering employees health-tracking apps and promising insurance discounts for using them. However, privacy advocates warn that insurance companies could eventually charge you more based on your health behaviors, and your employer could see health details like when you’re trying to become pregnant or whether you struggle with certain health conditions.


Is My Phone Listening to Me?

Many people who use social media have had the experience of opening an app and seeing an ad for something they were just talking about with their friends, followed by the odd feeling that their phone has been listening to them.

“People are convinced their microphones are being used or pictures being taken, but by and large those things generally aren’t happening,” explained Serge Egelman, the chief technology officer of AppCensus, a company that tests apps to see what information they collect, how they collect it, and whom they share it with.

Egelman, who also directs research at the International Computer Science Institute, a lab affiliated with the University of California, Berkeley, said the truth is simpler: advertisers know just enough about you to direct relevant ads your way.

“It’s profiling,” he said, “mostly by persistent identifiers.”

A persistent identifier is a unique number tied directly to your device — such as the subscriber number on your SIM card, or the IMEI (International Mobile Equipment Identity) built into the phone’s hardware.

You can think of that like a license plate for your phone, he said.

“By itself, the license plate number is a pretty meaningless piece of information, but if you start recording every place you see it, you can learn a lot about the user’s activities and preferences,” Egelman said. “That’s all made possible by linking that number to the user’s actions and activities. It’s the same way a cookie works.”

But cookies, which similarly track your internet browsing, can be cleared from your browser; until about 2013, mobile phones had no equivalent option, he said. Now, Google and Apple allow users to reset their advertising ID, but if that ID is still collected alongside a persistent identifier like the IMEI, companies can keep tracking your behavior across platforms.
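
A toy example makes the problem plain. Assuming an app transmits both the resettable advertising ID and a hardware identifier like the IMEI — the kind of behavior AppCensus has documented — a tracker can simply stitch the “new” ad ID back onto the old profile. Every identifier here is made up:

```python
# A toy tracker database keyed by a persistent hardware ID (an IMEI here).
profiles = {}  # imei -> {"ad_ids": [...], "events": [...]}

def record(imei, ad_id, event):
    """What a tracker can do when an app sends BOTH identifiers."""
    profile = profiles.setdefault(imei, {"ad_ids": [], "events": []})
    if ad_id not in profile["ad_ids"]:
        profile["ad_ids"].append(ad_id)  # reset ad IDs just accumulate here
    profile["events"].append(event)

record("356938035643809", "ad-id-A", "opened fitness app")
record("356938035643809", "ad-id-A", "searched for running shoes")

# The user resets their advertising ID, hoping for a clean slate...
record("356938035643809", "ad-id-B", "opened fitness app again")

# ...but the IMEI stitches the "new" identity back onto the old profile.
print(profiles["356938035643809"]["ad_ids"])       # ['ad-id-A', 'ad-id-B']
print(len(profiles["356938035643809"]["events"]))  # 3 -- nothing was lost
```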

Through AppCensus, Egelman and other researchers have used Android phones to test tens of thousands of apps. They found that even after the changes meant to let users reset their temporary IDs, most apps were still sending persistent identifiers along with the information they collected.

“The problem is, from the consumer standpoint, there’s no way of knowing when this is happening and when it’s not,” Egelman said. “The average user is not writing their own version of Android to analyze what data is being sent.”

Companies typically defend this type of data collection — using advertising IDs or persistent identifiers — by claiming that the number “de-identifies” the information, separating it from a user’s name and therefore protecting his or her privacy.

“That’s utter bullshit,” Egelman said. “They collect these explicitly so they can augment information about you over time. They’re using it explicitly to identify you.”

You, the single thirtysomething woman who often buys shoes and cat litter. You, the fortysomething married man who wants a riding lawn mower. You, the 60-year-old retiree with an open line of credit at a mid-level retail store who collects Coca-Cola memorabilia. 

Entire companies are devoted to tying your depersonalized data to identifying information that can be found elsewhere — something many people don’t realize, Egelman said.

“The problem is, most regulatory agencies, at least in this country, are complaint-based,” Egelman said. “They rely on consumer complaints. How can you open an investigation based on consumer complaints when consumers don’t even know what’s happening?”


Are Privacy Policies Enough?

So how could people be better protected? Europe’s General Data Protection Regulation, or GDPR, in force since 2018, requires that companies have a legal basis for collecting information and allow people to opt out of having their data shared. But broad language in privacy policies often covers types of data sharing that users can’t fully comprehend, experts say.

“In the consumer area, broadly there are, like, zero restrictions there,” said King, the privacy expert at Stanford’s Center for Internet and Society. “I can track you across multiple platforms, I can track your data and sell it, as long as I tell you in the policy, which people don’t read and is not written to be read.”

The majority of Americans (79%) say they’re concerned about how companies use their data, yet the same Pew Research Center data from late 2019 showed that only about one in five Americans usually read through the privacy policies that grant companies broad use of their data.

King said she’s often asked what individuals can do to protect their privacy, but there’s very little you can do as one person to protect yourself against the biggest threats.

“It’ll probably require industry-level solutions or legislated solutions, as opposed to flipping some knobs on your cell phone,” she said. “That’s the fundamental problem.”

Plus, for users to opt out, they need to know which companies have their data, Egelman said.

“The dirty secret for that is the companies themselves don’t know who they’re sharing the data with,” he said. 

Advertisers collect information so dynamically — in the very moment that people are using apps — that many companies would likely have a hard time accounting for how that data was shared, he said.

It’s important to recognize the limitations that exist for consumers and push for informed consent, he said. That includes knowing the full context of how the data you choose to share may be passed on. If consumers agree to share their location with a weather app, they likely expect that location to be used only to pull up the local forecast. Any secondary use of that location information should require consent, not just fall under an umbrella privacy policy that no one is actually going to read, Egelman said.

“What I would like to see is that people have enough information to make informed decisions,” he said.


Smart Assistants and the Internet of Things

Concerns about smart home assistants differ from worries that smartphones are secretly listening: if you’ve bought a device like Amazon’s Alexa or Google Home, you likely understand that, on some level, it needs to be listening in order to hear its wake-up command.

To have Alexa turn off your lights or read you a recipe, the smart speaker first needs to catch the magic words that indicate you want her to do something. But as smart assistants rolled out in recent years, it wasn’t initially clear just how easily those devices would accidentally pick up audio they weren’t meant to hear — or that the audio would be listened to by other people.

After consumers complained of odd behaviors with Alexa, the most popular smart assistant, it was revealed that recordings captured by the devices are sent to Amazon, where employees listen for the sounds and phrases that trip up the system in order to improve its accuracy. But as you can imagine, some recordings made in error captured snippets of private conversations and even people having sex.

“From a privacy standpoint, what a disaster,” King said.

It would’ve been easier if Amazon had first asked people to opt in and share their recordings, explaining that they would be used to make the system better, similar to when a computer program crashes and asks for permission to send an error report, she said. Instead, the default setting remains that Amazon can use recordings to improve its service, but users now have the option to opt out.

As many other home devices become more connected, creating the so-called “Internet of Things,” other privacy risks are popping up. 

Some smart TVs now include microphones and cameras that could be hacked by stalkers or the government to watch people in their living rooms and bedrooms. Less nefariously, most smart TVs collect every detail of what you watch to target show suggestions and ads. 

Amazon’s Ring doorbell security system widely shares videos with law enforcement if users agree, raising questions about how those images could be used for other purposes, like facial recognition. The company also shares user information with third parties, sending the full name, email address, and number of devices a user owns to the analytics firm Mixpanel, according to a January report from the Electronic Frontier Foundation, a nonprofit that fights for civil liberties. In 2019, hackers exposed vulnerabilities in the system by gaining access to the cameras and using the built-in speaker to talk to children in their homes.

While many systems offer some way to opt out of their tracking, King noted that consumers should assume their devices will default to the broadest possible sharing of their data.


Facial Recognition

Americans learned of another wide-reaching privacy overreach early this year, when The New York Times reported on a company called Clearview AI. Clearview had built a massive database of photos scraped from public posts on social media and across the web, powering a facial recognition tool that allows users to find out who someone is — and even links back to the original posts.

The Times reported that the tool was being used by hundreds of law enforcement agencies and was more comprehensive than any recognition tool created by the government or other Silicon Valley companies.

“The tool could identify activists at a protest or an attractive stranger on the subway, revealing not just their names but where they lived, what they did, and whom they knew,” the Times reported, noting just a few of the potential implications of such a tool.

Face recognition by law enforcement is, for the most part, very loosely regulated, which leads to significant issues, according to research by the Georgetown University Center on Privacy and Technology.

In some cases, police departments have used photos of celebrities they claim look somewhat like a suspect to search for matches. In others, departments have uploaded composite sketches, which led to matches with people who looked far different from the eventual suspect connected with the crime, the center reports. 

In one case highlighted in the center’s “Garbage In, Garbage Out” report, the New York Police Department wasn’t getting any matches with a photo of a black man whose mouth was wide open, so the department Googled “black male model” and edited another man’s closed lips onto his face to try to find a match, said Jameson Spivack, a policy associate with the Georgetown center.

“You can see, first of all, fabrication of evidence and, second of all, the racial implications of this thing,” Spivack said. “It’s really wild the kinds of things they’ve done.”

Importantly, face recognition gives the government a power it has never had before, Spivack said.

“In 2015, police in Baltimore County used face recognition on the Freddie Gray protesters to locate, identify, and arrest people who had unrelated outstanding arrest warrants,” Spivack said. “This is a politically protected demonstration, and without the protesters being aware of it, the police were using facial recognition to identify people with completely unrelated warrants and target them from the crowd.”

The technology also struggles with accuracy, with issues in identifying people of color, women, and younger people, he said. With no regulations to audit systems for accuracy, errors can persist.

Some states enter driver’s license photos into face recognition databases, while others include only mugshot photos. When the Georgetown center researched how widespread databases were in 2016, they found that about 54% of Americans were included in at least one database, Spivack said. 

“A majority of Americans are subjected to face recognition,” he said. “It’s very likely that has increased, but we have no way of knowing.”

Washington State passed facial recognition legislation this year that Microsoft has been pushing in other states around the country, Spivack said. The law requires government agencies to write an accountability report before using the technology, have a policy for external information sharing, and train officers in its proper use.

The law also requires a warrant for ongoing or real-time surveillance, but all other uses are allowed, which is troubling, Spivack said. Trying to identify someone with the technology constitutes a search, he argued, and should require probable cause.

“One way to think about this is if you’re in a face recognition database, you’re essentially in a perpetual lineup,” he said. “You’re always a suspect who could come up. A lot will say, ‘Well, I didn’t commit a crime.’ It’s not really about that. It’s more, ‘Does an error-prone, biased technology think you committed a crime?’ Then you have to worry.” 

Until the kinks in the technology are worked out and proper protections of constitutional rights are codified, the center and other privacy rights groups are advocating that states implement a moratorium on the use of facial recognition.


Meaningful Legislation

Europe’s General Data Protection Regulation, which took effect in May 2018, is the strictest data protection policy in the world. It requires companies to inform users of what data will be collected and how it will be used, while also allowing users to edit or delete some types of data. On request, companies must provide users with all the data they have on them.

Companies that don’t comply with those and other rules can be fined millions of dollars. 

Many want to push for something similar or even more protective in America.

Currently, California is the only state to have passed a similar level of protection, with the California Consumer Privacy Act. This year, Washington State, home to tech giants Microsoft and Amazon, came close to passing an even more protective measure than California’s, called the Washington Privacy Act, which would have required companies to conduct risk assessments and allow people to edit or delete their data. But the measure failed when lawmakers couldn’t agree on how it should be enforced: one contingent wanted the state Attorney General’s office to be solely responsible for enforcement, while the other also wanted a private right of action, which would let individuals sue violators directly.

Privacy advocates, including the American Civil Liberties Union of Washington, point out that the act was also full of loopholes, and it would have prevented local jurisdictions from passing more protective legislation.

“It was astonishing to see all the places where rights that were listed were circumvented by exemptions,” said Jennifer Lee, the technology and liberty project manager for ACLU Washington. “How can you say consumers actually have meaningful rights if they’re not enforceable and undermined by a laundry list of loopholes?”

While state legislation can fill an important vacuum in data protection laws, Washington State Senate Majority Leader Andy Billig (D-Spokane) said he thinks federal standards would better protect all citizens.

“While I think Washington is generally a leader in technology and consumer protection, and it would make sense for Washington to be a leader in this area, ultimately federal legislation would be the best, so there’s one standard throughout the country,” Billig said. 

Washington politicians are also leading on the issue at the federal level. Sen. Maria Cantwell (D-Washington) introduced the Consumer Online Privacy Rights Act (COPRA) with Democratic leadership in late 2019. The act would ensure, among other things, that people around the country have the right to: access their data and see how it’s being shared; control the movement of that data; delete or correct their data; and take their data to a competing product or service. It also provides a private right of action against violators. But many who work in privacy say proposed rules like COPRA, and even the GDPR, don’t go far enough, because they require people to opt out instead of opting in.

Lee said protective legislation requires two major questions to be answered: For what purpose is your data being collected, and is it collected with your consent?

“You might not know how you’re hemorrhaging your data, or who has it, but when aggregated and combined with different data sets, that can really reveal a very intimate picture of your life,” Lee said. “And if it’s not adequately protected, it can be used to discriminate against anyone in critical decisions, like our healthcare, housing, education, or loans. It’s something everyone should be worried about.” 


A version of this article first appeared in the Inlander, a weekly based in Spokane, Washington.
