The term artificial intelligence, along with its frequent companion machine learning, says a lot about this type of technology. It refers to the concept of machines being able to analyze data and, over time, get better at understanding the world and their users.
We use the technologies produced by artificial intelligence daily. Notice how the autocorrect on your phone starts to recognize words not in the dictionary, just because you use them repeatedly? That’s machine learning. And there are numerous other examples of how artificial intelligence allows machines (and their makers) to learn all about you, your interests, your politics, your health, and more.
7 ways AI tech learns way too much about you
1. Facial recognition
Who uses it: law enforcement, border control
It has been established that AI-powered facial recognition is now so effective that you can be identified even if your face is partly obscured by, for example, sunglasses or a face mask. Where this gets tricky is its use in mass surveillance, specifically the fear that individuals could be identified and apprehended without due process.
In 2020, American facial-recognition company Clearview AI was widely condemned for gathering over 3 billion images, sourced from publicly available platforms like Facebook, Twitter, and Flickr, and using them to develop a facial-recognition app. This was then sold to over 2,400 law-enforcement agencies across the U.S.
Hoan Ton-That, co-founder of Clearview AI, stated that the tool was strictly for use by law enforcement and that it’s “fair game to help law enforcement solve crimes using publicly available data.” Evidently, the rest of the world was not convinced.
China’s Skynet mass surveillance program—yes, its actual name—began development in 2005 and is currently one of the world’s most sophisticated facial-recognition programs. Where it’s different from Clearview is that it uses live video feeds from an estimated half a billion security cameras across the country.
Despite varying degrees of video surveillance quality, Skynet has a purported accuracy rate of around 99.8%. What could be more invasive, you ask? Facial recognition sunglasses. China has also begun using sunglasses with facial recognition capabilities built into them to provide law enforcement officers with instant ground-level access. This not only provides far superior video resolution, but suspects can be identified in as little as 100 milliseconds.
This is especially scary when you consider that AI-assisted surveillance systems have, in some instances, produced higher false-positive rates when identifying minorities.
2. Voice recognition
Who uses it: virtual assistants, medical practitioners, military
Voice recognition is perhaps a more pervasive threat, given how easy smartphones make audio recording. Smartphones also have an advantage (disadvantage?) over CCTV cameras in this regard: they are virtually everywhere, and physically much closer to us. At this distance, at least, you can adjust the app permissions on your smartphone.
Earlier this year, several musicians and human rights groups pressed Spotify to abandon plans for a tool that made music recommendations based on voice recognition. The technology, patented by Spotify, would take input from a user’s voice and surrounding noise in order to make recommendations based on age, gender, accent, and mood. Unsurprisingly, the technology was deemed “creepy” and “invasive”.
3. Autocorrect
Who uses it: smartphones
On paper, autocorrect is great. It’s designed to make your communication more efficient, can help with typos when you’re in a hurry, and has the added benefit of teaching you how to spell difficult words.
When iOS 13 was released back in Q3 2019, iPhone users across the globe reported a decline in the quality of the operating system’s autocorrect feature. Complaints included random and unnecessary capitalizations, out-of-context words being recommended or substituted, and strange multilingual suggestions.
Some speculated that, with less access to user information due to its increased focus on privacy, Apple was unable to train its AI as accurately. In other words, the decrease in autocorrect quality could be tied to an increase in privacy. Google, on the other hand, is able to produce far more accurate suggestions precisely because it gathers and analyzes more user information.
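The learning loop behind a personalized autocorrect can be sketched in a few lines. This is a toy model, not Apple’s or Google’s actual implementation: a simple frequency counter that starts treating a word as valid once you’ve typed it often enough.

```python
from collections import Counter

class ToyAutocorrect:
    """Toy user dictionary: a word becomes a valid suggestion
    after it has been typed a minimum number of times."""

    def __init__(self, threshold=3):
        self.counts = Counter()
        self.threshold = threshold

    def observe(self, word):
        # Each time the user types (and keeps) a word, count it.
        self.counts[word.lower()] += 1

    def is_learned(self, word):
        # More observed data means more confident suggestions --
        # the same privacy trade-off described above.
        return self.counts[word.lower()] >= self.threshold

ac = ToyAutocorrect(threshold=3)
for _ in range(3):
    ac.observe("ExpressVPN")
print(ac.is_learned("expressvpn"))  # True
print(ac.is_learned("skynet"))      # False
```

The threshold is the knob: lower it and the keyboard adapts faster but learns more about you; raise it and you get more privacy at the cost of accuracy.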
4. Predictions
Who uses it: streaming services, e-commerce sites
Comparisons to Minority Report aside, the concept behind predicting “the future” through AI is remarkably simple, at least insofar as a system can be trained to recognize input, aggregate that data, and make predictions based on it. The more source data entered for analysis, the more accurate the prediction becomes. For example, you’ve just finished The Office, you want to watch something similar in tone, and Peacock recommends Parks and Recreation. Sounds good, right?
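That aggregate-and-predict loop can be sketched with a bare-bones item-similarity recommender. The shows and ratings below are invented for illustration, and real services use far richer signals, but the principle is the same: the more viewing data, the more reliable the similarities.

```python
import math

# Toy viewing data: user -> {show: rating}. All names and numbers
# here are made up for illustration.
ratings = {
    "ann": {"The Office": 5, "Parks and Recreation": 5, "Severance": 2},
    "ben": {"The Office": 4, "Parks and Recreation": 4},
    "cam": {"Severance": 5, "The Office": 1},
}

def item_vector(show):
    # Represent a show by every user's rating of it (0 if unseen).
    return [ratings[u].get(show, 0) for u in sorted(ratings)]

def cosine(a, b):
    # Cosine similarity between two rating vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

def recommend(just_watched, shows):
    # Rank the other shows by how similarly users rated them.
    others = [s for s in shows if s != just_watched]
    return max(others, key=lambda s: cosine(item_vector(just_watched), item_vector(s)))

shows = ["The Office", "Parks and Recreation", "Severance"]
print(recommend("The Office", shows))  # Parks and Recreation
```

With only three viewers the similarities are noisy; with millions, they become unsettlingly precise, which is the whole point of this section.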
I’d rather the algorithm show a recommendation that doesn’t fit my tastes. It’s the correct predictions that are discomfiting.
Way back in 2012, an AI-trained algorithm successfully predicted a high school student’s pregnancy before her family knew, which came as a shock when coupons for baby-related products arrived in the mail. Forbes reported that each Target customer is assigned an ID number linked to their name, email, or credit card. Every subsequent purchase is logged, along with external information procured from “other sources.” From the collected data, Target was able to establish a “pregnancy prediction” score that could accurately estimate not only the likelihood that a shopper was pregnant but also her due date.
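A score like that can be imagined as a weighted sum over purchase signals squashed into a probability. Everything below — the features, the weights, the bias — is invented for illustration; Target’s actual model has never been made public.

```python
import math

# Hypothetical purchase signals and weights, invented for
# illustration only. The real model and its features are not public.
WEIGHTS = {
    "unscented_lotion": 1.2,
    "prenatal_vitamins": 2.5,
    "large_tote_bag": 0.4,
    "wine": -1.5,
}
BIAS = -1.0  # baseline: most shoppers score low

def prediction_score(purchases):
    """Logistic score in (0, 1): higher means the toy model guesses
    the shopper is more likely to be pregnant."""
    z = sum(WEIGHTS.get(item, 0.0) for item in purchases) + BIAS
    return 1 / (1 + math.exp(-z))

print(round(prediction_score(["prenatal_vitamins", "unscented_lotion"]), 2))  # high
print(round(prediction_score(["wine"]), 2))                                   # low
```

Each purchase nudges the score up or down; past a threshold, out go the coupons. The creepiness isn’t in the math, which is trivial, but in the data being collected at all.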
5. Virtual assistants
Examples: Google Assistant, Amazon Alexa, Apple’s Siri
Whether it’s a domestic robot like Rosie from The Jetsons, computers in Star Trek, or KITT from Knight Rider, the idea of an artificial assistant has been in the collective consciousness throughout the 20th century via popular media.
A study conducted by the University of Waterloo concluded that people are more likely to engage with anthropomorphized—humanlike—virtual assistants. With an increased resemblance in tone and intonation to humans, modern virtual assistants are slowly blurring the lines between humans and robots. In fact, they’re getting so good at learning how to interact with humans that people are beginning to fall in love with AI assistants.
Think about that for a second… AI virtual assistants are so good at learning the nuances of interacting with humans that people are developing romantic attachments to them.
6. Hyper-targeted ads
You’ve probably heard, or experienced, these stories countless times:
- You were researching a product that you wanted to buy and found yourself being served an ad on your social feed for related products or other products from the same brand
- You had a conversation with someone about a topic that you very rarely discuss, and were subsequently served relevant ads on your social feeds based on that exact topic
Dr. Alvin Lee, director of the Master of Marketing at Deakin University in Melbourne, has said that sentiment about this type of hyper-targeted marketing comes down to how people perceive privacy. He goes on to state that, these days, most people have a reasonable expectation that their online habits and personal information are being monitored and analyzed by a variety of parties.
Across the board, social-media platforms are increasingly using AI in a variety of ways to interact with users, including detecting hate speech, predicting fashion trends, and AI-trained gesture recognition for augmented reality. Where it gets tricky is when the posts and ads recommended on social feeds are tailored to individual political affiliations, slowly pushing users into ideological echo chambers.
Each major social media platform has some form of AI trained to recognize and thwart hate speech, but there have been situations where that has backfired. One notable example involves Antonio Radić, a Croatian chess player, who had his YouTube account blocked for “harmful and dangerous” content. Radić’s potential use of the phrase “black vs. white” may have been the tipping point for YouTube’s algorithm. Although his channel was reinstated less than 24 hours later, the episode highlights the shortcomings of machine learning in understanding human behavior.
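The chess incident shows why context-free matching fails. A minimal sketch of such a filter (the flagged phrases are invented for illustration, and real systems are far more sophisticated) trips over perfectly innocent chess commentary in exactly the same way:

```python
# Toy context-free moderation filter. The phrase list is invented
# for illustration; real classifiers are more complex but can still
# make this category of mistake when they lack context.
FLAGGED_PHRASES = ["black vs. white", "attack", "threat"]

def is_flagged(text):
    text = text.lower()
    return any(phrase in text for phrase in FLAGGED_PHRASES)

# Innocuous chess commentary trips the filter:
print(is_flagged("In this game it's black vs. white, and black's attack wins"))  # True
print(is_flagged("Welcome back to my chess channel"))                            # False
```

Without a model of what the words are about, “attack” on a chessboard and “attack” in a threat read identically.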
7. Lip reading
Who uses it: law enforcement, medical practitioners
A company in Belfast called Liopa has billed itself as the world’s only startup focused on “automated lip reading via visual speech recognition.” Through a partnership with the Lancashire Teaching Hospital Trust, Liopa has developed SRAVI, or Speech Recognition App for the Voice Impaired. The software is designed to convert silent lip movement into text, helping healthcare workers better understand patients who have difficulty speaking, including those who have undergone tracheostomies, laryngectomies, and/or extended periods of intubation. Outside of medicine, applications for SRAVI include video surveillance when no audio is available and, conversely, situations with overwhelming noise pollution, such as team video conferencing.
Meanwhile, forensic lip reading refers to the practice of speechreading to assist in gathering evidence, for example, engaging a lip reader to help determine the content of a conversation in a surveillance video without audio. In 2016, Google’s DeepMind, working with the University of Oxford, used machine learning on over 5,000 hours of BBC programs to create a lip-reading system so accurate that it outscored professional lip readers. Now imagine that being used across a mass surveillance system.
You know what they say about loose lips.