“ALEXA! Who murdered her?” was a headline that piqued everyone’s curiosity back in 2019. It all began in Florida, where a man named Adam Reechard Crespo was charged with murdering his girlfriend, Silvia Galva. The police weren’t entirely convinced by Crespo’s side of the story, and investigators seized a pair of Amazon Echo speakers from the crime scene, hoping that recordings from the voice assistant would give the investigation some momentum. This wasn’t the first time Alexa recordings had been treated as evidence: in the 2015 Bates–Collins murder case, a court had already subpoenaed Amazon to hand over recordings from an Echo device.
With that said, this whole scenario undoubtedly sends a few jitters down the spine. The multinational giants that make these voice assistants claim that the devices record conversations only after hearing the “wake word”, but what happens in the backend remains uncertain to this day. Are the Alexas and Cortanas of the world turning into the nosy neighbours they were never meant to be, eavesdropping on every single conversation without an individual’s actual consent?
In this context, the privacy of our data, and the way companies handle it, becomes highly questionable. A recent thriller, “Kimi”, is built around a voice assistant that records and archives a conversation which then becomes the key piece of evidence in a murder. The film’s director, Steven Soderbergh, has said he has always been pessimistic about and distrustful of voice AI.
I have spoken at length about the questionable integrity of voice AI, but does the story end there? The answer is “no”. Voice assistants account for only a tiny fraction of the data that is captured about us and transferred to sources elsewhere, and the legitimacy of those sources is unknown.
The proliferation of the internet and, more recently, the Internet of Things (IoT) are indeed signs of digital evolution, but they also raise certain red flags. The recent Arab International Cybersecurity Summit (AICS) in Bahrain brought together stalwarts of the data privacy domain from all over the world, and its overarching theme was how the increased use of AI could place us under the threat of data hijacking and personal attacks in the future.
We must all be conscious of the fact that we leave a digital footprint everywhere. This trail of data becomes the primary channel for online behavioural targeting and advertising. AI-centred Marketing 5.0 has propelled the adoption of personalised marketing, and these techniques have, over the years, become a unique selling proposition for many businesses, helping them enhance the customer experience. Type “how do I” into the Google search bar and watch Google Autocomplete magically complete your query. The suggestions vary from person to person: for one it may be “How do I fix my computer”, for another “How do I contact Amazon customer support”. Google generates these autocomplete results from our previous search history and other signals.
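To make that a little more concrete, here is a deliberately simplified, hypothetical sketch (in Swift) of history-based autocomplete: past queries are counted, and suggestions are the most frequent ones matching the typed prefix. Google’s real system blends far richer signals such as aggregate trends, location and language; the type and method names below are purely illustrative.

```swift
// Hypothetical, minimal model of history-based autocomplete:
// a user's past queries are counted, and suggestions are the most
// frequent past queries that start with the typed prefix.
struct AutocompleteModel {
    private var frequency: [String: Int] = [:]

    // Record one past search query.
    mutating func record(_ query: String) {
        frequency[query.lowercased(), default: 0] += 1
    }

    // Return up to `limit` suggestions for the typed prefix, most frequent first.
    func suggest(prefix: String, limit: Int = 3) -> [String] {
        let p = prefix.lowercased()
        return frequency
            .filter { $0.key.hasPrefix(p) }
            .sorted { $0.value > $1.value }
            .prefix(limit)
            .map { $0.key }
    }
}

var history = AutocompleteModel()
history.record("how do I fix my computer")
history.record("how do I fix my computer")
history.record("how do I contact amazon customer support")
print(history.suggest(prefix: "how do I"))
// ["how do i fix my computer", "how do i contact amazon customer support"]
```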
My phone is lying idle, and I am not fidgeting with it as I usually do, but one thought keeps me on tenterhooks. Several data authorities have reported that even when our mobiles and laptops are lying around unused, large chunks of the data stored on these devices are being transported to unsought sources in countries like China, all because we unknowingly give our consent to many apps and websites. Another intriguing question comes to mind: how often do we read the privacy statements on websites and apps before we tick “I ACCEPT”? Giving consent to notices we have not understood is definitely an invitation to unwanted threats to our security.
That said, we cannot place the flaw with the user in every situation. These privacy notices are often unintelligible and difficult for ordinary people to interpret, and on top of that, time and the urgency of getting to the information are huge constraints.
I am reminded of another AI- and IoT-powered invention that has been rising in popularity in recent years: the “smart home”. According to Mordor Intelligence, the smart home market was valued at $79.13 billion in 2021 and is predicted to reach $313.95 billion by 2027, a compound annual growth rate (CAGR) of 25.3%. The idea of a smart home seems absolutely smart, but it also brings a few privacy concerns. In 2017, Vizio, an American company that manufactures and sells televisions, was fined $2.2 million by the Federal Trade Commission. The FTC discovered that Vizio had been tracking its customers’ viewing habits through its smart TVs: the sets captured a selection of on-screen pixels and matched them, second by second, against a database of TV, film and commercial content. This information was subsequently sold to third-party corporations for use in customer-targeted advertising. To make matters worse for Vizio, the data collection feature was enabled by default and customers were not given a choice to opt out, meaning the majority were unaware of the activity.
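For intuition, the sketch below shows, in heavily simplified and hypothetical form, the kind of automatic content recognition (ACR) technique at play: a few on-screen pixel values are turned into a fingerprint and looked up in a reference database of known content. Real ACR systems use robust perceptual fingerprints and enormous databases; everything named here is illustrative only.

```swift
// Hypothetical, heavily simplified sketch of automatic content recognition (ACR).
// Note: Swift's Hasher is randomly seeded per run, so this toy database is only
// valid within a single execution; real systems use perceptual fingerprints.
struct ACRMatcher {
    // Maps a fingerprint to a known piece of TV, film or advertising content.
    let referenceDatabase: [Int: String]

    // Collapse a pixel sample into a single fingerprint value.
    static func fingerprint(of pixelSamples: [UInt8]) -> Int {
        var hasher = Hasher()
        hasher.combine(pixelSamples)
        return hasher.finalize()
    }

    // Identify what is on screen, if its fingerprint is in the database.
    func identify(pixelSamples: [UInt8]) -> String? {
        referenceDatabase[Self.fingerprint(of: pixelSamples)]
    }
}

// Pretend these bytes were sampled from the screen during one second of playback.
let frameSample: [UInt8] = [12, 200, 45, 90, 180, 33]
let database = [ACRMatcher.fingerprint(of: frameSample): "Commercial #4521"]
let matcher = ACRMatcher(referenceDatabase: database)
print(matcher.identify(pixelSamples: frameSample) ?? "unknown content") // Commercial #4521
```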
In this context, I am reminded of Europe, the one continent that no longer wanted any organisation taking the data security of its citizens for a ride. In 2018, the General Data Protection Regulation (GDPR), adopted by the European Parliament and the Council of the European Union, came into effect. It compels enterprises to respect EU individuals’ personal data and privacy in transactions within EU member states, and non-compliance can cost businesses dearly. In the same year, another privacy law, the California Consumer Privacy Act (CCPA), was passed; it shares many similarities with the GDPR.
Apart from these regulations, Apple has introduced its groundbreaking App Tracking Transparency feature, which restricts the ability of social media companies to collect and use consumers’ personal data. With this feature in place, many users are opting out of tracking on their phones, a shift reported to have cost companies like Facebook over $12.8 billion in 2022. Google, meanwhile, is working on phasing out all third-party cookies in Chrome by 2024, an initiative intended to make the user experience less intrusive and to nudge marketers towards less invasive advertising approaches.
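For readers curious about what App Tracking Transparency looks like from a developer’s side, below is a minimal sketch of the standard permission request using Apple’s AppTrackingTransparency framework; the wrapper function and print statements are my own illustrative additions. Unless the user explicitly allows tracking, the advertising identifier handed back to the app is all zeros, which is exactly what breaks cross-app ad tracking.

```swift
import AppTrackingTransparency
import AdSupport

// Hypothetical wrapper; the framework calls inside are Apple's real API (iOS 14+).
func requestTrackingPermission() {
    ATTrackingManager.requestTrackingAuthorization { status in
        switch status {
        case .authorized:
            // The user opted in: the real IDFA is available for ad attribution.
            let idfa = ASIdentifierManager.shared().advertisingIdentifier
            print("Tracking allowed, IDFA: \(idfa)")
        case .denied, .restricted, .notDetermined:
            // The user opted out (or never decided): the IDFA is zeroed out.
            print("Tracking not permitted")
        @unknown default:
            print("Unrecognised authorisation status")
        }
    }
}
```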
With all these far-reaching laws and company initiatives, data privacy is set to become the new USP of many businesses. According to Gartner research, 73% of marketers fear that privacy concerns will negatively impact their analytics efforts, yet with consumers growing more apprehensive about their data security, the marketing narrative is slowly shifting to value it. Many companies are stepping up their data privacy game, which is commendable, but we still have a long way to go.