One day after Texas sued Meta over its face recognition software, Alphabet – Google's parent company – announced major changes to how it deals with user privacy.
Specifically, the company said it plans to limit how apps track users on its Android smartphone platform. The move is in line with Apple’s recent restrictions on how advertisers track consumer behavior on iPhones.
Over the next two years, Google said it plans to replace the current system used to identify users with one that is more mindful of users’ privacy. Not coincidentally, the move comes as regulators take aim at Big Tech and as consumers express more concern about privacy.
Walking a fine line
When it comes to regulators, Julie Rubash, chief privacy counsel at data privacy firm Sourcepoint, says the government has to walk a fine line.
“Regulation and enforcement, when it comes to artificial intelligence, require a delicate balance to address the potential harms that could come from misuse of artificial intelligence (AI) without stifling the potential benefits to society that could be gained from this powerful technology,” Rubash told ConsumerAffairs.
Rubash says advances in AI, when applied properly, could give consumers highly personalized product experiences that actually improve lives rather than merely sell products.
“Imagine the benefits of a car that understands and adapts to your driving patterns, medical technology that adapts based on your medical history, and educational tools that understand how your child learns,” she said.
What regulators have to be concerned about, Rubash says, is how this powerful technology could be harmful, such as its potential to lead to exploitation, unfair decision-making, discrimination, or unwanted disclosures.
Looking at the big picture
Heather Federman, chief privacy officer at data intelligence company BigID, agrees that there are many aspects of technology advancements that are positive. She also agrees that regulators need to see the big picture.
“The concern is around what the tech companies deploying these technologies actually do with the data output,” Federman told ConsumerAffairs. “Are they sharing this information with data brokers to make credit and employment eligibility decisions about us? Or are they using it to build even better technologies that benefit society overall?”
The question, says Federman, comes down to how the data is used and whether that use is responsible.
Rubash says Big Tech and regulators should both have the goal of putting control in the hands of the consumer through clear, easy-to-understand, easy-to-exercise decisions.
“Consumers should have the right to decide for themselves whether they want the products they interact with to adapt to their driving habits, medical history, learning styles, and other characteristics and preferences,” she said.