Security and Privacy Concerns

Tech News

China-based doorbell manufacturer sued over security issues

Feds probe Eken and other Chinese manufacturers for potential privacy and data security violations


The Federal Communications Commission (FCC) has proposed a $734,872 fine against Eken, a Hong Kong-based smart home device maker, for providing false U.S. agent information during the equipment authorization process.

The action comes after a Consumer Reports (CR) investigation uncovered major security vulnerabilities in Eken’s doorbell devices.

The FCC is auditing certifications tied to the same U.S. agent information as Eken. The agency's Enforcement Bureau is investigating...


Latest Articles

  1. Roku and Instacart now offering on-screen ordering and one-hour delivery of products
  2. AI isn't always as smart as we think it is
  3. Hurricane Helene may cause problems for the semiconductor industry
  4. CNN online? That will be $3.99
  5. DirecTV acquires Dish and Sling TV for $1


    Recent Articles


    Some mental health apps are still causing headaches

    How can you sniff out the nastier apps before you download them? An expert shares their tips.

    As demand for mental health services continues to rise, the latest round of the Mozilla Foundation’s *Privacy Not Included research finds that many app developers still need to shape up, despite earlier warnings.

    The foundation has slapped 59% of the mental health apps it studied with *Privacy Not Included warning labels because they fail to safeguard an app user’s privacy and protect their data. 

    “Our main goal is better protection for consumers, so we were encouraged to see that some apps made changes that amount to better privacy for the public,” said Jen Caltrider, a privacy researcher and consumer privacy advocate and Mozilla’s *Privacy Not Included team lead. 

    “And sometimes all that had to be done to make those positive changes was to ask the companies to do better. But the worst offenders are still letting consumers down in scary ways, tracking and sharing their most intimate information and leaving them incredibly vulnerable. The handful of apps that handle data responsibly and respectfully prove that it can be done right.”

    The good and the bad

    After developers took heed of the previous Mozilla mental health app study, there were some good – as well as some are-you-kidding-me – results. Almost a third of the apps, including Youper, Woebot, PTSD Coach, and the AI chatbot Wysa, made improvements over their 2022 performance. Those last two received a “Best Of” citation, which Mozilla uses to spotlight the apps doing privacy and security the right way.

    One piece of bad news was that an astonishing 40% of the apps researched got worse in the last year. One that Mozilla researchers found troubling was Replika: My AI Friend, an app downloaded 10 million times on Google Play and, according to its description, “millions” more on the Apple App Store.

    The analysts called Replika one of the worst apps they have ever reviewed because of its weak password requirements, sharing of personal data with advertisers, and its recording of personal photos, videos, and voice and text messages consumers shared with the app’s chatbot. 

    Another scary app was Cerebral. It set a new mark for the number of trackers: 799 within the first minute of download. Plus, the foundation charged that several others — Talkspace, Happify, and BetterHelp — couldn’t wait to get their hands on a user’s private information, reportedly pushing consumers into taking questionnaires up front without asking for consent.

    "They claim collecting your information will help them deliver you a better service, but ... they aren’t using your personal information to help you feel better, they are using your personal information to make them money."

    How to carefully choose a mental health app

    Given that there are plenty of pitfalls embedded in the mental health apps Mozilla reviewed, the smart money is on finding out what those are before downloading one. ConsumerAffairs asked Caltrider and Lucas Hamrick, CEO of ORE Sys, for their best practices in this regard.

    Read the “About this App” section on the app stores. Both emphasized the importance of clicking on the little arrow beside the ‘About This App’ section of the listing prior to downloading it.

    "Then I go down to the bottom of that box and find the word ‘Permissions’ and click on the link under there. That tells me what app permissions the app wants to use,” Caltrider said.

    “If an app that offers me tips for weight loss wants to know all my contacts, that seems weird. Or if an app that says it can help me recognize songs wants access to my microphone, okay, that makes sense. But if it asks for access to my camera, I’m like, ‘nah,’ you don’t need that.”

    Hamrick says that if an app implies it can do anything "health" related, the information the developer provides should also cover any therapeutic methods used within the app. "These methods should complement any recognized therapeutic practices in your mental health and wellness journey," he commented.

    Read the privacy policy for what information is collected. Privacy policies are dense documents full of gobbledygook, but Caltrider suggests that app users scan for the most telling information: what personal info is collected, how it’s used, and who it’s shared with or sold to. It doesn’t take long to see whether an app triggers their “creepy” senses, she said.

    Check trusted resources. Her last suggestion is to check whether a trusted source like Mozilla’s *Privacy Not Included or Common Sense Media has reviewed the app. Common Sense is a good source for parents because it also reviews app concerns like sex, nudity, drinking, smoking, and violence.

    “There are people out there doing work like this to help consumers. Use us, we’re here to help!” she said.



    Mining cryptocurrency uses a 'disturbing' amount of energy, lawmakers say

    The issue can impact consumers’ utility bills

    A group of U.S. senators is raising concerns about the environmental impacts of cryptomining and asking the Environmental Protection Agency (EPA) and the Department of Energy (DOE) to work together to require cryptominers to report their emissions and energy use.

    “The results of our investigation… are disturbing… revealing that cryptominers are large energy users that account for a significant – and rapidly growing – amount of carbon emissions. Our investigation suggests that the overall U.S. cryptomining industry is likely to be problematic for energy and emissions,” wrote Sens. Elizabeth Warren (D-MA), Sheldon Whitehouse (D-RI), Edward J. Markey (D-MA), Jeff Merkley (D-OR) and Representatives Jared Huffman (D-CA) and Rashida Tlaib (D-MI).

    “But little is known about the full scope of cryptomining activity. Given these concerns, it is imperative that your agencies work together to address the lack of information about cryptomining’s energy use and environmental impacts, and use all available authorities at your disposal…  to require reporting of energy use and emissions from cryptominers,” wrote the lawmakers.

    Cryptomining requires a lot of energy

    Connecting cryptocurrency to energy usage is a difficult thing for most people to wrap their heads around. After all, most people just think of buying and selling cryptocurrencies on a phone or computer.

    However, cryptocurrencies like Bitcoin have a massive carbon footprint and require a lot of energy to produce. Bitcoin’s method of verifying transactions requires a sea of computers to solve complex mathematical problems, and those computers need energy to drive those processes.
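    The energy cost comes from brute-force guessing. A toy proof-of-work sketch in Python illustrates the idea (this is an illustration only, not Bitcoin’s actual protocol; the `mine` function and the difficulty value are invented for the example):

```python
import hashlib

def mine(block_data: str, difficulty: int) -> tuple[int, str]:
    """Brute-force a nonce until the SHA-256 hash of the block data
    starts with `difficulty` zero hex digits. Every failed guess is
    a wasted computation -- which, at scale, is wasted electricity."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1

# Each extra zero of difficulty multiplies the expected work by 16.
nonce, digest = mine("block #1", difficulty=4)
print(nonce, digest)
```

    Real mining networks run this kind of loop trillions of times per second on specialized hardware, which is why total consumption reaches nation-scale levels.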

    The senators noted that the total annual global electricity consumption associated with the two largest cryptocurrencies – Bitcoin and Ethereum – is comparable to the electrical usage of the entire United Kingdom for one year. Findings show that these mining activities resulted in almost 80 million tons of carbon dioxide emissions in 2021.

    All of this cryptomining activity directly impacts how much consumers pay for electricity. According to a recent study, “the power demands of cryptocurrency mining operations in upstate New York push up annual electric bills by about $165 million for small businesses and $79 million for individuals.”

    The senators aren’t giving the DOE and EPA much time to come up with some answers to this issue. The agencies have until August 15 to lay out their plans on how they will require reporting about cryptomining’s energy use and environmental impact. Agency officials will also have to answer a series of questions about their collective ability to monitor the situation going forward.



    Samsung phone owners experience slowdown of more than 10,000 apps

    The company says it’s on the case, but it didn’t say when a fix will be issued

    Samsung phone owners are up in arms over concerns that internal performance limits built into their devices are responsible for throttling more than 10,000 apps. 

    The issue is reportedly connected to Samsung’s Game Optimizing Service, and it is being investigated by the company.

    The apps that have been affected include a variety of games and information apps from bigger companies like Nintendo, Netflix, YouTube, and Facebook. Several of Samsung’s own apps, like Samsung Pay and Samsung Pass, have experienced slowdowns.

    Reports suggest that the likely motive behind the throttling is an attempt by Samsung to improve battery life.

    This isn't Samsung's first throttling brouhaha. In 2018, the company was fined millions of dollars for slowing down smartphones through software updates.

    Samsung looks to address the issue

    In a statement given to The Verge, Samsung’s Kelly Yeo said the company plans to roll out a software update “soon” so users will have control of an app’s performance, not the company. 

    “Our priority is to deliver the best mobile experience for consumers,” Yeo said in defense of the company's Game Optimizing Service. “GOS has been designed to help game apps achieve a great performance while managing device temperature effectively. GOS does not manage the performance of non-gaming apps.”

    Samsung phone owners will have to be patient, though. The company did not offer a timeline as to when the update will be available.



    Florida’s efforts to protect consumer data fail

    Gov. Ron DeSantis’ effort to give ownership of mined data back to consumers has encountered a roadblock

    Efforts to minimize the often-unchecked power of Big Tech to use consumers’ personal data have taken a blow in Florida. 

    Gov. Ron DeSantis’ Consumer Data Privacy crusade to give consumers back ownership of the data companies mine came to a screeching halt Friday, when the Florida state legislature failed to reach a consensus on where to draw the line. At issue was how much of a person’s private data Big Tech should be allowed to gather and repurpose.

    “We started an important conversation about data privacy for Floridians and took strong first steps toward common sense changes,” Rep. Fiona McFarland, sponsor of the Florida House version of the legislation, told the Sun-Sentinel.

    “Each session there are dozens of important issues that we debate and consider in a short 60-day window. This is the nature of the legislative process, and I look forward to continuing the good work on this complicated issue in the next session,” she said.

    Doomed from the beginning

    Despite the good intentions, DeSantis’ effort seemed doomed out of the gate. Not only did he have lobbying groups supporting Big Tech to contend with, but lawmakers wrestled with four other major issues. 

    One was that the federal government should be addressing the issue, not an individual state. Another issue was whether or not individual Floridians could sue companies like Google and Facebook when they don’t adhere to the law. The third was that in the Florida Senate’s version of the bill, some of the Big Tech companies would’ve been exempted and given safe harbor. The fourth major hurdle was the enormous cost that companies would face in order to comply with the law.

    Will techlash continue?

    While DeSantis may not have gotten his wish, he and Florida are not alone in the fight to protect consumers' private data. California was able to pull off a statewide privacy act, and U.S. senators like Sen. Marco Rubio (R-FL) have introduced legislation such as the American Data Dissemination Act.

    The day will no doubt come when lawmakers find a way to secure, say, a Facebook user’s data. It’s just a matter of time, but there are already signs that things are turning in the consumer’s favor. As an example, Apple started to distance itself from its Big Tech peers two years ago when the company said it doesn’t want consumers’ personal data. Then, the company followed up on that promise when it rolled out new App Store privacy labels giving users more information about what data apps have on them.

    “Our products are iPhones and iPads,” is the message Apple CEO Tim Cook is preaching. “We treasure your data. We wanna help you keep it private and keep it safe. Privacy in itself has become a crisis -- it’s of that proportion.”



    Advocacy group urges Facebook to abandon idea of Instagram for children under 13

    Critics say the ‘image-obsessed’ app isn’t appropriate for a demographic going through rapid developmental changes

    In a letter addressed to Facebook CEO Mark Zuckerberg, the Campaign for a Commercial-Free Childhood expressed opposition to the idea of an Instagram for children under 13 years old. The group claimed the “image-obsessed” social network would have a negative impact on developing young minds, even if it would be “managed by parents” as Facebook promised. 

    The letter, which was signed by 99 groups and individuals around the world, also took issue with the privacy implications of establishing an Instagram for children. 

    “We agree that the current version of Instagram is not safe for children under 13 and that something must be done to protect the millions of children who have lied about their age to create Instagram accounts, especially since their presence on the platform could be a violation of the Children’s Online Privacy Protection Act (COPPA) and other nations’ privacy laws,” the letter said. 

    The group contended that launching a version of the photo-sharing app for children under 13 is “not the right remedy and would put young users at great risk.” 

    Mental health and privacy risks

    The signatories pointed out that Instagram is already used by those under 13, and those users aren’t likely to “abandon it for a new site that seems babyish.” Moreover, the group said the nature of the platform is not suitable for children who are in the midst of such a formative period. 

    “In the elementary and middle school years, children experience incredible growth in their social competencies, abstract thinking, and sense of self. Finding outlets for self-expression and connection with their peers become especially important,” the letter said. “We are concerned that a proposed Instagram for kids would exploit these rapid developmental changes.” 

    Josh Golin, the CCFC’s executive director, added that Instagram’s business model poses inherent risks to kids’ privacy. 

    "Instagram's business model relies on extensive data collection, maximising time on devices, promoting a culture of over-sharing and idolising influencers, as well as a relentless focus on often altered physical appearance,” Golin said. “It is certainly not appropriate for seven-year olds."

    Potential for exploitation

    Although Facebook has said it believes that creating an Instagram for under 13s would help keep them safe on the platform, the CCFC argued that the opposite would be true. 

    Allowing a younger demographic to use the social media platform would tap into their “fear of missing out and desire for peer approval,” which would undoubtedly encourage children and teens to check their devices excessively and share photos with their followers. 

    “The platform’s relentless focus on appearance, self presentation, and branding presents challenges to adolescents’ privacy and wellbeing,” the group said. "Instagram's focus on photo-sharing and appearance makes the platform particularly unsuitable for children who are in the midst of crucial stages of developing their sense of self.

    "Children and teens (especially young girls) have learned to associate overly sexualised, highly edited photos of themselves with more attention on the platform, and popularity among their peers." 

    The highly commercialized nature of the app could also open kids up to being exploited, the letter added. The CCFC said roughly one in every three Instagram posts is an advertisement, according to an analysis by digital monitoring agency Sprout Social.

    The letter was signed by 35 organizations and 64 individual experts, including the Electronic Privacy Information Center, Global Action Plan, and Kidscape.



    Facebook has reportedly readied argument against splitting up Instagram and WhatsApp

    The company plans to claim that a potential breakup would hurt app users

    Facebook appears to have prepared an argument against breaking up Instagram and WhatsApp, according to a document seen by The Wall Street Journal.

    The leaked document suggests that if Facebook were to be ordered by the government to split up its services, the company would argue that a breakup would be a “complete nonstarter.” 

    Lawmakers have contended that Facebook wields too much power in the tech ecosystem, which raises concerns about anticompetitive behavior. The FTC is said to be preparing an antitrust lawsuit before the end of the year, and the House could release its antitrust investigation results in October. 

    Lack of past FTC action 

    If talks of government regulation eventually become an actual plan, Facebook would reportedly argue that splitting up its services would be difficult to do -- and that doing so could hurt the user experience.  

    The company may plan to argue that its acquisitions of Instagram and WhatsApp were approved by the FTC without objections and that it invested a great deal of money in getting the services running as separate-but-integrated systems. 

    “In the paper, Facebook says unwinding the deals would be nearly impossible to achieve, forcing the company to spend billions of dollars maintaining separate systems, weakening security and harming users’ experience,” the Wall Street Journal reported. 

    Facebook would argue that “a ‘breakup’ of Facebook is thus a complete nonstarter,” according to the 14-page document. 

    From a legal standpoint, Columbia University professor and tech policy expert Tim Wu says Facebook’s plan to pin the blame on the FTC for its past approval would be a “weak” defense. Wu said that Facebook’s contention that a breakup would be too difficult would also be a flimsy legal argument. 

    “There is no ‘it’s too hard’ defense,” Wu told the Journal. 



    Senate committee subpoenas executives of major tech companies

    Lawmakers want tech CEOs to testify about their protections from liability

    The Senate Commerce Committee has voted to subpoena the CEOs of Facebook, Google and Twitter over concerns related to Section 230 of the Communications Decency Act.

    Section 230 acts as a liability shield for online companies. In its current state, websites and online services aren’t held liable for what their users post. 

    The committee voted unanimously to subpoena Facebook’s Mark Zuckerberg, Google’s Sundar Pichai, and Twitter’s Jack Dorsey to testify about Section 230 if they refuse to come voluntarily. Chairman Roger Wicker (R-Miss.), who introduced the subpoena, noted that both presidential candidates support reform to Section 230. 

    President Trump took aim at Section 230 over the summer after Twitter fact-checked two of his tweets. He accused the company of engaging in censorship and announced that he would sign an executive order encouraging the FCC to impose new regulations on the provision.

    Democratic nominee and former Vice President Joe Biden has told The New York Times editorial board that Section 230 “should be revoked” and has said he plans to do just that if elected. This week, Biden accused Facebook of failing to prevent the spread of election misinformation.

    Curbing the power of big tech 

    Political beliefs aside, Senator Ted Cruz (R-Texas) argued that it’s “dangerous” to give too much power to tech companies. 

    “Even if you happen to agree with them on a particular issue right now, ceding the power to the star chamber of Silicon Valley is profoundly dangerous,” the lawmaker said. 

    Democrats supported the subpoena but said Congress should avoid creating a “chilling effect” on tech companies currently battling hate speech and COVID-19 misinformation.  

    “What I don’t want to see is a chilling effect on individuals who are in a process of trying to crack down on hate speech or misinformation about Covid during a pandemic,” said Washington Sen. Maria Cantwell, the top Democrat on the committee. 

    "I welcome the debate about 230," she said. "I think it should be a long and thoughtful process. Not sure that a long and thoughtful process will happen before the election, but I understand my colleagues’ desires here today.”



    YouTube turns to AI assistance to place age restrictions on videos

    The platform has struggled to flag all content that may not be safe for children

    YouTube says it will be relying more heavily on artificial intelligence to find videos that may require age restrictions. 

    The Google-owned company has faced criticism over the way it handles content geared toward children. YouTube has maintained that its platform isn’t intended for anyone under the age of 13 due to federal privacy laws. However, young children have continued to use the site, and content creators have continued to create videos aimed at children. 

    Previously, YouTube’s Trust & Safety team was tasked with applying age restrictions when they found a video that they didn’t deem appropriate for viewers under 18 during their reviews. But the process led to some videos slipping through the cracks. 

    Now, YouTube says it will be using AI to weed out videos that warrant an age restriction. This means more viewers will be asked to sign into their accounts to verify their age prior to watching. 

    “Going forward, we will build on our approach of using machine learning to detect content for review, by developing and adapting our technology to help us automatically apply age-restrictions,” YouTube said in a blog post. 

    There may be problems at the start

    YouTube said it’s preparing for some labeling errors while the AI moderation program gets started. 

    “Because our use of technology will result in more videos being age-restricted, our policy team took this opportunity to revisit where we draw the line for age-restricted content,” the video platform stated. “After consulting with experts and comparing ourselves against other global content rating frameworks, only minor adjustments were necessary.”

    The company added that content creators can appeal an age restriction decision if they think it was incorrectly applied.
