A new report from NewsGuard, a company that tracks misinformation on the internet, cites examples of how TikTok users are likely to encounter misinformation on the platform. While users of all ages go to the app for recipes, dance routines, and funny videos, searching for more serious topics may not always lead to accurate information.
Searching for videos on current events, including climate change, COVID-19 vaccines, and Russia’s invasion of Ukraine, is likely to surface misinformation. The NewsGuard study found that about 20% of the videos TikTok suggested in response to these searches contained inaccurate information.
The report also points to the terms that auto-populate on TikTok when users search for information on COVID-19 vaccines. The search term “COVID vaccine” yielded the suggestions “COVID vaccine injury” and “COVID vaccine exposed,” both of which may lead to videos containing misinformation. By contrast, searching for “COVID vaccine” on Google led to prompts about booster shots and health care facilities.
This is particularly concerning given TikTok’s primary audience: young people. It can be difficult for consumers of any age to discern what’s accurate and what’s not, but access to legitimate information – especially on important topics – is crucial.
In a statement, representatives from TikTok said the company removes misinformation from the platform. Its community guidelines state that misinformation of any kind is not tolerated and that videos containing inaccurate information will be taken down.
A bigger social media problem
Since the start of the COVID-19 pandemic, social media has been a breeding ground for misinformation. In the last few years, Facebook, YouTube, and Twitter have all had issues related to spreading misinformation about COVID-19 and the vaccines.
By mid-May 2020, nearly 30% of the most-viewed YouTube videos about the pandemic contained misinformation. Entertainment news outlets were the biggest source, accounting for about 30% of those videos, which had garnered over 62 million views by that point.
After similar problems on Twitter, the company began monitoring all tweets related to the virus, flagging those containing misinformation about COVID-19 and the vaccines. By early March 2021, the platform had removed over 8,400 tweets and flagged over 11.5 million accounts.
More recently, misinformation about so-called abortion reversal pills has spread on nearly every social platform. Following the Supreme Court’s decision overturning Roe v. Wade earlier this summer, social media posts about abortion reversal treatment began gaining traction.
However, the treatment has not been proven safe or effective, and leading health care organizations have warned about the dangers of taking such pills. Posts like these make it difficult for consumers to know what’s true and what’s not, further clouding important, potentially life-threatening choices.
Spotting misinformation on social media can be difficult. While typos or grammatical errors can be obvious signs that a post isn’t to be trusted, not every inaccurate post is so easy to identify. Experts encourage social media users to report and flag any posts they believe contain misinformation.
Consumers need to be cautious and discerning when scrolling through their feeds, paying close attention to the source of each post. Look for posts from recognized experts in the relevant field, from original creators rather than re-posters, and with current information – re-posted items that are several years old are unlikely to be accurate.
Doing more research is never a bad thing. Because misinformation can be hard to spot, taking the time to seek out answers from credible sources outside of social media is the best way to ensure you’re getting the facts.