The rise of deepfakes, synthetic media in which a person’s likeness in an existing image, video, or audio recording is replaced with someone else’s, poses significant challenges to cybersecurity and information exchange. A recent study conducted by Trend Micro between late June and mid-July 2024 sheds light on consumer awareness, experience, and concerns regarding deepfakes, revealing their broader implications for cybersecurity and public perception.
Awareness and Exposure
The study, which surveyed 2,097 participants from the United States and Australia, aimed to understand consumer comprehension of, exposure to, and concerns about deepfakes. One of the key findings was a high level of awareness among respondents, though with a notable generational gap. The study specifically screened for individuals familiar with the term “deepfake”, leading to a high disqualification rate among those aged 55 and older, especially in Australia. This underscores the need for targeted education initiatives to bridge the digital literacy divide.
Exposure to deepfakes is widespread, with 80% of respondents having encountered deepfake images, 64% having seen deepfake videos, and 48% having heard deepfake audio. Social media platforms are the primary source of these encounters, highlighting the pervasive nature of deepfakes in digital spaces. Interestingly, a significant portion of respondents claimed they could identify deepfakes on their own: 57% for images, 48% for videos, and 45% for audio. This suggests a growing awareness of manipulated content, at least when it is of low quality.
The fact is, however, that as deepfake technology continues to advance, it is becoming much harder for the human eye to reliably detect a deepfake. That is precisely why Trend Micro has just released a FREE tool, Deepfake Inspector, which verifies your video calls in real time. At the same time, remember to follow our best practices to protect yourself from deepfakes.
Perceptions and Concerns
Consumer sentiment towards deepfakes is overwhelmingly negative. The study found that 71% of respondents feel negatively about deepfakes, associating them primarily with fraudulent activities and misinformation. This negative perception is driven by the creation and misuse of deepfakes for malicious purposes, such as identity theft, scams, and the spread of disinformation.
Concerns about deepfake scams are particularly high, with 64% of respondents expressing significant concern about being targeted by deepfake scams. This anxiety persists despite relatively low first-hand (36%) and second-hand (41%) experiences with such scams. The gap between actual experience and perceived threat suggests that media coverage and public discourse may be amplifying anxieties about deepfake-related fraud.
It is important to note that these exposure figures will only grow as deepfakes continue to proliferate across the internet. One worrying development is X/Twitter’s Grok AI, which will soon let users create uncensored deepfakes, risking a disinformation free-for-all. To learn more about deepfake threats, be sure to read and share our “Deepfake Scams to Watch Out for in 2024”.
Verification and Actions
In response to the growing threat of deepfakes, consumers are adopting various methods to verify the authenticity of digital content. The study found that 61% of respondents check other trusted sources to verify if something is a deepfake. This reliance on external verification highlights the importance of having reliable and accessible tools to detect manipulated content. Nonetheless, a worrying 1 in 5 believe they “just know” — despite experiments contradicting this optimism.
There is also a strong desire among consumers for proactive solutions that can alert them to deepfakes and enable them to act. According to the study, 62% of respondents want to be alerted whenever they interact with AI-modified content, and 58% want the ability to report deepfakes to authorities or service providers. These findings indicate a proactive stance among consumers, who are eager to protect themselves and others from the potential harms of deepfakes.
Behavioral Insights and Regional Variations
The study also explored consumer preferences for notifications and security solutions. A significant majority prefer to be alerted whenever they interact with AI-modified content, reflecting a demand for transparency. Consumers also highly value the ability to block scam calls and texts, receive alerts about deepfake content, and distinguish safe communications from scams.
While there were many similarities between US and Australian consumers, some regional differences emerged. US respondents showed a slightly higher preference for solutions from security vendors and ISPs, while Australian respondents placed more emphasis on solutions provided by social media apps. These variations highlight the need for tailored approaches in addressing deepfake concerns across different markets.
Implications and Future Directions
The study’s findings have significant implications for policymakers, technology companies, and consumers alike. The high levels of concern, coupled with widespread exposure to deepfakes, call for enhanced digital literacy programs, especially targeting older demographics, and the development of deepfake detection tools.
As deepfake technology continues to evolve, it is crucial for cybersecurity measures to keep pace. By developing and implementing robust detection and notification systems, we can empower consumers to navigate the digital landscape with greater confidence and security. The findings of this study serve as a call to action for both technology providers and policymakers to prioritize the development of solutions that protect consumers from the growing threat of deepfakes.
Introducing Trend Micro Deepfake Inspector
Deepfake video calls are on the rise, making it hard to know if the person you’re talking with is who they say they are. You could be chatting with a friend, family member, or potential partner, or having an online job interview. Scammers can use these opportunities to impersonate the person you think you are talking to and trick you into giving away money or sensitive information.
Protect yourself with our FREE tool, Trend Micro Deepfake Inspector. Designed for live video calls on Windows PCs, it scans for AI face-swapping content in real time, alerting you if you’re talking with a potential deepfake scammer and protecting you from harm. To learn more about Deepfake Inspector and how it can help you spot people using AI to alter their appearance on video calls, click the button below.
Don’t risk a deepfake disaster — download Trend Micro Deepfake Inspector today! As ever, if you’ve found this article an interesting or helpful read, please SHARE it with friends and family to help keep the online community secure and protected. Also, please consider clicking the LIKE button or sharing your experience in a comment below. Here’s to a secure 2024!