In 2025, trust is no longer a given. With the rapid proliferation of AI-generated imagery, videos and audio, the line between “real” and “fabricated” is becoming dangerously blurred. Organizations, media outlets and individuals now face a world where seeing or hearing is increasingly not believing. The key question becomes: how can we build and maintain trust in this environment of synthetic content? In this article, we explore the forces reshaping trust, the risks and opportunities, and concrete strategies for businesses and leaders to respond.

Why Trust Is Eroding in the Age of AI-Generated Content
In recent years, several converging trends have undermined trust in digital content:
- The volume of deepfake files has surged from around 500,000 in 2023 to an estimated 8 million by 2025.
- Fraud attempts linked to voice- and video-based deepfakes have grown massively; North America, for example, saw a 1,740% spike between 2022 and 2023.
- A global survey of more than 48,000 people across 47 countries found that only about half were willing to trust AI systems, even as adoption increases.
- Oversight and regulation lag behind the technology: the International Telecommunication Union noted that the global ecosystem lacks robust verification standards for AI-generated media.
The upshot: for brands, journalists, public institutions and even interpersonal interactions, authenticity can no longer be assumed. Trust must be proactively built and guarded.
The Stakes for Businesses and Institutions
For companies, news organisations and public stakeholders, the erosion of trust poses multiple risks and strategic imperatives.
Risk of brand damage and financial exposure
When fabricated or manipulated content circulates and is attributed (rightly or wrongly) to a brand, the cost can be high: reputational damage, legal liability and loss of customer confidence. A widely reported case involved a French woman being defrauded of about €830,000 after being manipulated into a fake online relationship built on deepfake videos.
Businesses must treat the risk of synthetic manipulation as a key part of their enterprise-risk management.
Consumer trust and brand differentiation
As the University of Melbourne and KPMG International study shows, trust in AI systems correlates strongly with whether people believe the technology is designed in their interest (r ≈ .54).
That means companies that are transparent, ethical and trustworthy with AI have a competitive advantage.
Global and regional implications
The challenge is especially acute in regions with lower digital literacy or weaker regulatory infrastructures. A study addressing “low-tech environments” found that communities in the Global South are particularly vulnerable to deepfake-driven rumours and destabilising content.
Thus, multinational enterprises and media organisations must tailor their trust strategies for global contexts.
Key Pillars for Building Trust in AI-Driven Media
Let’s examine four core pillars that organisations should prioritise to build and maintain trust in this complex environment.
1. Transparency & Provenance
One of the most fundamental trust-builders is revealing how content was generated, processed and verified.
- The Content Credentials standard developed by Adobe, Google, Microsoft and others aims to embed metadata showing how media was edited or generated.
- Research finds that “warning labels” on AI-generated content can significantly increase user awareness that the content may be synthetic.
- Best practice: for any AI-generated asset (video, image or audio), attach a “how it was made” tag, a timestamp, the author and any relevant disclaimers.
- Example: a major news outlet publishes a video that uses a voice clone and clearly marks it “AI voice clone used for illustrative purposes”.
Actionable tip: Create and publicise a “media-authenticity policy” that explains how your organisation tags, labels and verifies synthetic content, and share that policy with your audience. A minimal sketch of one such “how it was made” tag follows below.
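To make the “how it was made” tag concrete, here is a minimal Python sketch that writes a JSON provenance sidecar next to an AI-generated asset. It is not the Content Credentials (C2PA) format itself; the field names, file naming and hashing scheme are illustrative assumptions, not a prescribed standard.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def write_provenance_sidecar(asset_path: str, generator: str, author: str,
                             human_edits: str, disclaimer: str) -> Path:
    """Write a simple 'how it was made' JSON sidecar next to an AI-generated asset.

    Illustrative stand-in for full Content Credentials metadata: it records the
    tool used, a timestamp, the responsible author and a disclaimer, plus a
    SHA-256 hash of the file so later tampering can be spotted.
    """
    asset = Path(asset_path)
    digest = hashlib.sha256(asset.read_bytes()).hexdigest()
    manifest = {
        "asset": asset.name,
        "sha256": digest,
        "generated_with": generator,      # e.g. the model or tool used
        "human_edits": human_edits,       # plain-language summary of edits
        "author": author,
        "created_utc": datetime.now(timezone.utc).isoformat(),
        "disclaimer": disclaimer,
    }
    sidecar = asset.parent / (asset.name + ".provenance.json")
    sidecar.write_text(json.dumps(manifest, indent=2))
    return sidecar

# Usage (file name and field values are illustrative):
# write_provenance_sidecar("launch_video.mp4",
#                          generator="internal video model",
#                          author="Brand Studio",
#                          human_edits="colour grade, captions",
#                          disclaimer="AI voice clone used for illustrative purposes")
```

A sidecar file is only one option; the same manifest could be embedded in the asset itself or published alongside it on a verification page.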
2. Verification & Detection Tools
Trust thrives when there are mechanisms to confirm or challenge authenticity.
- The ITU, a UN agency, has urged deployment of detection tools and media-authentication standards.
- For brands, this means investing in systems that can detect manipulated content, monitor brand impersonation (voice/face), and respond swiftly.
- Example: one social platform, YouTube, has rolled out a tool allowing creators to scan for unauthorised deepfakes of their likeness.
Actionable tip: Partner with providers that specialise in deepfake detection, and set up a workflow for rapid response when synthetic content misuses your brand or an executive’s identity; a simple triage sketch follows below.
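To illustrate what such a rapid-response workflow might look like, here is a minimal triage sketch in Python. The `DetectionResult` structure, the `synthetic_score` field and the 0.8 escalation threshold are all assumptions for illustration; a real detection provider’s API and your escalation policy will differ.

```python
from dataclasses import dataclass

# Assumed threshold above which a flagged asset is treated as likely synthetic.
ESCALATION_THRESHOLD = 0.8

@dataclass
class DetectionResult:
    """Hypothetical output from a third-party deepfake-detection provider."""
    asset_url: str
    synthetic_score: float            # 0.0 = likely authentic, 1.0 = likely synthetic
    impersonated_identity: str | None = None

def triage(result: DetectionResult) -> str:
    """Route a flagged asset through a simple rapid-response workflow."""
    if result.synthetic_score >= ESCALATION_THRESHOLD and result.impersonated_identity:
        # Likely impersonation of a brand or executive: notify legal/comms,
        # file a platform takedown request and log the incident.
        return "escalate: takedown request + legal/comms notification"
    if result.synthetic_score >= ESCALATION_THRESHOLD:
        return "review: queue for human analyst confirmation"
    return "monitor: keep on watchlist, no action"

# Example: a flagged clip that appears to clone an executive's voice.
print(triage(DetectionResult("https://example.com/clip.mp4", 0.93, "CEO")))
```

The point of the sketch is the routing logic, not the scoring: detection scores should feed a documented escalation path with named owners, not sit in a dashboard.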
3. Education & Digital-Literacy Programs
Even the best technical toolset won’t suffice if audiences cannot interpret signals of authenticity or risk.
- UNESCO emphasises that education must go beyond detection, helping users understand why content is misleading and how AI mediates uncertainty.
- Internal training is equally important: a global workforce needs awareness of synthetic-media risks, especially in communications, marketing and compliance functions.
- Example: A financial-services firm runs a quarterly “Deepfake awareness” module for senior executives and brand ambassadors.
Actionable tip: Develop a digital-literacy curriculum for both internal teams and client audiences that includes case-studies of synthetic media misuse and best practices for content verification.
4. Governance, Ethics and Accountability
Trust ultimately depends on the integrity and values of the organisation producing or using AI-driven media.
- The KPMG/University of Melbourne study shows that people’s belief in an organisation’s intentions (whether it “uses AI for good”) significantly affects trust (r ≈ .41–.63, depending on the region).
- Regulation is strengthening: e.g., the Digital Services Act (EU) and other frameworks are pushing disclosure and accountability of AI-generated content.
- Example: An international media company sets up an internal “Synthetic Content Review Board” to evaluate use cases of AI media and ensure ethical compliance before publication.
Actionable tip: Establish a governance body charged with oversight of AI media generation, with clear ethical guidelines, audit trails and a public commitment to transparency; a sketch of a simple audit-trail record follows below.
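As one way of keeping such an audit trail, the sketch below records a hypothetical Synthetic Content Review Board decision in an append-only JSON Lines log. The field names, decision labels and log format are illustrative assumptions, not a prescribed standard.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class SyntheticContentReview:
    """One audit-trail entry from a (hypothetical) Synthetic Content Review Board."""
    asset_id: str
    use_case: str                 # e.g. "marketing video with AI voice clone"
    reviewers: list[str]
    decision: str                 # "approved", "approved_with_labels", "rejected"
    required_labels: list[str]
    reviewed_utc: str

def record_review(entry: SyntheticContentReview,
                  log_path: str = "synthetic_review_log.jsonl") -> None:
    # Append-only JSON Lines log: simple to write, easy to audit later.
    with open(log_path, "a", encoding="utf-8") as log:
        log.write(json.dumps(asdict(entry)) + "\n")

# Illustrative entry for an AI-generated campaign asset.
record_review(SyntheticContentReview(
    asset_id="campaign-2025-014",
    use_case="AI-generated product video",
    reviewers=["ethics lead", "legal counsel", "brand director"],
    decision="approved_with_labels",
    required_labels=["AI-generated imagery", "Content Credentials attached"],
    reviewed_utc=datetime.now(timezone.utc).isoformat(),
))
```

Whatever the format, the essentials are the same: who reviewed the asset, what was decided, what labelling was required, and when.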
Case Studies – Trusted vs Distrusted Use of AI Media
Case A: Trusted use – a global brand builds credibility
A multinational consumer-electronics brand launched a marketing campaign using an AI-generated video. Rather than hiding the process, they opened a microsite explaining how the video was generated (which models were used, what human edits were applied, metadata attached). They included a “how to verify” link using Content Credentials. Results: higher audience trust scores and fewer complaints about authenticity.
Case B: Distrusted use – brand impersonation scam
In 2024-25, deepfake videos showing prominent public figures endorsing fake investment schemes appeared on social media platforms. These caused investor losses, regulatory scrutiny and brand damage for the platforms where they were hosted. The absence of provenance, delayed takedown and lack of transparency magnified the problem.
Forward Outlook – What to Expect and How to Prepare
Trend 1: Increasing realism and volume
As AI models become more powerful, the effort needed to produce “real-looking” synthetic content drops. Detection will grow harder, and the volume will keep rising.
Trend 2: Proliferation of regulation and disclosure standards
Expect more jurisdictions to require AI-generated content to carry labels or metadata. Organisations must prepare.
Trend 3: Demand for “source certainty” will become a brand premium
Brands that can certify the authenticity of their content and the origins of their data and media will gain a competitive edge.
Trend 4: Global divergence in trust and literacy
Emerging economies may adopt AI at higher rates (88% in some markets) but may also face greater trust deficits and risk exposure.
Companies operating globally will need region-specific trust strategies.
Conclusion
In an era where deepfakes and AI-generated content proliferate, building trust is no longer optional; it is strategic. Trust must be constructed on four pillars: transparency, verification, education, and governance. By adopting clear policies, embedding provenance in media assets, training audiences (including internal stakeholders) and holding to high ethical standards, organisations can not only protect themselves from risk but also differentiate themselves through credibility. As the digital landscape evolves, trust will become the new competitive differentiator. The time to act is now.