In an online world flooded with hyper-realistic AI-generated images and deepfake videos, it’s becoming increasingly difficult to distinguish real from fake. As generative AI tools improve, so does the ease with which deceptive or fabricated content can be created and shared. This raises an urgent question: How can we know when a photo or video is real?
Fortunately, a range of industry-backed solutions is emerging to address this problem. These efforts share a common goal: embedding cryptographic, tamper-evident metadata into photos and videos at the point of capture or editing, making them verifiable by anyone, anywhere.
What Is Media Provenance?
"Media provenance" refers to the traceable history of a piece of content—who created it, when, where, and how it may have been altered. Just as a food label shows the ingredients and origin, a media provenance label would show whether an image is an original camera capture, an AI creation, or an edited version of another file.
The idea is simple: if a photo is real, it should be able to prove it.
The Content Authenticity Initiative and C2PA
One of the most advanced efforts to build this capability is Adobe's Content Authenticity Initiative (CAI). Alongside partners like Microsoft, BBC, and Intel, CAI helped launch the Coalition for Content Provenance and Authenticity (C2PA), which developed an open technical standard for attaching cryptographically signed metadata—called Content Credentials—to digital media.
When you capture an image or edit it in a C2PA-compliant app, a manifest is generated describing who created it, what device was used, what edits were made, and whether any AI tools were involved. This information is cryptographically signed and embedded in the file or stored alongside it. Viewers can later verify the signature to confirm authenticity.
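The hash-sign-verify flow described above can be illustrated with a toy sketch. Real C2PA manifests are signed with public-key certificates using COSE and X.509 rather than the shared-secret HMAC used here, and the manifest fields below are simplified stand-ins, but the core idea is the same: bind a signature to both the asset's hash and its edit history, so any change to either is detectable.

```python
import hashlib
import hmac
import json

SECRET = b"demo-signing-key"  # stand-in for a device's private key (real C2PA uses X.509 certs)

def sign_manifest(image_bytes, edits):
    """Build and sign a simplified manifest recording the asset hash and edit list."""
    manifest = {
        "asset_hash": hashlib.sha256(image_bytes).hexdigest(),
        "edits": edits,
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    signature = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return manifest, signature

def verify(image_bytes, manifest, signature):
    """Check the signature AND that the asset still matches its recorded hash."""
    payload = json.dumps(manifest, sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    sig_ok = hmac.compare_digest(signature, expected)
    hash_ok = manifest["asset_hash"] == hashlib.sha256(image_bytes).hexdigest()
    return sig_ok and hash_ok

photo = b"\x89PNG...raw image bytes..."
manifest, sig = sign_manifest(photo, ["crop", "exposure +0.3"])
print(verify(photo, manifest, sig))              # True: asset untouched
print(verify(photo + b"tamper", manifest, sig))  # False: pixels changed after signing
```

Note that verification fails either when the pixels no longer match the signed hash or when the manifest itself has been edited, which is what makes the metadata tamper-evident rather than merely informative.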
Adobe has already integrated Content Credentials into Photoshop, and Microsoft is using them to verify political content and news media during elections. This technology is also being used in cameras.
Cameras That Capture the Truth
In 2023, Leica released the M11-P, the first consumer digital camera with built-in Content Credentials support. When photographers enable the feature, the camera signs each photo at the moment of capture using a secure chip. This signature can later be verified to ensure the image hasn't been tampered with.
Nikon followed suit with plans to integrate similar technology in the Z6 III, in partnership with photo agencies like AFP. Truepic, a startup focused on authenticity, has worked with Qualcomm to embed similar features into smartphones at the hardware level.
The result: we are entering an era where cameras themselves can prove that a photo is real.
Verifying Video and Real-Time Capture
Authenticity isn't just about images. Videos, too, are vulnerable to editing and deepfakes. Some solutions aim to secure video footage at the moment of capture using real-time hashing and timestamping. For example, Amber Authenticate, developed by the startup Amber, hashes each video frame and anchors the hashes on a blockchain, allowing any viewer to verify whether the footage has been altered.
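The per-frame hashing idea can be sketched as a simple hash chain: each frame's digest incorporates the previous one, so editing, dropping, or reordering any frame breaks every subsequent link. This is an illustrative construction, not Amber Authenticate's actual protocol; in a real system, the resulting digests (or periodic checkpoints of them) are what would be anchored on a blockchain.

```python
import hashlib

def chain_frame_hashes(frames):
    """Return a hash chain over raw frame bytes: each entry commits to
    the current frame plus the previous hash, so tampering with frame i
    invalidates entries i, i+1, ... to the end."""
    prev = b""
    chain = []
    for frame in frames:
        digest = hashlib.sha256(prev + frame).digest()
        chain.append(digest.hex())
        prev = digest
    return chain

original = chain_frame_hashes([b"frame0", b"frame1", b"frame2"])
tampered = chain_frame_hashes([b"frame0", b"FAKE!!", b"frame2"])

print(original[0] == tampered[0])  # True: frame 0 is untouched
print(original[1] == tampered[1])  # False: divergence pinpoints where tampering began
```

Comparing the two chains not only detects alteration but localizes it: the first index where the chains diverge is the first tampered frame.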
Other efforts, such as Project Origin and Starling Lab, focus on journalistic integrity by ensuring a trusted chain of custody for news footage. This often involves blockchain anchors, cryptographic signatures, and verification portals.
Introducing the Trust Badge
Having this metadata is only part of the solution. Users also need a way to quickly know whether content is authentic. Enter the Content Credentials "nutrition label"—a user-facing badge that indicates whether an image or video comes with verified provenance.
Adobe, Microsoft, and others have developed verification tools where users can inspect this information. For instance, Microsoft’s Content Integrity Portal allows voters and journalists to verify campaign media during elections.
Platforms may soon begin displaying a badge or icon to indicate when media carries verified credentials—a visual cue of trustworthiness.
JPEG Trust and ISO Standardization
To ensure interoperability, the JPEG Committee (part of ISO/IEC) has developed a framework called JPEG Trust. This emerging international standard outlines how to link secure metadata with images to detect tampering. It's designed to complement C2PA and ensure that a unified system of trust can span different devices and formats.
A Path Toward Global Adoption
These initiatives represent a coordinated effort across industries: tech companies, camera manufacturers, media outlets, and standards bodies are aligning on how to authenticate digital content. The vision is that within a few years, most new photos and videos will come embedded with verifiable authenticity metadata by default.
The benefits are enormous. For journalists, it safeguards credibility. For citizens, it provides clarity amid misinformation. For history, it ensures that the digital record remains trustworthy.
In a future dominated by synthetic media, authenticity will be a feature, not an assumption. Thanks to these new standards and technologies, we may soon regain our ability to trust what we see.
To learn more, visit contentauthenticity.org or explore C2PA.org for technical details and tools.