As artificial intelligence (AI) continues to advance, distinguishing between real and AI-generated images and videos is becoming increasingly challenging. Microsoft is attempting to address this issue with new media provenance features, which were showcased at the annual Build conference. These features aim to enable users to verify if visual content has been generated by AI.
The media provenance capabilities will be integrated into Bing Image Creator and Designer, Microsoft's Canva-like web app for creating presentations, posters, and other content for social media and other channels. Microsoft says that, using cryptographic methods, AI-generated images and videos will be marked with metadata describing their origin. The marking, however, will not be as obvious as a visible watermark.
Reading the metadata signature requires websites to adopt the Coalition for Content Provenance and Authenticity (C2PA) interoperable specification, a collaborative effort involving Adobe, Arm, Intel, Microsoft, and the visual media platform Truepic. Websites that implement the specification can notify users when images were generated by AI or created with the Designer or Image Creator tools.
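To illustrate the general idea of cryptographically binding provenance metadata to an image, here is a minimal Python sketch. It is not the actual C2PA implementation: real C2PA manifests use CBOR-encoded claims signed with X.509 certificate chains, whereas this toy uses a shared HMAC key, and the key name and function names are hypothetical.

```python
import hashlib
import hmac
import json

# Hypothetical shared signing key for demonstration only; real C2PA
# signatures rely on certificate-based public-key cryptography.
SIGNING_KEY = b"demo-secret-key"

def attach_provenance(image_bytes: bytes, generator: str) -> dict:
    """Build a signed manifest that binds the image's hash to a claim."""
    manifest = {
        "claim": {"generator": generator, "actions": ["created_by_ai"]},
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload,
                                     hashlib.sha256).hexdigest()
    return manifest

def verify_provenance(image_bytes: bytes, manifest: dict) -> bool:
    """Check the signature and that the manifest matches this image."""
    unsigned = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, manifest["signature"])
            and manifest["image_sha256"]
                == hashlib.sha256(image_bytes).hexdigest())

image = b"\x89PNG...fake image bytes"
m = attach_provenance(image, "Bing Image Creator")
print(verify_provenance(image, m))         # True
print(verify_provenance(image + b"x", m))  # tampered image -> False
```

The key point the sketch captures is that the manifest is bound to the exact pixels it describes: altering either the image or the claim invalidates the signature, which is what lets an adopting website trust (or reject) a provenance label.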
Due to roll out in the coming months, these capabilities underscore Microsoft's commitment to transparency in AI-generated media. That matters in an era of growing concern about deepfakes and manipulated imagery, which can seriously damage individuals' reputations and undermine credibility in journalism.
In conclusion, Microsoft's media provenance features are an important step in combating the threats posed by AI-generated content. By letting users identify such material through metadata signatures on websites that adopt the C2PA specification, Microsoft helps foster trust in digital media while keeping pace with advances in AI.