For media companies and content producers, the rise of AI raises a number of pressing concerns. One of the most essential: how can we tell AI-generated photographs, videos, or musical compositions apart from human-made ones? Digital watermarking may be the answer, following President Biden's announcement yesterday that seven major technology companies had agreed to take voluntary steps to control their AI technologies.
In particular, Google and OpenAI agreed to develop watermarking systems to help identify content produced with their AI tools.
Digital watermarking takes its name from the faint patterns embedded in paper during manufacture, visible when the sheet is held up to the light. Such marks have been used for centuries to establish the provenance and authenticity of paper; most paper currency today, for instance, carries some form of watermark.
This technology is neither new nor uncommon. Digital watermarking techniques first became popular in the 1990s, and today they are widely used across a range of content categories, including movies screened in cinemas, stock photographs, e-books, and digital music files sold online.
They are often employed to trace the sources of allegedly pirated content. Today's content-production programs, such as Adobe Photoshop, include add-ons that let users embed invisible watermarks.
A strong watermarking method makes it exceedingly difficult to remove the payload without degrading the content, and is durable enough to survive transformations such as screen captures of images or analog recordings of digital music. As a result, a watermark payload's data capacity is quite limited, usually just a few hundred bits.
This is where watermarking comes into play for AI. Generative AI tools can easily be modified to embed a watermark in the material they create. The payload can point to an entry in an online registry containing data such as the name of the AI tool, the date and time, the identity of the person who used it, and possibly details about how, or whether, the user was involved in producing the content.
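The idea of a small payload hidden imperceptibly in content, pointing to a registry entry, can be sketched in a few lines. The example below is a deliberately simplified toy using least-significant-bit embedding in raw pixel values; real watermarking schemes are far more robust to edits and recompression, and the registry ID, tool name, and user fields here are all hypothetical.

```python
# Toy sketch of invisible watermarking via least-significant bits (LSB).
# Not robust like production schemes; it only illustrates how a small
# payload (e.g. a registry ID) can ride invisibly inside content.

def embed(pixels, payload: bytes):
    """Return a copy of `pixels` with `payload` hidden in the low bits."""
    bits = [(byte >> i) & 1 for byte in payload for i in range(8)]
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # overwrite only the lowest bit
    return out

def extract(pixels, n_bytes: int) -> bytes:
    """Read back `n_bytes` of payload from the low bits of `pixels`."""
    out = bytearray()
    for b in range(n_bytes):
        byte = 0
        for i in range(8):
            byte |= (pixels[b * 8 + i] & 1) << i
        out.append(byte)
    return bytes(out)

# Hypothetical online registry keyed by the embedded payload:
registry = {b"GEN-0042": {"tool": "ExampleGen", "user": "alice",
                          "time": "2023-07-21T10:00Z"}}

pixels = [127] * 1024                    # stand-in for grayscale image data
marked = embed(pixels, b"GEN-0042")      # embed an 8-byte registry ID
# No pixel value changes by more than 1, so the mark is invisible:
assert max(abs(a - b) for a, b in zip(pixels, marked)) <= 1
print(registry[extract(marked, 8)]["tool"])  # -> ExampleGen
```

Note the trade-off the article describes: each payload bit here consumes one pixel, so even a large image carries only a modest payload, and a pointer into an external registry stretches that small capacity much further than embedding the metadata itself.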
The latter data is crucial, for instance, in determining whether the user qualifies as an “author” of the material under copyright law. AI tool makers can release watermark-reading software publicly, so that anyone can check any piece of content they wish.