Charlie Puth is no stranger to the world of artificial intelligence (AI). As a participant in a Google AI incubator program, Puth had the opportunity to collaborate with an AI-assisted tool to enhance his creative process. He described the experience as “profound”: the system not only offered lyrical suggestions and styles but even sang the lyrics back in his own voice.
Generative AI, the technology behind these powerful creative tools, is poised to go mainstream. Companies like YouTube and Meta are preparing to release AI-driven tools to the masses. This includes an AI tool that recommends video ideas to creators and an AI “character” that users can chat with, featuring the voices and likenesses of real people.
However, as these AI tools become more accessible, concerns abound. Artists are worried about how their work is being used and whether they will be compensated or protected. The Writers Guild of America strike highlighted these concerns, with AI use being a sticking point in negotiations.
According to YouTube CEO Neal Mohan, the company is focused on addressing these concerns. Mohan cites three core principles: ensuring AI is harnessed in a way that benefits the creative community, giving creatives control and monetization opportunities, and acting responsibly. He acknowledges not only the issues of compensation and copyright but also the potential for misuse and manipulation.
The dual nature of generative AI, both its promise and its risks, is evident. While artists like Charlie Puth find value in collaborating with AI, others prefer to keep their voices far from the training sets. Robert Kyncl, CEO of Warner Music Group, stresses the importance of protecting artists who choose not to engage with AI.
There is a push for regulations as Big Tech encroaches on Hollywood. Music publishers, for example, are advocating for a federal right of publicity law to combat voice mimicry in AI tracks and protect artists’ brands.
Courts are already grappling with the legal implications of AI and copyright. Lawsuits have been filed against AI companies like OpenAI, Meta, and Stability AI, alleging mass-scale copyright infringement over their use of copyrighted works as training data. The question of fair use versus commercial nature is at the forefront of these cases.
Transparency is also an issue, as artists are often unable to prove that their works were used to train AI systems. OpenAI and Meta no longer disclose information about the sources of their datasets, making it difficult for plaintiffs to establish their claims.
Liability can fall on users as well. YouTube does not offer indemnity for the use of its AI tools, leaving users exposed to copyright-infringement claims. Getty Images, by contrast, provides indemnification for users of its AI tool and compensation for photographers whose work was used to train it.
Prominent authors, represented by the Authors Guild, have also entered the legal battle against OpenAI, seeking damages and a requirement to destroy systems trained on copyrighted works.
As AI tools become more prevalent, the industry must navigate the balance between innovation and responsibility. Artists deserve protection and compensation for their work, and regulations may be necessary to ensure their rights are upheld. It is a period of change, and how the industry adapts will shape the future of AI-driven creativity.