How to ethically train AI is a question that has emerged over the past several years as artificial intelligence has become a larger part of the media landscape.
To say that there is no easy answer is to put it lightly. Creators, understandably, are wary of handing their hard work over to a machine that ostensibly seeks to replace them.
On top of that, companies seem somewhat unwilling to shell out the big bucks that might be necessary to train AI on professional-grade creative work. But that hasn’t stopped some from going ahead and doing it anyway. We’ve covered a couple of the lawsuits, most notably Getty’s, and we think this kind of thing will become more common over the next year or so as the curtain is pulled back on how these models are trained.
In the meantime, creators in the United Kingdom have resoundingly rejected a proposal that would have them share their work with AI developers in pursuit of a better form of artificial intelligence, The Guardian reports.
In a statement quoted by the outlet, they said: “Rights holders do not support the new exception to copyright proposed. In fact, rights holders consider that the priority should be to ensure that current copyright laws are respected and enforceable. The only way to guarantee creative control and spur a dynamic licensing – and generative AI – market is for the onus to be on generative AI developers to seek permission and engage with rights holders to agree licences.”
Essentially, if AI developers want to use material for training, they need to license it just as anyone else currently would: in other words, the status quo with regard to creative rights.
Any thoughts that you might have on how to ethically train AI models are welcome in the comments.
We have some more photography news for you to read at this link.