It will be interesting to see if Getty’s business model works out for content creators, and if the generated images become generally usable. It’s early days yet, with more training and refinement needed.
Where AI image generation stands on copyright is quite murky at the moment, with vendors offering differing indemnity promises to their customers.
As a general principle, only humans, not machines, can be granted copyright, a position reinforced by recent rulings in the United States. Ditto for patenting inventions.
However, a human could get creative with an AI and be granted copyright for the effort, provided the use of AI is disclosed and doesn’t comprise a significant part of creating the work. How that could be asserted and contested is a question best left for our fee-earning learned intellectual-property lawyer friends to answer, if they’re not too busy suing AI companies for rights infringement.
On that topic, the writer of this piece would like to apply for journalistic protection, having accidentally created a C-3PO-like image while experimenting with the prompt “could you generate an image of a protocol droid from a classic sci-fi movie?” in an openly accessible AI from a large tech company. Star Bores really isn’t worth getting sued over.
A palpable fear around AI image generators in the past few years is that they can create “non-consensual intimate imagery”, which, if you think about it, is a euphemism for an absolutely terrifying nightmare of abuse and harassment.
It’s an AI-generated dystopia we didn’t need but, humans being humans, it’s a booming business now.
A side-effect of that is that the public-facing AIs are extremely cautious about which prompts they will accept. The filtering is based on American cultural norms and intellectual-property regulation, making it quite difficult at times to work around excessive bowdlerisation, or even to understand why seemingly innocuous prompts trigger a “new topic” reset.
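To see why seemingly innocuous prompts can fall foul of those filters, here’s a deliberately naive sketch in Python. The blocklist and function are invented for illustration and bear no resemblance to any vendor’s actual moderation pipeline; the point is simply that blunt substring matching produces false positives:

```python
# A toy illustration, not any vendor's real moderation code: naive
# substring matching against a blocklist of hypothetical terms.
BLOCKLIST = {"kill", "nude", "weapon"}

def is_blocked(prompt: str) -> bool:
    """Reject the prompt if any blocked term appears anywhere in it."""
    lowered = prompt.lower()
    return any(term in lowered for term in BLOCKLIST)

# A harmless prompt trips the filter because "kill" appears inside
# "skilled": the classic Scunthorpe problem.
print(is_blocked("a skilled potter at her wheel"))    # True, a false positive
print(is_blocked("a protocol droid on a spaceship"))  # False
```

Real moderation systems layer classifiers on top of keyword lists and are far more sophisticated, but from the user’s side the opacity is much the same: the prompt is refused and no reason is given.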
Nvidia chief executive Jensen Huang touched on that earlier this month, when he suggested nations should codify their own culture, history and common sense in the data used to train AIs. As the leader of the third-largest company in the world, currently riding high on the AI boom, Huang definitely has skin in the game. Even so, the idea of “AI sovereignty” will resonate with many countries.
Which brings us, in a roundabout way, to this month’s Big Thing in AI: Sora. Developed by ChatGPT maker OpenAI, Sora can generate very realistic videos, up to a minute long, from text prompts.
Sora isn’t publicly accessible, and we don’t know what data was used to train it. Over the past few days, OpenAI staffers have been posting sample Sora clips on social media, and they look seriously good.
Depending on how you feel about these things, that’s either brilliant marketing or utterly frightening.
Since Sora’s videos are generated content, there is no understanding of the real world in them, as Meta AI researcher Yann LeCun has pointed out. That means Sora isn’t going to replace movie makers; for that, you need predictive technology that can suss out the physical world and map out what should happen in longer scenes, for even better realism.
And look: just a few days ago, Meta released the Video Joint Embedding Predictive Architecture (V-JEPA) as open source, and it aims to do just that.
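For the curious, the gist of a joint embedding predictive architecture can be sketched in a few lines of Python with PyTorch. Everything here, the toy dimensions, the stand-in linear encoder, the module names, is invented for illustration rather than taken from Meta’s code; the idea it demonstrates is that the model predicts the embeddings of hidden video patches from visible ones, so learning happens in representation space rather than pixel space:

```python
# A minimal, hypothetical sketch of the joint-embedding predictive idea:
# predict the *embeddings* of masked video patches from visible ones.
import torch
import torch.nn as nn

EMBED_DIM = 64
PATCH_DIM = 16 * 16 * 3  # toy flattened video-patch size

encoder = nn.Linear(PATCH_DIM, EMBED_DIM)  # stand-in for a real patch encoder
predictor = nn.Sequential(                 # maps context embeddings to target embeddings
    nn.Linear(EMBED_DIM, EMBED_DIM), nn.GELU(), nn.Linear(EMBED_DIM, EMBED_DIM)
)

# Toy batch: 8 visible (context) patches and 8 masked (target) patches.
context_patches = torch.randn(8, PATCH_DIM)
target_patches = torch.randn(8, PATCH_DIM)

context_emb = encoder(context_patches)
with torch.no_grad():                      # targets come from an encoder pass without gradients
    target_emb = encoder(target_patches)

predicted_emb = predictor(context_emb)

# The loss lives in embedding space, not pixel space, so the model is
# pushed to predict what happens rather than every pixel of how it looks.
loss = nn.functional.l1_loss(predicted_emb, target_emb)
loss.backward()
print(f"embedding-space prediction loss: {loss.item():.4f}")
```

The design choice worth noticing is that the loss is computed between embeddings rather than pixels, which is what lets this family of models aim at learning how scenes evolve instead of memorising how they look.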
Think about all of the above, then add that, to improve, AI will need fresh content for training, material ideally created and vetted by the very humans who look set to be displaced by the technology, and ask: where is all this headed? Nobody seems to know the answer to that, least of all the AIs themselves.