Adobe’s AI Disaster
Plus: The Mystery of the Weird Freevee Thumbnail
Greetings Hollywood tech nerds!
In this week’s post:
- Adobe’s AI Disaster
- The Mystery of the Weird Freevee Thumbnail
- Kernels (3 links worth making popcorn for)
Subscribe to get Hollywood Tech Nerds magically delivered to your inbox every Tuesday!
I Need Your Help!
Look into my eyes (and not at the missing temples on my glasses. Thanks Midjourney! Generative AI strikes again).
Before we get to this week’s stories, I would love if you, Beloved Reader, would take a few seconds and fill out a very brief reader survey. When I say “a few seconds” and “brief,” I mean it! It’s only four questions. And three of them are multiple choice! How easy is that?
If you haven’t clicked the first two links to the survey, here’s a third one just to be safe:
This will help me focus on the topics most important to YOU, and not the ones only important to me. Otherwise I’ll be forced to write about my local community theater production of “Waiting for Godot.” I play Godot!
Adobe’s AI Disaster
Another week, another tech company getting itself into an AI-related controversy. This time it’s Adobe, whose creative suite is, if not a total industry standard, pretty darn close!
Sharp-eyed Adobe customers recently noticed a change to Adobe’s terms of service. Writing for Creative Bloq, Daniel John notes:
…users received a notification that terms had been updated to include clarification that the company can access users' content "through both manual and automated methods," leading many to speculate that Adobe could access anything we create – and potentially use it to train its AI. Adobe quickly back-pedalled… telling Creative Bloq the policy had been in place for "several years" and that the tweaked wording was all in the name of transparency.
Adobe users are concerned that any cloud-stored work could be accessed by Adobe and used to train its generative AI model, Firefly. Adobe claims Firefly is only trained on licensed and public domain work.
However, artist Karla Ortiz is concerned that Adobe is getting around this promise with vague language:
@JussiKemppainen @SamSantala @Adobe @Photoshop Hey Scott.
Adobe has ALSO been trained on “Openly Licensed Work” and that has to my knowledge never been addressed by Adobe, only removed from the FAQ and official communications.
“Openly Licensed Work” is a vast term that can mean quite a lot. Can Adobe show what that data is?
— Karla Ortiz (@kortizart)
5:34 PM • Jun 7, 2024
Other Twitter users responded that “Openly Licensed Work” likely refers to material with an appropriate Creative Commons license, which could indeed be the case. That said, and as Daniel John suggests in his article, trust, or the lack of it, is the key factor in the relationship between Adobe and its customers. The tech industry’s motto might as well be Barbara Kingsolver’s classic line from The Lacuna: “Penitence is more attainable than permission.” Who trusts these folks to ever do what they promise?
For artists and other creatives concerned about their work being used to train AI models by unscrupulous tech companies, researchers at the University of Chicago have developed Glaze and Nightshade. These are apps that, respectively, protect artwork against generative AI style mimicry and “poison” the models that attempt to use it as training data. How cool is this:
In an effort to address [unscrupulous AI training models], we have designed and implemented Nightshade, a tool that turns any image into a data sample that is unsuitable for model training. More precisely, Nightshade transforms images into "poison" samples, so that models training on them without consent will see their models learn unpredictable behaviors that deviate from expected norms, e.g. a prompt that asks for an image of a cow flying in space might instead get an image of a handbag floating in space.
Used responsibly, Nightshade can help deter model trainers who disregard copyrights, opt-out lists, and do-not-scrape/robots.txt directives. It does not rely on the kindness of model trainers, but instead associates a small incremental price on each piece of data scraped and trained without authorization. Nightshade's goal is not to break models, but to increase the cost of training on unlicensed data, such that licensing images from their creators becomes a viable alternative.
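For fellow nerds who want a feel for what “poisoning” means mechanically, here’s a minimal, hypothetical sketch of the general idea (not Nightshade’s actual code): nudge an image’s pixels within an invisible budget until a feature extractor “sees” a different concept. The PyTorch ResNet stand-in and random tensors below are placeholder assumptions, just to keep the example self-contained.

```python
# A rough, hypothetical sketch of feature-space "poisoning": NOT
# Nightshade's actual code. Idea: tweak an image within an invisible
# pixel budget so a feature extractor mistakes it for another concept.
import torch
import torchvision.models as models

# Stand-in for the image encoder inside a text-to-image model
encoder = models.resnet18(weights=None).eval()

def poison(image, anchor, eps=8 / 255, steps=100, lr=0.01):
    """Nudge `image` so its features match `anchor` (a decoy concept),
    keeping the pixel change imperceptible (L-infinity norm <= eps)."""
    with torch.no_grad():
        target = encoder(anchor)  # features of the decoy concept
    delta = torch.zeros_like(image, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        feats = encoder((image + delta).clamp(0, 1))
        # Pull the poisoned image's features toward the decoy's
        torch.nn.functional.mse_loss(feats, target).backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)  # stay within the invisible budget
    return (image + delta).clamp(0, 1).detach()

# Placeholder tensors standing in for real photos: a model trained on
# "cow" images poisoned this way might start drawing handbags instead.
cow = torch.rand(1, 3, 224, 224)
handbag = torch.rand(1, 3, 224, 224)
poisoned_cow = poison(cow, handbag)
```

The real tool is far more sophisticated about which images to perturb and how, but the principle is the same: a small, invisible change to the data, a big downstream effect on the model.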
The Mystery of the Weird Freevee Thumbnail
Speaking of AI art controversies, Amazon’s Freevee service found itself embroiled in its own, as a Twitter user noticed its poster art for the movie 12 Angry Men appeared to be AI-generated:
Just watching that classic movie on Prime Video, 19 Terrifying Men.
— Andy Kelly (@ultrabrilliant)
9:42 PM • Jun 5, 2024
Though Freevee and Prime Video are owned by the same parent company, Prime Video has a real poster for the Henry Fonda-fronted film. This discrepancy between the two posters seems to be a licensing rights issue. A source explained to The A.V. Club that the service licenses 12 Angry Men from a third party, which is responsible for the images that accompany the film. Though MGM (which is owned by Amazon) seems to have some form of distribution rights for 12 Angry Men, older movies often have more complicated licensing stemming from third-party distribution deals and Blu-ray and DVD remasters. Still, it’s puzzling that in this situation the licensing would cover the movie but not the poster.
I think what’s going on here is less complicated than the article suggests. What’s likely is that for whatever reason, the artwork assets for 12 Angry Men did not include a 16×9 image of the poster art, and somebody in the pipeline opted to use generative AI to create one. Whether this was Amazon or the film’s distributor is known only to them.
Why anyone would do this instead of just opening Photoshop for a quick three-minute job is a mystery, but to me it points to the problem with treating AI as a fix-everything solution. Why bother having someone skilled at Photoshop make movie artwork when AI can just do it for you? Well… this is why (see the sketch below).
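To back up that “three-minute job” claim: here’s a hypothetical sketch of the classic non-AI way to get a 16×9 thumbnail out of a 2:3 poster, using Python’s Pillow library. The file names are made up; blur-stretch the poster for the background, then paste the real thing in the center.

```python
# A hypothetical before-times version of the job: make a 16x9 thumbnail
# from a 2:3 poster with Pillow. File names are placeholders.
from PIL import Image, ImageFilter

poster = Image.open("twelve_angry_men_poster.jpg").convert("RGB")
W, H = 1920, 1080

# Background: the poster stretched to fill the frame, then heavily blurred
canvas = poster.resize((W, H)).filter(ImageFilter.GaussianBlur(40))

# Foreground: the poster scaled to the frame's height, pasted dead center
scale = H / poster.height
front = poster.resize((round(poster.width * scale), H))
canvas.paste(front, ((W - front.width) // 2, 0))
canvas.save("twelve_angry_men_16x9.jpg")
```

No GPUs, no prompts, and crucially, no thirteen-fingered jurors.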
Kernels (3 links worth making popcorn for)
Here’s a round-up of cool and interesting links about Hollywood and technology:
Why Google’s AI Overviews will always be broken. (link)
How content spending will grow post-peak TV. (link)
Who’s afraid of “Skibidi Toilet”? (link)