General Discussion
'Nonfiction' Book About Maui Wildfire 'Smells of AI,' Gets Pulled From Amazon
https://gizmodo.com/ai-book-about-maui-wildfire-pulled-from-amazon-1850751250

The book, titled Fire and Fury: The Story of the 2023 Maui Fire and its Implications for Climate Change, is an 86-page narrative of the recent wildfires in Hawaii that has a tinge of generative AI. The book has since been removed from the Amazon store, but an archived version of the listing shows 22 reviews, all of which are one star. Some reviewers believe that the book was written with generative AI like ChatGPT, with one such review claiming that the book "smells of AI."
-snip-
That very well may be the case, since, as The Register notes, there are some bizarre circumstances surrounding the publication. The book's description on its Amazon page uses the phrase "the book" to begin five of its seven sentences. The description also mentions that the book covers the timeframe of August 8th to 11th, despite the book itself being listed on Amazon on August 10th.
-snip-
But the evidence that the book was simply churned out by a machine is overwhelming. Fire and Fury is credited to Dr. Miles Stones, whose Amazon bio reads "I'd rather not say." Stones also has a profile on Goodreads, with all of their books having been published in or after June 2023. A majority of these books are nonfiction and feature clearly AI-generated art as well as overwhelmingly negative reviews.
-snip-
mopinko
(73,116 posts)
it may, at some point, churn out something actually useful. but i doubt any of it ever hits the best seller lists.
joshcryer
(62,534 posts)
Otherwise all LLMs generate text like a student trying to prepare for some writing assignment, making up stuff to hit a word count. It can help with the creative process, but the prompting has to be very specific, and most will still generate non-useful stuff. I do think it can help writers with their workflow, but you will be able to sniff out a non-writer who uses them to generate garbage pretty easily.
mopinko
(73,116 posts)
i think it will do great things.
highplainsdem
(58,824 posts)
mopinko
(73,116 posts)
i bet theres some smart cookie rn using only public domain and stuff they have permission for.
disney could create quite a database using just their own catalogue and stuff they have rights to.
ethical ai. ya heard it here 1st.
highplainsdem
(58,824 posts)
for the uses specified in the old contracts. NOT to train AI to generate similar scripts, have AI copy actors' voices or images, or any other new work exploiting the work they paid for certain rights to. The unions would sue Disney.
mopinko
(73,116 posts)
theres probably existing clauses in contracts that wd at least provide a framework.
there COULD b ethical ways to do it.
highplainsdem
(58,824 posts)
whether that's stolen text or images.
And that includes ChatGPT, Midjourney, and most if not all well-known generative AI models.
I believe Adobe claims to have legal right to all the images and photos from its users that service agreements gave it rights to for some business uses in the past, but those service agreements predate its business use of generative AI by many years.
joshcryer
(62,534 posts)
And if the courts do find against those datasets, there's still plenty of public domain material out there; with reinforcement learning from human feedback (RLHF), you can still build an LLM that is just as useful. Open Assistant, for instance, is trained on public data such as Reddit posts and Wikipedia, and it does not contain any books or any intellectual property that wasn't in an openly available dataset (though a Reddit poster might say their posts are their own intellectual property). I have found Open Assistant to be vastly superior to ChatGPT and others when it comes to **writing process**, because it has over 200,000 RLHF submissions in its dataset: people taking the time out of their day to write things for it. Because of that, it has a writer's touch. (On other tasks Open Assistant is poor, though.)
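The RLHF idea referenced in that post (human-written demonstrations plus a learned preference signal that picks the better of two model outputs) can be sketched in miniature. To be clear, this is not Open Assistant's actual pipeline: the tiny corpus, the word-overlap "reward" score, and the function names below are all invented for illustration, and a real reward model is a neural network trained on human preference rankings, not a word-overlap count.

```python
from collections import Counter

# Toy "demonstration dataset": human-written, public-domain text,
# standing in for the 200,000+ human submissions mentioned above.
demonstrations = [
    "it was the best of times it was the worst of times",
    "call me ishmael some years ago never mind how long precisely",
]

def bag_of_words(text):
    return Counter(text.lower().split())

# A trivial stand-in for a reward model: score a candidate reply by
# its word overlap with the human demonstrations. (Counter returns 0
# for missing words, so non-overlapping words contribute nothing.)
def reward(candidate):
    cand = bag_of_words(candidate)
    return sum(
        sum(min(cand[w], demo[w]) for w in cand)
        for demo in map(bag_of_words, demonstrations)
    )

# Preference step: given two candidate outputs, keep the one the
# "reward model" prefers -- the core selection signal in RLHF.
def prefer(a, b):
    return a if reward(a) >= reward(b) else b
```

For example, `prefer("it was the best of times", "zzz qqq")` keeps the first candidate, because it overlaps with the human-written demonstrations while the second does not. The point of the sketch is only that the feedback signal comes from human-written data, which is the property the post is arguing lets open datasets compete.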
I find intellectual property itself to be unethical, though.
highplainsdem
(58,824 posts)
is unethical?
joshcryer
(62,534 posts)
Not a fan of the things large corporations do with IP (Monsanto, Disney, Sony, Intel, etc).
highplainsdem
(58,824 posts)
intellectual property gets ripped off by AI? Or by other types of corporations, or by other individuals?
Corporations and other businesses are often legal owners of IP, and if they ripped it off, they should be sued. In almost all cases I've heard about, AI companies just helped themselves, not compensating any of the owners.
Big Tech is trying to get laws changed to make what they did legal. I hope they fail.
joshcryer
(62,534 posts)
90% of coders (on GitHub) are currently using AI tools to help them code more efficiently. The same will be true of writers and artists (Adobe's tools, for example).
Here's the point you're missing. The gatekeepers still exist. The gatekeepers are those who own the intellectual property for a given thing. You can have all the power in the world to create Mickey Mouse with AI. Disney will sue the living crap out of you if you do it.
Anyway, like I said, you can make AIs that don't use this data and will work just as well. While image AIs are going to be harder to make work well with open data, it is not hard for those datasets to be built (in particular by people photographing / videoing real life), in which case the gatekeepers will say "you can't even take a real life picture." The arguments against using free data that is available on the internet become really weak from that perspective.
MorbidButterflyTat
(3,844 posts)
"You can have all the power in the world to create Mickey Mouse with AI. Disney will sue the living crap out of you if you do it."
Doing something illegal and unethical until you get caught and sued is still unethical.
Copyright infringement is illegal.
highplainsdem
(58,824 posts)
and thinking.
And intellectual property is owned by the creator, UNLESS they choose to sell or otherwise transfer all rights and ownership.
People using AI to rip off a Disney character they have no right to SHOULD be sued, or at least shut down with cease-and-desist letters.
People are always free to create their own cartoon mice. Helping themselves to Disney's is both uncreative and theft.
Yes, you can make AI with any data set. It may produce some real crap.
I have never heard owners of intellectual property called gatekeepers. I run across that term most often in complaints from artists of various types who haven't become professionals yet, resent that fact, and look for others besides themselves to blame.
Gatekeepers can be very useful in any field where people want to have some idea where and how to find the best - because it's impossible to sort through everything out there. So publishers have editors to sort through submissions, including from agents who also sort. Record labels have A&R people. The visual arts have gallery owners. Films and TV rely on agents. All are helped by reviews and articles about talent.
There is nothing wrong with that. And the artists own the intellectual property. The exception would be work for hire, where the artist has been hired to create something that will be owned by their employer. A commercial jingle, for instance.
MorbidButterflyTat
(3,844 posts)
highplainsdem
(58,824 posts)
70sEraVet
(5,059 posts)
yonder
(10,176 posts)
Progressive dog
(7,547 posts)
many non A's. So will this finally stop the ridiculous hype about AI? I doubt it.
yonder
(10,176 posts)
we can expect it to insidiously overtake human input in arts and culture. Right now, it is easy enough to detect as it is still in its infancy. What happens as this technology matures? Before we know it, only the most discerning eye might be able to determine its subtle origin. The rest of us, meanwhile, will be unwittingly basking in its easy allure as we call its product our culture. Maybe that is when we become the machine?