#AI #CopyrightedArt #Transparency #Legislation
🤖🎨 Are you worried about AI companies using copyrighted art without permission? A new bill is aiming to bring transparency to the use of copyrighted material in the world of artificial intelligence. Read on to find out more about this important legislation and why it matters.
## What is the new bill all about?
Under the proposed legislation, AI companies would be required to disclose to the public how they are using copyrighted art in their algorithms. This information would need to be readily accessible and easily understood by the general population.
### Key points of the bill:
– Aimed at increasing transparency in the use of copyrighted material by AI companies
– Requires disclosure of copyrighted art usage to the public
– Intended to protect the rights of artists and creators
## Why is this bill important?
The use of copyrighted art by AI companies has been a source of concern for artists and creators who fear that their work is being exploited without their consent. This bill aims to address these concerns by increasing transparency and holding AI companies accountable for their use of copyrighted material.
### Benefits of the bill:
– Protects the rights of artists and creators
– Helps to prevent unauthorized use of copyrighted material
– Promotes ethical practices in the field of artificial intelligence
## What can you do to support the bill?
If you believe in the importance of transparency and protecting the rights of artists and creators, there are several ways you can support this bill:
1. Contact your local representatives and urge them to support the legislation
2. Spread awareness about the bill on social media using the hashtag #CopyrightedArtTransparency
3. Educate others about the importance of protecting copyrighted material in the age of AI
## In conclusion
The proposed bill to force AI companies to reveal their use of copyrighted art is an important step towards increasing transparency and protecting the rights of artists and creators. By supporting this legislation, we can help ensure that the use of copyrighted material in artificial intelligence is done ethically and responsibly. Let’s work together to make sure that artists are given the respect and recognition they deserve in the digital age. #ProtectArtistsRights
Remember, every voice matters in the fight for copyright protection! 🎨🔒
Source: https://www.theguardian.com/technology/2024/apr/09/artificial-intelligence-bill-copyright-art
Excellent first step. [Here’s the bill](https://schiff.house.gov/imo/media/doc/the_generative_ai_copyright_disclosure_act.pdf), it’s simple and straightforward.
Except they already don’t hide it.
The point is that generative AI is so transformative that it doesn’t matter, unless some bad actor is trying to get a specific overfit.
And those bad actors are the ones who should be liable if they try earning money with copyrighted content.
Everything is a creative work. Companies like this are downloading youtube videos and transcribing them to text to input to their models.
Surely they pump every podcast they can find through also.
Those aren’t books, movies, etc. but certainly are also creative works.
These companies have already said that without the use of others’ copyrighted works they couldn’t do their business. We shouldn’t be looking for loopholes to make it okay, but instead finding ways to compensate those who create the works which the companies (OpenAI in this case) indicate are core to their business, without which they couldn’t function.
Given how broken copyright is (the publishers profit the most, while creators get the droppings and no power), any legislation in this regard is the product of lobbyists for the publishing and rights-holding enterprises.
I’d play devil’s advocate here and say that if your work is NOT part of the training data, you will be at a disadvantage. It’s like saying being on Napster was actually good for Metallica.
Are they going to check the artist’s brain to see if they ever saw a copyrighted image?
All the anti-AI bills the government passes are ultimately pointless.
Aside from that, there’s a level of obscurity where you have no way to know for sure what art was used to train an AI model unless the person who trained it actually reveals that information.
None of these laws mean a damn thing when it comes to other countries and the internet.
Do you honestly think countries like China and India are going to give a fuck about someone on deviantart getting upset because their artwork was used?
To assess if the model is the right one to use, we’ll need full access to all training data of any model whatsoever.
AI is a global arms race. The US government will not pass a bill that limits US-created AI.
That will just give the rogue state a head start on propaganda bots.
How is it going to give credit to every single piece of art in existence online? If the company is making money off of intellectual property…
Copyright should be rewritten. AI should not be limited if it can improve society in the long run.
Other non-AI apps used copyrighted images for training.
How come there was no uproar over the “Not Hotdog” & “Seefood” apps?
The problem isn’t theft… The results of AI are about as fair use as it gets. If a person studies another’s style, then copies that style to make their own work, it is not theft. You cannot copyright a “style” of art.
The problem is speed and scale, and how that can undermine the original works. Of course, hindering those two things completely defeats the purpose of AI.
I know there are a lot of artists that get pissed at this viewpoint, but it’s simply a matter of truth. (Which is why it gets downvotes, but rarely counterarguments.) If you want to solve a problem, you need to first acknowledge *the actual problem* rather than try to attach a false one to it.
As long as the copyrighted material used for training the models is legally obtained who cares? Every human creator is also influenced by copyrighted works that came before them. Do humans influenced by other works need to pay a license fee for the works they were influenced by?
The law already says that to be fair use you must consider
>the effect of the use upon the potential market for or value of the copyrighted work.
Using the work in a model to then be able to create works that would compete against it, and “learn” thousands of times faster than any human can, is affecting the market and hurting the value of the work.
The fact that the same companies making DALL-E said they can’t do it for music because of copyright shows they know it’s not fair use.
The music industry is just more litigious and has large companies and interest groups to represent it, while visual artists don’t.
I’m having a hard time understanding how this will be implemented.
The AI model can’t exactly say “I remember being trained on an image,” and it doesn’t seem feasible to make it do so.
They wouldn’t be able to just scrape the net, as other people will post copyrighted work without any mention of it being copyrighted, or even make illegal changes (or no changes) and call it their own. It wouldn’t be feasible for Google to find and exclude all of these, except for the big players, similar to how DMCA bots work. This might force them to buy datasets.
Companies will all state: “We hoovered up every piece of media we found on the internet and didn’t pay attention to whether it was copyrighted or not. We just used it all.”
The fucking dinosaurs we have in charge can barely enforce laws that have been in place for a hundred years. I’m sure they’ll be all up to date on the nuances of bleeding edge AI art copyrights and aren’t just trying to wet their beaks.
This seems pretty akin to sampling. The courts have made a somewhat arbitrary but clear bright-line rule about the length of a sample. I wonder if there will be an attempt at consistency, or whether the differences in potential money, views about who is doing it, etc., will create a divergence.
People really don’t understand how this technology works, huh?
How dumb. Copyright exists to enrich companies, not protect people. This is just Getty images wanting a piece of the AI pie.
I’d also like to see this applied to AI images and AI voices.
Like if you are on twitter and someone uploads an AI image of a <enter political party here> senator or congressperson that is faked. I’m tired of seeing the faked images that are designed to sow discord. There needs to be some AI watermark over these images to show that it was a rendition and not real.
If it’s not actually copyright infringement to make outputs based on “inspiration” of prior work – i.e., transformative – then I don’t see the point of this.
Unless copyright’s definition is due for yet another retelling.
AI can take inputs and create new outputs based on an understanding of what it has seen – and, when enabled for dynamic learning from interaction with the tool itself, become something even further from the original inputs. People are going to need to accept this is only going to increase automation in society over time and figure out a better way to be compensated for helping to seed the engine(s).
Sounds idiotic; it’s training the AI. If I look at your art and get inspired by it, have I breached copyright? No. AI doesn’t regurgitate your same art; it uses it, along with millions of other pieces of art, to learn how to create art, and then it creates new pieces.
On a side note, limiting our AI development is highly dangerous when other nations won’t show the same restraint. AI will hit a breaking point in the future where it spirals upward rapidly in effectiveness.
Just as effective as the “do not call” lists. Sure, we NEED things like this, but we can only expect them to be followed so far. Especially without globally-enforced jurisdiction.
Reading over the bill, I’m not sure what the goal is. It just shows which media is included in US training sets. This transparency would allow people to choose which models align with what they want included. That’s if they actually can search such data and analyze the possibly millions of items. The actual influence of each item is generally quite small though, so I’m not sure if people will understand this.
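As a hedged sketch of what “searching such data” could look like in practice, assume a disclosed training set amounts to a list of attributed works (the records and field names below are invented for illustration; a real disclosure’s schema could differ entirely):

```python
# Hypothetical disclosed training-set manifest: one record per work.
# Titles, creators, and fields are invented stand-ins.
manifest = [
    {"title": "Starry Night", "creator": "van Gogh", "type": "image"},
    {"title": "Moby-Dick", "creator": "Melville", "type": "text"},
    {"title": "Podcast Ep. 12", "creator": "J. Doe", "type": "audio"},
]

def find_works(manifest, creator):
    """Return every disclosed record attributed to a given creator."""
    return [r for r in manifest if r["creator"] == creator]

# A creator checking whether their work was included:
print(find_works(manifest, "Melville"))
# [{'title': 'Moby-Dick', 'creator': 'Melville', 'type': 'text'}]
print(find_works(manifest, "Nobody"))  # []
```

Even with millions of records, a lookup like this is trivial; the harder question the comment raises is what the hit actually tells you about the model’s outputs.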
Is this where the regulation ends? Enforcing moral policies over what can be included in datasets has been brought up a lot in these discussions, which has me wondering if that’s being planned next. The first articles we’ll see will be clickbait about every dataset that includes X images or Y videos (or doesn’t include Z). Is this part of the goal, or an unintended side effect of releasing such information for scrutiny?
If Spotify has to pay an artist every time someone accesses the bits which comprise a song, why doesn’t an AI model have to pay every time it runs a learning round that involves the same bits?
Doesn’t mean I have to be nice about it. Couldn’t I make, say, Yugioh NFTs, then put a little watermark on the right that says
“This art is a trademark of Konami who are buttholes and probably a Yakuza front.”
The way I understand it, the training data is analyzed, and the analysis produces a neural network and certain weights that determine how branches in that neural network operate. Couldn’t someone just release the neural network or weights without the training data? Wouldn’t that get around the disclosure requirements, since they’re not releasing the training data itself?
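To make that distinction concrete, here is a minimal toy sketch (not how production models train, just an analogy): “training” distills many examples into a few numbers, and the released weights carry no copy of the examples themselves, which is exactly the gap a disclosure rule would have to close.

```python
from statistics import mean

# Stand-ins for "training data" (imagine these were copyrighted works):
# (input, output) pairs where output is roughly 3 * input.
examples = [(1, 3.1), (2, 5.9), (3, 9.2), (4, 11.8), (5, 15.0)]

# "Training" distills the five examples into one number:
# the average output/input ratio. This single float is the
# whole released "model"; the examples are not inside it and
# cannot be reconstructed from it.
weight = mean(out / inp for inp, out in examples)

print(round(weight, 2))  # 3.01
```

The point of the sketch: once only `weight` ships, nothing in the artifact names its sources, which is why disclosure would have to happen at training time rather than by inspecting the released model.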
I don’t agree that they should be *forced* to reveal the sources their AI have been trained on. AI can produce something much more rapidly than I could, yes, but aside from that it’s not really much different from a human being inspired by art and then creating something similar.
That being said, I *am* still interested in seeing what the ‘inspirations’ were for any given piece of art an AI produces, simply because I think it would be interesting to see how it interpreted its sources of ‘inspiration’, and I would like to see more AIs listing them. Just… not being *forced* to list them…
Like, I wanna know what music sources ‘inspired’ [Butt Chuggin’ Beer](https://suno.com/song/d319a923-921b-4ff3-b8c2-a232f44136d2)
Reminder that EVERYTHING is copyrighted as created. You have to sell the copyright to lose it. Or it has to age out after 75 years…or whatever Disney has paid to change it to now.
Now, what about my personal image or photos that I have taken?! Just because someone posted them for their friends to see on social media doesn’t mean they are giving the copyright to anyone for free, even though the social media corporate EULA tries to steal the copyright for everything.
Please and thank you; while we’re at it, let’s also make a law that anything made with AI needs to be declared, and made clear it wasn’t made by people.
And then what? If it’s on Google, they used it for the training.
Please. The damage has been done. We are never getting this fixed.
Proverbial Pandora’s box.
This is nonsense.
Can you make me a list of all copyrighted images, movies and music you saw in your life? Or even just during your education?
No, there are too many bits of too much content, and you can’t keep track of and sort everything at that scale (or at least it’s clearly not worth the resources when there are far bigger priorities than lining the pockets of greedy “copyright oligarch” assholes). Assume that yes, AI research involves copyrighted materials, typically bought from online stores, but that no, it’s not a copyright breach.
No more than a student watching a movie is (btw, fuck off MPAA).
Culture is part of society, and drawing imaginary arbitrary intellectual lines around the so called intellectual property claim of greedy oligarchs is nonsense.
Coming from the same type of people who asked Zuckerberg “how does Facebook make money if it’s free?” and needed the internet explained to them in 2022. They’re in over their heads.