I think it’s fair to require disclosure. However, they don’t really go into much detail about what is considered AI. So for example, if I use Photoshop’s ‘content aware fill’ tool on a piece of artwork, would I need to disclose that the project used AI? What about if I simply upscale a piece of artwork that I drew? As tools involving some sort of machine learning become more common, it’s eventually going to just become a given that AI has been used at some point in a workflow.
So for example, if I use Photoshop’s ‘content aware fill’ tool on a piece of artwork, would I need to disclose that the project used AI?
Let me predict the response:
“Photoshop… oh, that’s just a standard piece of software! Everybody uses Photoshop!”
“Stable Diffusion?! Commie open-source AI detected! OPEN REVOLT! TEY TAK ER JERBS!”
Remember, it’s not about protecting the consumer from AI. It’s about the illusion that they are doing something about it.
Very good point. I can imagine that they are currently just trying to prevent a deluge of fully AI generated content with this policy and will then adjust accordingly.
In my opinion, that resembles a tool more than a black box with minimal interaction, which is only good for spamming.
Big bro was trying to sign up for ChatGPT to play with it. The new signup has a difficult captcha that my bro could not solve - it had 9 puzzles and wouldn’t tell him he’d failed until he finished all 9. Finally, after four or five attempts, I told him to just do the audio test, and he passed that one on the first try. But damn, it’s getting harder and harder to prove you’re a real human!
He was trying too hard.
Should be done as though you’re a monkey. Or drunk.