BrooklynMan

designer of experiences, developer of apps, resident of nyc, citizen of earth

he/him

tip jar

  • 2 Posts
  • 82 Comments
Joined 1 year ago
Cake day: June 2nd, 2023

  • You’re getting lost in the weeds here and completely misunderstanding both copyright law and the technology used here.

    you’re accusing me of what you are clearly doing after I’ve explained twice how you’re doing that. I’m not going to waste my time doing it again. except:

    Where copyright comes into play is in whether the new work produced is derivative or transformative.

    except that the contention isn’t necessarily over what work is being produced (and whether it’s derivative work is still a matter for a court to decide anyway); it’s over the fact that the source material is used for training without compensation.

    The problem is that as a consumer, if I buy a book for $12, I’m fairly limited in how much use I can get out of it.

    and, likewise, so are these companies who have been using copyrighted material - without compensating the content creators - to train their AIs.


  • Of course it is. It’s not a 1:1 comparison

    no, it really isn’t; it’s not even a 1000:1 comparison. AI generative models are advanced relational algorithms and databases. they don’t work at all the way the human mind does.

    but the way generative AI works and the way we incorporate styles and patterns are more similar than not. Besides, if a tensorflow script more closely emulated a human’s learning process, would that matter to you? I doubt that very much.

    no, the results are just designed to be familiar because they’re designed by humans, for humans, to be that way, and none of this has anything to do with this discussion.

    Having to individually license each unit of work for an LLM would be as ridiculous as trying to run a university where you have to individually license each student reading each textbook. It would never work.

    nobody is saying it should be individually licensed. these companies can get bulk license access to entire libraries from publishers.

    That’s not materially different from how anyone learns to write.

    yes it is. you’re just framing it in those terms because you don’t understand the cognitive processes behind human learning. but if you want to make a meta comparison between the cognitive processes behind human learning and the training processes behind AI generative models, please start by citing your sources.

    The difference is that a human’s ability to absorb information is finite and bounded by the constraints of our experience. If I read 100 science fiction books, I can probably write a new science fiction book in a similar style. The difference is that I can only do that a handful of times in a lifetime. An LLM can do it almost infinitely and then have that ability reused by any number of other consumers.

    this is not the difference between humans and AI learning, this is the difference between human and computer lifespans.

    There’s a case here that the remuneration process we have for original work doesn’t fit well into the AI training models

    no, it’s a case of your lack of imagination and understanding of the subject matter

    and maybe Congress should remedy that

    yes

    but on its face I don’t think it’s feasible to just shut it all down.

    nobody is suggesting that

    Something of a compulsory license model, with the understanding that AI training is automatically fair use, seems more reasonable.

    lmao



    i admit it’s a huge issue, but the licensing costs are something that can be negotiated by the license holders in a structured settlement.

    moving forward, AI companies can negotiate licensing deals for access to licensed works for AI training, and authors of published works can decide whether they want to make their works available to AI training (and their compensation rates) in future publishing contracts.

    the solutions are simple: the AI companies like OpenAI, Google, et al are just complaining because they don’t want to fork over money to the copyright holders they ripped off, or to set a precedent that what they’re doing is wrong (legally or otherwise).



  • Isn’t learning the basic act of reading text?

    not even close. that’s also not how AI training models work.

    if your position is that only humans can learn and adapt text

    nope-- their demands are right at the top of the article and in the summary for this post:

    Thousands of authors demand payment from AI companies for use of copyrighted works::Thousands of published authors are requesting payment from tech companies for the use of their copyrighted works in training artificial intelligence tools

    that broadly rules out any AI ever

    only if the companies training AI refuse to pay