• flatbield@beehaw.org
    10 months ago

    Step back for a moment. You put the data in, say images. The output you got depended on putting in that data; it is derivative of it. It is that simple. It does not matter how you obscure it with mumbo jumbo, you used the images.

    On the other hand, is that fair use without some license? That is a different question, one about current law and about what the law should be. Maybe it should depend on the nature of the training. Reproducing images from other images, for example, seems less fair; classifying images by type seems more fair. There is a lot of stuff to be worked out.

    • FaceDeer@kbin.social
      10 months ago

      It is that simple.

      No, it really isn’t.

      If you want to step back, let’s step back. One of the earliest, simplest forms of “generative AI” is the Markov Chain algorithm. What you do with that is you take a large amount of training text and run it through a program to analyze it. What the program is looking for is the probability of specific words following other words.

      So for example if it trained on the data “You must be the change you wish to see in the world”, as it scanned through it would first go “ah, the word ‘you’ is 100% of the time followed by the word ‘must’” and then once it got a little further in it would go “wait, now the word ‘you’ was followed by the word ‘wish’. So ‘you’ is followed by ‘must’ 50% of the time and ‘wish’ 50% of the time.”
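      That counting step is easy to see in code. Here's a minimal sketch in plain Python (the variable names are mine, not from any particular library) that builds exactly that word-pair probability table from the example sentence:

      ```python
      from collections import Counter, defaultdict

      text = "You must be the change you wish to see in the world"
      words = text.lower().split()

      # Count, for each word, how often each other word follows it.
      follows = defaultdict(Counter)
      for current, nxt in zip(words, words[1:]):
          follows[current][nxt] += 1

      # Turn the raw counts into probabilities.
      probs = {
          word: {nxt: count / sum(counter.values())
                 for nxt, count in counter.items()}
          for word, counter in follows.items()
      }

      print(probs["you"])  # 'must' and 'wish' each 50% of the time
      ```

      Run on this one sentence, `probs["you"]` comes out as `{'must': 0.5, 'wish': 0.5}`, matching the description above.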

      As it keeps reading through training data, those probabilities are the only things that it retains. It doesn’t store the training data, it just stores information about the training data. After churning through millions of pages of data it’ll have a huge table of words and the associated probabilities of finding other specific words right after them.

      This table does not in any meaningful sense “encode” the training data. There’s nothing you can do to recover the training data from it. It has been so thoroughly ground up and distilled that nothing of the original training data remains. It’s just a giant pile of word pairs and probabilities.
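      Generation then consults only that table, never the original text. A self-contained sketch (again plain Python, illustrative names only) of sampling new text from the stored pair counts:

      ```python
      import random
      from collections import Counter, defaultdict

      # Rebuild the pair-count table for the example sentence.
      text = "You must be the change you wish to see in the world"
      words = text.lower().split()
      table = defaultdict(Counter)
      for cur, nxt in zip(words, words[1:]):
          table[cur][nxt] += 1

      def generate(start, length=8, seed=1):
          """Emit words by repeatedly sampling a successor from the table."""
          rng = random.Random(seed)
          word, out = start, [start]
          for _ in range(length):
              counter = table.get(word)
              if not counter:  # word never had a successor in training
                  break
              word = rng.choices(list(counter), weights=counter.values())[0]
              out.append(word)
          return " ".join(out)

      print(generate("you"))
      ```

      Every word the generator emits comes from a table lookup plus a dice roll; the training sentences themselves are long gone by the time generation happens.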

      It’s similar to how these more advanced AIs train up their neural networks. The network isn’t “memorizing” pictures, it’s learning concepts from them. If you train an image generator on a million images of cats you’re teaching it what cat fur looks like under various lighting conditions, what shape cats generally have, what sorts of environments you usually see cats in, the sense of smug superiority and disdain that cats exude, and so forth. So when you tell the AI “generate a picture of a cat” it is able to come up with something that has a high degree of “catness” to it, but is not actually any specific image from its training set.

      If that level of transformation is not enough for you and you still insist that the output must be considered a derivative work of the training data, well, you’re going to take the legal system down an untenable rabbit hole. This sort of learning is what human artists do all the time. Everything is based on the patterns we learn from the examples we saw previously.