A Photographer Tried to Get His Photos Removed from an AI Dataset. He Got an Invoice Instead.
A German stock photographer tried to get his photos removed from the AI-training LAION dataset. Lawyers replied that he owes $979 for making an unjustified copyright claim.

In February, the German photographer Robert Kneschke discovered that his photographs were being used to train AI via LAION-5B, a dataset of over 5.8 billion images maintained by the non-profit Large-scale Artificial Intelligence Open Network (LAION) and used by companies such as Stability AI.

Here is the photographer’s blog post (in German): https://www.alltageinesfotoproduzenten.de/2023/04/24/laion-e-v-macht-ernst-schadensersatzforderung-an-urheber-fuer-ki-trainingsdaten/


This is such a one-sided telling of the story.

The dataset in question AFAIK does not include any of the artist's works, only web links to publicly accessible ones, and they warned the artist that if he proceeded via legal means, he would incur lawyers' costs.

Maybe in this specific case it was just a badly informed person who thought they were doing the right thing, but in general such copyright trolling is a real problem, and IMHO affected parties are completely correct to ask that their legal fees be covered by the person making fraudulent copyright claims.

How do you even begin to check if your images are among billions of images? They’re just taking without asking and no one can do anything about it.

Someone set up HaveIBeenTrained.com to check this. For all of its faults, LAION has been very transparent about its sources of training data; the whole dataset is available, so you can check whether your stuff is inside it.
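As a rough sketch of what such a check can look like: LAION distributes its metadata (image URLs plus captions, not the images themselves) as downloadable files, so you can scan the URL column for your own domain. The filename, column name, and helper below are hypothetical, and this assumes you have exported a slice of the metadata to CSV; the real distribution uses parquet files, which you would read with a library such as pandas instead.

```python
import csv

def urls_matching(metadata_csv, domain):
    """Scan a metadata export (hypothetical CSV with a 'URL' column)
    and yield the image URLs that point at the given domain."""
    with open(metadata_csv, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            if domain in row.get("URL", ""):
                yield row["URL"]

# Usage (hypothetical file): list every indexed link to your site.
# for url in urls_matching("laion_metadata_part0.csv", "example-photos.de"):
#     print(url)
```

Streaming row by row rather than loading everything at once matters here, since a full metadata dump runs to billions of rows.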

They’re getting bolder
