Artificial intelligence is spurring a new type of identity theft — with ordinary people finding their faces and words twisted to push often offensive products and ideas

    • Maeve · 7 points · 17 days ago

      And address conditions of despair which push people into crimes.

    • @PhlubbaDubba@lemm.ee · 3 points · 17 days ago

      To a certain extent that’s just impossible, since the degree of enforcement you’d need would inevitably lead to someone genuinely talking about a product they prefer getting penalized. Think of the episode of South Park where the one girl turns out to be a living advert.

      There have to be serious limits on advertising bandwidth: first, ban targeted advertising; second, make national, state-level, and local adverts all get equal air time; and third, cut the amount of broadcast time and page space that can be devoted to advertisements.

      • Maeve · 5 points · 17 days ago

        There’s a notable difference and good lawyers should be able to write the law around it.

        • @PhlubbaDubba@lemm.ee · 1 point · 17 days ago

          The lawyers aren’t who I’m concerned about, it’s the social media corps who’ll just take to banning anyone who talks about any brand ever.

          You’ve basically silenced brand criticism because it can be seen as a backhanded advert for the competition.

          All the shit-talk about the Battlefront Classic Collection could get people banned for advertising BG3 by speaking negatively about its competition on the games market.

          Sounds stupid, is stupid, would happen anyways, especially if the corpos caught onto it and started paying for ban waves in their favor.

          • Maeve · 2 points · 17 days ago

            It’s a mad world, in every sense of the word, isn’t it?

  • Jesus · 53 points · 17 days ago

    Google has decided to build a platform where advertisers are minimally vetted. They’re intentionally taking on the risk and should be liable.

    If you decide to increase attendance in your club by getting rid of the bouncer, expect the fire marshal and cops to issue fines when your place is overcrowded and full of minors.

    • @abhibeckert@lemmy.world · 11 points · edited · 17 days ago

      This. The lack of vetting sucks and it goes both ways. Sometimes the algorithm incorrectly flags perfectly legitimate content as fraudulent with no way to recover from that.

      • @pup_atlas@pawb.social · 2 points · 16 days ago

        There absolutely is though. Implement a dispute process that loops in an actual human once a detection is triggered. Will that cost a lot of money and require a lot of people? Yeah. But that’s just the cost of doing business at the scale of a company like Google; it is (or should be) their duty.

    • @elshandra@lemmy.world · 4 points · 16 days ago

      Which is interesting in itself: what if AI, by chance, produces a likeness of you unintentionally? Is there an AI with a database of all of us that would know? I’m sure they’re trying, for whatever reason.

      Now, if you’re someone famous, like a pop star or president, chances are there are a lot more images of you in those databases, which could also skew the resulting images.

      So I guess what we really need is some way to trust the image; otherwise … I really don’t know how this can be avoided. Maybe a smarter entity does.

    • @silence7OP · 11 points · edited · 17 days ago

      In the US, kinda sorta.

      Advertisers are liable if they use your likeness to promote a product, imply endorsement, or otherwise make commercial use of it without your consent. This gives you the right to sue, which is worth absolutely nothing when you’re dealing with a shady overseas shell company hawking fake Viagra.

      News organizations, artists, and random private individuals can publish a photo or other image of you taken in a place where you do not have a reasonable expectation of privacy, without having to contact you or get your consent. This is important: think of trying to share a photograph of a public event and having to track down everyone in the background, or of creating public awareness by photographing a politician committing a crime.

      • @paridoxical@lemmy.world · 2 points · 16 days ago

        In your example at the end, why can’t the other people’s faces be blurred out before releasing the photo? Just playing devil’s advocate on that point.

        • @silence7OP · 4 points · 16 days ago

          Because it’s a pain to do (and was especially so in the film era), and it changes what the photo conveys in a meaningful way.

          Think, for example, of a photo like this one, showing anti-civil-rights protesters in 1969:

          [photo: crowd of anti-civil-rights protesters, 1969]

          Blurring the faces would meaningfully obscure what was going on, and confuse people about who held what kinds of views.

          • @paridoxical@lemmy.world · 2 points · 16 days ago

            Historically, that is correct. However, the technology to automate this is now extremely accessible and low- or no-cost. And there was no widespread threat of misuse via AI back then, so I get that there was no need. Going forward, I think it’s something we need to think about.

            Today, the same photo you presented could be misused with AI to meaningfully obscure what is going on and confuse people about who held what kind of views. So there’s a double-edged sword here.

            Just to be clear, I do believe in the right to photograph anyone and anything in public, at least in the United States and any other countries that respect that freedom. I’m just trying to point out that the issue is complicated.

  • @NeoNachtwaechter@lemmy.world · 8 points · 17 days ago

    > No federal deepfake law exists

    Does it need a deepfake law??

    “Federal” tells us that this happened in the US of A.

    So, don’t you US Americans have any basic human rights that say this is illegal right from the start?

    • @silence7OP · 8 points · edited · 17 days ago

      It’s considered a civil dispute. You can sue those using your face in an ad for monetary damages, but in practice that means suing an overseas shell corporation with no assets. You can’t collect anything, so no lawyer will take the case.