• FuzzyDunlop · 1 year ago

    When the only thing that continues to work on your ad-filled website is the captcha, I’m not interested in supporting your journalism any more.

    Protip: You can crash self-driving cars by purposefully misclicking during captcha checks when they ask you to identify what is a bicycle, a car, a pedestrian, etc. Keep misclicking; you are poisoning the AI with each misclick. Just stay safe on the sidewalk.

    • juergen@feddit.de (OP) · 1 year ago

      I don’t think you can make a big difference as an individual, given that captchas are used by probably billions of users per day.

        • Andreas@feddit.dk · 1 year ago

          Given the number of bots on the internet trying to crack captchas, this is already happening. I don’t think captchas are being used for AI training that much, since hCaptcha uses AI-generated images with prompts like “Select the images with a hamster eating a watermelon” for its tests. All of the reCaptcha road captchas I receive also have answer validation and won’t let me pass if I answer incorrectly because of a misclick.
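          (To illustrate that last point, here is a minimal sketch of how answer validation could screen out deliberate misclicks, assuming the service mixes in tiles whose labels it already knows and discards any submission that gets those wrong. The function and tile names below are made up for illustration, not any real captcha API.)

          ```python
          # Hypothetical sketch of answer validation using known-label "control" tiles.
          # Not a real captcha API; the names and behaviour here are assumptions.

          def grade_submission(selected, known_labels):
              """Reject a submission if any tile with an already-known label is answered wrong.

              selected:     set of tile ids the user clicked
              known_labels: dict mapping tile id -> True/False for the control tiles
              """
              for tile_id, is_target in known_labels.items():
                  clicked = tile_id in selected
                  if clicked != is_target:
                      return False  # misclick on a control tile: the whole answer is discarded
              return True  # only then would the remaining, unknown tiles be kept as new labels

          known = {"tile_3": True, "tile_7": False}
          print(grade_submission({"tile_3"}, known))            # True: passes
          print(grade_submission({"tile_3", "tile_7"}, known))  # False: misclick caught
          ```

          Under that assumption, a deliberate misclick usually trips a control tile and the whole answer is thrown away, which matches the “won’t let me pass” behaviour described above.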

        • GreyBeard@lemmy.one · 1 year ago

          I don’t think it would have the intended effect. What would happen is that the captchas wouldn’t be useful for AI training, but it’s not like a car is sitting at a stoplight waiting for a person to identify whether something is a bus or not.

        • JustEnoughDucks@lemmy.ml · 1 year ago (edited)

          Lol, getting 10 thousand users to slightly inconvenience themselves, even to stand against things that directly affect them, is difficult. Imagine trying to get billions to do it for a slightly indirect, possible effect on megacorps.

          There are probably half a billion people alone who would gladly lick the boot of any megacorporation that demanded it.

    • someRandomRedneck@beehaw.org · 1 year ago

      I’m sorry, what? And by that I mean: what the hell is wrong with you and the people who think it’s a good idea? If it works that way and enough people did it, they would be intentionally endangering people’s lives.

      • Puls3@lemmy.ml · 1 year ago

        It’d be the business’s fault, not ours; you shouldn’t use unreliable user data for something so important.

        • someRandomRedneck@beehaw.org · 1 year ago

          Legally you are correct, but ethically you are wrong. If they include false data that causes a crash, everyone who intentionally contributed that data is morally at fault. You don’t get to wash your hands of it just because the business is the one legally liable for it.

          • Puls3@lemmy.ml · 1 year ago

            I mean, ethically it’s a debatable topic. If I don’t help fix someone’s car and then he crashes it, it’s not my fault; he shouldn’t have driven it while it was broken.

            Same with user-generated or AI data: it works 99.9% of the time, but that 0.1% is too dangerous to deploy in a life-endangering situation.

            • someRandomRedneck@beehaw.org · 1 year ago (edited)

              You’ve got a bit of a point there, I’ll give you that, but it’s an apples-to-oranges comparison unless you’re intentionally trying to cause them to crash by not helping them fix their car. The person I originally replied to is advocating intentionally trying to cause a crash.

              • Puls3@lemmy.ml · 1 year ago

                I think it was more of a tongue-in-cheek reference to the incompetence of the companies and how they will use that data in practice, but I might have read too much into it. Regardless, intentionally clicking the wrong items on captchas shouldn’t cause a crash unless the companies force it to by cutting corners.

                • someRandomRedneck@beehaw.org · 1 year ago

                  It doesn’t matter if it was tongue in cheek; if my dumbass took it seriously, then you know other dumbasses will take it seriously. And I guess my main issue is the vocal intent to cause harm, which is demonstrated by their mention of making sure to stay safe on the sidewalk.

    • comfy@lemmy.ml · 1 year ago

      Hah, it’s possible in theory but would require co-ordination that we are almost never going to see.

      Most people will just do them correctly to pass, and if 997 responses say yes and 3 say no, the majority answer is almost certainly the right one.
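      (A minimal sketch of that aggregation idea, assuming labels are simply combined by majority vote across everyone who saw the same image; no real captcha backend is implied.)

      ```python
      # Hypothetical majority-vote aggregation of captcha answers for one image.
      # Purely illustrative; the aggregation rule here is an assumption, not a known API.
      from collections import Counter

      def aggregate_label(responses):
          """Return the most common answer and the fraction of votes it received."""
          label, votes = Counter(responses).most_common(1)[0]
          return label, votes / len(responses)

      responses = ["yes"] * 997 + ["no"] * 3   # three deliberate misclicks out of 1000
      label, confidence = aggregate_label(responses)
      print(label, confidence)                 # yes 0.997 -- the misclicks barely register
      ```

      With any reasonable confidence threshold, three poisoned answers out of a thousand never change the stored label.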