Black Mirror creator unafraid of AI because it’s “boring”: Charlie Brooker doesn’t think AI is taking his job any time soon because it only produces trash

    • lloram239@feddit.de · 9 months ago

      AI is nowhere near the point where it can…

      ChatGPT is 10 months old, not even a whole year. And it was never fine-tuned for story writing in the first place. A little premature to proclaim what AI can and can’t do, don’t you think?

      • currycourier@lemmy.world · 9 months ago

        ChatGPT isn’t the entirety of AI, AI research has been going on much longer than ChatGPT has been around

          • NoMoreCocaine@lemmy.world · 9 months ago

            Yes. Honestly, it’s crazy how much people read into ChatGPT, when in practice it’s effectively just a dice roller that relies on an incredibly big dataset to guess the most likely next word.

            There’s been some research on this: people attribute intelligence to the things ML does, because it doesn’t compute for us that something can appear to make sense without having any actual intelligence. To humans, the appearance of intelligence is enough to assume intelligence - even if it’s just the result of a complicated dice roller.
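The “dice roller” picture can be made literal: next-word prediction is a weighted random draw over candidate continuations. A toy sketch (the probability table here is hand-made for illustration; a real LLM produces its distribution with a neural network over a vocabulary of tens of thousands of tokens):

```python
import random

# Toy next-word table standing in for a trained model's output
# distribution (hand-made numbers, purely illustrative).
next_token_probs = {
    ("the", "cat"): {"sat": 0.6, "ran": 0.3, "slept": 0.1},
    ("cat", "sat"): {"on": 0.9, "down": 0.1},
}

def sample_next(context, seed=None):
    """Roll a weighted die over the candidate next words."""
    rng = random.Random(seed)
    dist = next_token_probs[context]
    words = list(dist)
    weights = [dist[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

print(sample_next(("the", "cat")))  # "sat", "ran", or "slept", weighted by probability
```

Generation is just this draw repeated: append the sampled word to the context and roll again. All the apparent “intelligence” lives in how good the probability table is.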

        • lloram239@feddit.de · 9 months ago

          And that’s exactly why we should be scared. ChatGPT is just the popular tip of the AI iceberg; there is a whole lot more in the works across all kinds of domains. The underlying AI algorithms are what allow you to slap something like ChatGPT together in a few months.

      • matter@lemmy.world · 9 months ago

        AI has been in development for 50 years and the best we can do so far is a Dunning-Kruger sim. Sure, who knows what it “can do” at some point, but I wouldn’t hold my breath.

        • lloram239@feddit.de · 9 months ago

          The recent deep learning AI efforts only started around 2012 with AlexNet. They were based on ideas that were around since the 1980s, but they had been previously abandoned as they just didn’t produce any usable results with the hardware available at the time. Once programmable consumer GPUs came around that changed.

          Most of the other AI research since the 1950s was a dead end, as it relied on hand-crafted feature detection, symbolic logic, and the like, written by humans - which, as the last 10 years have shown, performs substantially worse than techniques that learn directly from the data without a human in the loop.

          That’s the beauty of it. Most of this AI stuff is quite simple on the software side of things; all the magic happens in the data, which also means it can rapidly expand into all areas where you have data available for training.

          You smug idiots are proud of yourselves for spotting a hand with an extra finger in an AI image, completely overlooking that three years of AI image generation just made 50 years of computer graphics research obsolete. And even ChatGPT is already capable of holding more insightful conversations than you AI haters are.

        • lloram239@feddit.de · 9 months ago

          (It’s not; the underlying tech is much older than that.)

          ChatGPT was released in Nov 2022. Plain GPT-1/2/3 had neither the chat interface nor the level of training data and fine-tuning that ChatGPT/GPT-3.5 had, and in turn were much less capable. They literally couldn’t function the way ChatGPT does. Even the original Google paper this is all based on only goes back to 2017.

          LLMs are physically incapable

          Yeah, LLMs won’t ever improve, because technology improving has never happened before in history… The stupidity in your argument hurts.

          Besides, GPT-4 can already handle 32,768 tokens; that’s enough for your average movie script, even without any special tricks (of which there are plenty).
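As a back-of-the-envelope check on that claim, using the common rough heuristic of about 4 characters per token for English text (real counts depend on the model’s tokenizer, and the page/word figures below are assumptions, not measurements):

```python
# Does an average feature screenplay fit in GPT-4's 32,768-token window?
CONTEXT_WINDOW = 32_768
CHARS_PER_TOKEN = 4          # common rough heuristic for English, not exact

# A feature screenplay runs roughly 100-120 pages, call it ~20,000 words,
# at ~6 characters per word including spaces -> ~120,000 characters.
script_chars = 20_000 * 6
estimated_tokens = script_chars / CHARS_PER_TOKEN

print(f"estimated tokens: {estimated_tokens:.0f}")       # ~30,000
print("fits in context:", estimated_tokens <= CONTEXT_WINDOW)
```

So a typical script lands just under the window by this estimate, though formatting whitespace and an unusual tokenizer could push a long script over.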

    • danque@lemmy.world · 9 months ago

      Depends on the AI, though. With koboldcpp you can create memories for the AI to come back to, and even text personalities (like bitchy and sassy responses) when using TavernAI together with Kobold.
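Those “memories” and personality cards boil down to prepending persistent text to every prompt so it survives across turns. A hypothetical sketch of the idea - the function and field names here are made up for illustration, not koboldcpp’s or TavernAI’s actual API:

```python
def build_prompt(memory, personality, history, user_message):
    """Concatenate persistent memory and a personality card ahead of the
    chat history, so the model 'remembers' them on every turn."""
    parts = [memory, personality] + history + [f"User: {user_message}", "Bot:"]
    return "\n".join(p for p in parts if p)

prompt = build_prompt(
    memory="[The bot's name is Sassy. The user is writing a screenplay.]",
    personality="Sassy replies with biting, sarcastic humor.",
    history=["User: Hi", "Bot: Oh great, you again."],
    user_message="Help me outline act two.",
)
print(prompt)
```

The model itself remains stateless; the frontend re-sends this assembled block on every request, trimming old history as the context window fills up.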

    • jandar_fett@lemmy.world · 9 months ago

      This. You have to baby it, and if you want it to do something different you have to tell it a hundred times in a hundred different ways before it stops producing the same stuff with the same structure and only slight differences. It is a nightmare.

    • Flying Squid@lemmy.world · 9 months ago

      I agree, but at some point it will advance to the level where it can write boring, predictable scripts.