• coleseph@lemmy.world · 1 year ago

    I mostly agree with you, but I think it’s important to clarify that even with plain machine learning, many humans can be replaced.

    To extend your metaphor: that library has always had a bunch of clerks sitting inside it. They’ve been handling requests, finding books, and organizing the collection into a system that best serves up that information.

    Now, with machine learning, instead of all of those clerks keeping the library running smoothly, we’ve effectively replaced 99% of the humans with an organizational system that serves content and finds books even faster than a human could.

    Slightly deeper: this machine-learning replacement can also mix and match bits of content. In the old human system, a request might look like this: “I want information on Abrahamic religion in Western culture,” and the clerks would gather up a ton of books and hand them to the person who asked.

    In the new system, the same request can pull bits and pieces from all of those books and present a mostly comprehensive overview of Abrahamic religion in the West without anyone having to run and fetch the books at all.

    Deeper yet, and the scary iceberg: today, someone still needs to write all of those books, and we as a society tend to trust the information in them (cited sources and all that), so humans are safe as the content authors, right? We’ve basically just built a super-efficient organizational and content-delivery system. But as we start to trust the new system and use it more, the system may increasingly reference its own outputs instead of the source material, which creates a recursive, self-reinforcing feedback loop that degrades the information over time.
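    That degradation can be sketched with a toy simulation. This is not anyone’s actual training pipeline; it just stands in for the feedback loop above. The “model” here fits a simple distribution to its corpus, and the truncated-sampling rule is an assumed stand-in for a model favoring its most likely outputs. The diversity of the data (its standard deviation) collapses across generations:

    ```python
    import random
    import statistics

    random.seed(0)

    def train_and_generate(corpus, n=5000, cutoff=1.5):
        """'Train' a toy model (fit a mean and std to the corpus),
        then generate a new corpus from it. Rejecting samples in the
        tails mimics a model preferring high-probability outputs."""
        mu = statistics.fmean(corpus)
        sigma = statistics.stdev(corpus)
        out = []
        while len(out) < n:
            x = random.gauss(mu, sigma)
            if abs(x - mu) <= cutoff * sigma:  # drop the unlikely tails
                out.append(x)
        return out

    # Generation 0: "human-written" data with full diversity
    corpus = [random.gauss(0.0, 1.0) for _ in range(5000)]
    stdevs = [statistics.stdev(corpus)]

    # Each generation trains only on the previous generation's outputs
    for gen in range(10):
        corpus = train_and_generate(corpus)
        stdevs.append(statistics.stdev(corpus))

    print(f"diversity gen 0: {stdevs[0]:.3f}, gen 10: {stdevs[-1]:.3f}")
    ```

    Each generation keeps only the “safe,” likely outputs of the one before it, so the spread shrinks geometrically: after a few iterations the corpus says almost nothing it didn’t already say, which is the collapse being described.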

    We still need human content creation today, but the scary part (IMO) is when we treat these LLMs as generative general AI. LLMs are fallible and often hallucinate, so when most people start blindly trusting these systems (many already do; look no further than the general confusion among the terms AI, machine learning, and LLM), we’ll drift further and further away from generating new knowledge.

    • Mereo@lemmy.ca · 1 year ago

      I think we are on the same page. There is a sociological distinction between the generic worker and the self-programmable worker, from the sociologist Manuel Castells. The self-programmable workforce is endowed with the ability to retrain and adapt to new tasks, new processes, and new sources of information as technology, demand, and management accelerate their pace of change. Generic labor, on the other hand, is exchangeable and disposable, and coexists in the same circuits with machines and with unskilled labor from all over the world.

      Generic workers are already being replaced by automation (robots), but now LLMs threaten self-programmable workers too. The only way to adapt to this new reality is to become indispensable in training LLMs. It will completely upend the job market as we know it. And as you said, the danger comes when we treat LLMs as general AI.