• QuadratureSurfer@lemmy.world · 22 points · 1 month ago

    I’m just glad to hear that they’re working on a way for us to run these models locally rather than forcing a connection to their servers…

    Even if I’d rather run my own models, at the very least this incentivizes Intel and AMD to start implementing NPUs (or maybe we’ll actually see plans for consumer-grade GPUs with more than 24 GB of VRAM?).

    • suburban_hillbilly@lemmy.ml · 27 points · 1 month ago

      Bet you a tenner that within a couple of years they start using these systems as distributed processing for their in-house AI training, to subsidize the cost.

      • 8ender@lemmy.world · 6 points · 1 month ago

        That was my first thought. Server-side LLMs are extraordinarily expensive to run. Offload the costs to users.

      • QuadratureSurfer@lemmy.world · 3 points · 1 month ago

        Similar use cases to what I’m doing right now: running LLMs like Mixtral 8x7B (or something better by the time we start seeing these), Whisper (speech-to-text), or Stable Diffusion.

        I use a fine-tuned version of Mixtral (dolphin-mixtral) for coding purposes.
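
        For anyone wanting to try the same thing, a minimal sketch using the ollama Python client (this assumes a local Ollama install with the model already pulled via `ollama pull dolphin-mixtral`):

        ```python
        # Minimal sketch: ask a local dolphin-mixtral model a coding question.
        # Assumes the Ollama server is running on its default port.
        import ollama

        response = ollama.chat(
            model="dolphin-mixtral",
            messages=[{"role": "user", "content": "Write a Python function that merges two sorted lists."}],
        )
        print(response["message"]["content"])
        ```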

        Whisper handles transcribing live audio for notes/search, or translating audio from other languages (especially useful for verifying claimed translations of Russian/Ukrainian/Hebrew/Arabic content, given all of the fake information being thrown around).
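
        A rough sketch of that with the open-source whisper package (model size and file names are just placeholders):

        ```python
        # Transcribe audio for notes/search, or translate non-English speech to English.
        import whisper

        model = whisper.load_model("medium")

        notes = model.transcribe("meeting.wav")                      # same-language transcript
        english = model.transcribe("clip_ru.wav", task="translate")  # translated to English
        print(english["text"])
        ```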

        Combine the two models above with a text-to-speech (TTS) system, a vision model like LLaVA, and some animatronics, and then I’ll have my own personal GLaDOS: https://github.com/dnhkng/GlaDOS
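
        Wired together, the core loop might look something like this (a sketch only: Whisper for ears, a local Ollama server for the brain, and pyttsx3 standing in for a real TTS voice; those are my assumptions, not necessarily what the GLaDOS repo uses):

        ```python
        # Sketch of one STT -> LLM -> TTS turn: hear, think, speak.
        import requests
        import whisper
        import pyttsx3

        stt = whisper.load_model("base")  # speech-to-text
        tts = pyttsx3.init()              # offline text-to-speech

        def assistant_turn(wav_path: str) -> str:
            heard = stt.transcribe(wav_path)["text"]  # 1. transcribe the user
            reply = requests.post(                    # 2. ask the local LLM
                "http://localhost:11434/api/generate",
                json={"model": "dolphin-mixtral", "prompt": heard, "stream": False},
                timeout=300,
            ).json()["response"]
            tts.say(reply)                            # 3. speak the reply
            tts.runAndWait()
            return reply
        ```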

        And then there’s Stable Diffusion for generating images for DnD recaps, concept art, or even just avatar images.
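
        The image side is a few lines with diffusers (model ID and prompt are just examples; SD 1.5 fits comfortably in consumer VRAM at fp16):

        ```python
        # Generate a single image with Stable Diffusion 1.5.
        import torch
        from diffusers import StableDiffusionPipeline

        pipe = StableDiffusionPipeline.from_pretrained(
            "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
        ).to("cuda")

        image = pipe("concept art of a dwarven forge city, dramatic lighting").images[0]
        image.save("dnd_recap.png")
        ```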

        • Alphane Moon@lemmy.ml · 2 points · 1 month ago

          Thank you! I currently use my 3080 dGPU for Stable Diffusion. I wonder to what extent NPUs will be usable with Stable Diffusion XL.
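
          For what it’s worth, one plausible NPU route today is Intel’s OpenVINO through optimum-intel, which can export SDXL and target a device by name; whether the NPU target actually copes with SDXL is exactly the open question. A speculative sketch:

          ```python
          # Speculative: export SDXL to OpenVINO and pick a device.
          # "CPU"/"GPU" are the safe targets; "NPU" is an assumption here.
          from optimum.intel import OVStableDiffusionXLPipeline

          pipe = OVStableDiffusionXLPipeline.from_pretrained(
              "stabilityai/stable-diffusion-xl-base-1.0", export=True
          )
          pipe.to("NPU")
          image = pipe("a watercolor fox").images[0]
          image.save("fox.png")
          ```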