• bushvin@lemmy.world
    23 days ago

    Oh cool, implementing mediocre algorithms. What could possibly go wrong?

    • warmaster@lemmy.world
      23 days ago

      Local LLMs have been supported via the Ollama integration since Home Assistant 2024.4. Ollama and the major open source LLM models are not tuned for tool calling, so this has to be built from scratch and was not done in time for this release. We’re collaborating with NVIDIA to get this working – they showed a prototype last week.
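      For context, "tool calling" here means the model emitting structured function invocations instead of free text, which is what Home Assistant needs to actually control devices. A minimal sketch of the kind of request payload that Ollama's `/api/chat` endpoint accepts for this (the model name and the `light_turn_on` tool are illustrative assumptions, not the actual integration's code):

      ```python
      import json

      # Hypothetical sketch: build a tool-calling request for Ollama's /api/chat.
      # The "tools" field follows Ollama's chat API shape; the tool name and
      # model are made-up examples, not Home Assistant's real integration.
      def build_chat_request(prompt: str) -> dict:
          return {
              "model": "llama3.1",  # assumed local model name
              "messages": [{"role": "user", "content": prompt}],
              "tools": [{
                  "type": "function",
                  "function": {
                      "name": "light_turn_on",
                      "description": "Turn on a light in a given area",
                      "parameters": {
                          "type": "object",
                          "properties": {
                              "area": {"type": "string", "description": "Room name"},
                          },
                          "required": ["area"],
                      },
                  },
              }],
              "stream": False,
          }

      req = build_chat_request("Turn on the kitchen lights")
      print(json.dumps(req, indent=2))
      ```

      The point of the quoted release note is that most open models don't reliably produce the structured `tool_calls` response this schema asks for, which is why the feature had to be built and tuned separately.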

      Are all Ollama-supported algos mediocre? Which ones would be better?