• Solar Bear · 1 year ago

    They will never be capable of filling the role Mozilla has shoehorned them into.

    You’re probably right that generative AI on its own, even if improved, can never fundamentally solve the truth problem. A probability engine is exactly that: it only estimates the probability of an output given its training data. But for a use case as narrow as this, I don’t think it’s outside the realm of possibility to build some sort of reverse-lookup system that sanity-checks the output before sending it. It’ll probably never be suitable for very advanced applications, but I’m just not convinced that this is entirely useless and needs to be abandoned just yet.
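    To make the idea concrete: the kind of reverse-lookup I’m imagining might look something like this toy sketch, where generated output is only sent if its claims can be traced back to a trusted corpus. Every name here (`verify_output`, `TRUSTED_FACTS`) is hypothetical, and keyword overlap is a stand-in for whatever real grounding check you’d actually need:

```python
# Hypothetical "reverse-lookup" sanity check: before sending a generated
# answer, require that each claim in it is grounded in a trusted corpus.
# This is an illustrative sketch, not any real product's API.

TRUSTED_FACTS = {
    "firefox": "Firefox is a web browser developed by Mozilla.",
    "thunderbird": "Thunderbird is an email client developed by Mozilla.",
}

def extract_claims(text: str) -> list[str]:
    """Naive claim extraction: treat each sentence as one claim."""
    return [s.strip() for s in text.split(".") if s.strip()]

def supported(claim: str) -> bool:
    """Reverse lookup: does the claim overlap a trusted entry enough?"""
    claim_words = set(claim.lower().split())
    return any(
        len(claim_words & set(fact.lower().split())) >= 3
        for fact in TRUSTED_FACTS.values()
    )

def verify_output(generated: str) -> tuple[bool, list[str]]:
    """Return (ok, unsupported_claims); only send when ok is True."""
    unsupported = [c for c in extract_claims(generated) if not supported(c)]
    return (not unsupported, unsupported)
```

    The point isn’t that word overlap is a real fact-checker; it’s that the check runs *outside* the probability engine, so a confident-sounding hallucination can still be rejected before it reaches the user.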

    The behavior of the Mozilla representatives strongly implies it. I have no idea how they intend to make money with this, and they may or may not succeed, but people don’t generally act like this unless they think they can strike it rich by doing so (and don’t care about the harm they’ll cause in the process).

    I don’t like to assume ill intent just to fill in an unexplained gap. It’s entirely possible for someone to just be wrong. Just like I might be wrong, and this is in fact a technological dead end.

    • argv_minus_one@beehaw.org · 1 year ago
      But for such a specific use-case as this, I don’t think it’s outside the realm of possibility to build some sort of reverse-lookup system that sanity checks the output before sending it.

      What kind of reverse-lookup system, exactly? As far as I know, that’s impossible without AGI.

      I don’t like to assume ill intent just to fill in an unexplained gap. It’s entirely possible for someone to just be wrong.

      That doesn’t explain the evasiveness. Something’s up. Something we won’t like when it’s revealed.