Over just a few months, ChatGPT went from correctly answering a simple math problem 98% of the time to just 2%, study finds. Researchers found wild fluctuations, called drift, in the technology's abi…

  • 0x01@lemmy.ml
    1 year ago

    Someone with more knowledge may have a better response than me, but as far as I understand it, GPT-x (3.5 or 4) is what's called a "large language model": a neural network that predicts natural language. I don't believe AGI is the goal of OpenAI's product; I believe natural language processing and prediction is.

    ChatGPT in particular is a product simply demonstrating the capability of the GPT models. While I'm sure OpenAI themselves could build out components of the interface that interact with discrete knowledge like math, modifying the LLM's output to be more accurate in many cases, it's my opinion that this would defeat the entire purpose of the product.
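    To make the idea concrete, here's a rough sketch in Python of what such a "discrete knowledge" component could look like: a router that detects pure arithmetic and evaluates it deterministically instead of trusting the model's token predictions. Everything here (the `answer` function, the fallback behavior) is hypothetical and illustrative, not OpenAI's actual design or API.

```python
import ast
import operator

# Map AST operator nodes to their concrete arithmetic functions.
OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.Pow: operator.pow,
    ast.USub: operator.neg,
}

def eval_arith(expr: str):
    """Safely evaluate a plain arithmetic expression like '17 * 23 + 4'."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.operand))
        raise ValueError("not a pure arithmetic expression")
    return walk(ast.parse(expr, mode="eval"))

def answer(question: str) -> str:
    # If the question is pure arithmetic, bypass the language model
    # entirely and return a deterministic result.
    try:
        return str(eval_arith(question))
    except (ValueError, SyntaxError):
        # Anything non-arithmetic would go to the LLM in a real system.
        return "(fall back to the LLM for free-form text)"

print(answer("17 * 23 + 4"))  # deterministic: 395
```

    The point of the sketch is the routing, not the evaluator: math gets a fixed, drift-free code path, while open-ended language still goes through the model.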

    The fact that they have achieved what they have already is absolutely mind-boggling. I'm sure the precise solution you're talking about is on the horizon; I personally know several developers actively working on systems that mirror the thoughts you've expressed here.