![](https://slrpnk.net/api/v3/image_proxy?url=https%3A%2F%2Flemmy.world%2Fpictrs%2Fimage%2F8aead832-799f-4d34-a20d-eae5b621a9b1.jpeg)
You are not wrong: https://arstechnica.com/information-technology/2023/07/is-chatgpt-getting-worse-over-time-study-claims-yes-but-others-arent-sure/ and also https://duckduckgo.com/?q=chat+gpt+4+getting+worse
The more data LLMs get exposed to, the more wrong data they get exposed to. There's also a vicious-cycle problem: once LLMs spit out bad information, that bad information gets incorporated into the next round of training data, which makes the models more wrong, and so on.
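That feedback loop is easy to sketch as a toy model. Everything below is my own back-of-the-envelope assumption (the function name, the parameters, and the specific numbers are made up for illustration): suppose each training generation replaces some share of the corpus with model output, and that output carries the model's current error rate plus a bit of extra noise.

```python
# Toy feedback-loop model (illustrative assumptions, not real measurements):
# each generation, `synthetic_share` of the training data is model output,
# which inherits the current error rate plus `extra_noise`.
def error_after_generations(generations, initial_error=0.05,
                            synthetic_share=0.3, extra_noise=0.02):
    error = initial_error
    for _ in range(generations):
        # Model-generated data is at least as wrong as the model itself.
        synthetic_error = min(1.0, error + extra_noise)
        # New corpus = mix of original-quality data and synthetic data.
        error = (1 - synthetic_share) * error + synthetic_share * synthetic_error
    return error

for gen in (0, 5, 10, 20):
    print(gen, round(error_after_generations(gen), 3))
```

With these made-up numbers the error rate only ever ratchets upward, which is the "more wrong, so on and so forth" part of the argument.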
~Great White Buffalo~