• 0x0@programming.dev · 2 months ago

    So the LLM answers what’s relevant according to stereotypes instead of what’s relevant… in reality?

    • Grimy@lemmy.world · edited · 2 months ago

      It just means there’s a bias in the data, one that is probably being amplified during training.

      It answers what’s relevant according to its training.
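
      A toy sketch of that amplification effect, with hypothetical 70/30 numbers, in Python:

          from collections import Counter

          # Hypothetical training data: one completion appears 70% of
          # the time, the other 30% -- a skew in the data itself.
          corpus = ["he"] * 70 + ["she"] * 30
          probs = {w: c / len(corpus) for w, c in Counter(corpus).items()}

          # A maximum-likelihood model learns roughly these frequencies,
          # but greedy decoding always emits the most probable option,
          # so a 70/30 skew in the data becomes 100/0 in the output.
          print(probs)                      # {'he': 0.7, 'she': 0.3}
          print(max(probs, key=probs.get))  # 'he', every single time

      Sampling with temperature softens this, but the model is still reproducing whatever skew was in its training data, not checking it against reality.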