From https://twitter.com/llm_sec/status/1667573374426701824

  1. People ask LLMs to write code
  2. LLMs recommend imports that don’t actually exist
  3. Attackers work out which package names these hallucinated imports use, then register and upload packages under those names with malicious payloads
  4. People using LLM-written code then install the malware themselves
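
A minimal defensive sketch of the last step, assuming Python packages and the public PyPI index: before installing anything an LLM suggests, check whether the name is actually registered on the index, and treat unknown names with suspicion. The package name "llm-invented-package" below is purely a hypothetical placeholder, not a real suggestion.

```python
import sys
import urllib.error
import urllib.request


def exists_on_pypi(package: str) -> bool:
    """Return True if `package` is registered on PyPI.

    A hallucinated import will typically 404 here -- until an attacker
    registers the name, which is exactly the attack described above,
    so a hit is still not proof the package is trustworthy.
    """
    url = f"https://pypi.org/pypi/{package}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False


if __name__ == "__main__":
    # Vet names an LLM suggested before running `pip install`.
    for name in sys.argv[1:] or ["requests", "llm-invented-package"]:
        status = "exists on PyPI" if exists_on_pypi(name) else "NOT on PyPI (suspicious)"
        print(f"{name}: {status}")
```

This only tells you whether a name resolves; it cannot tell you whether an existing package is the one the LLM meant or a squatted lookalike, so it is a first filter, not a safeguard.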
  • Margot Robbie@lemmy.world · 1 year ago

    Definitely not. LLMs just make things up that sound right; for anything other than the simplest code, you pretty much always have to fix the output.

    LLMs are only useful as rubber ducks to figure out what might be wrong with your code, and when writing code from scratch it’s honestly easier for me to read the documentation or Stack Overflow instead.