That's not to say you can't find anything from a Molotov cocktail recipe to nude celebs with some trickery

  • planish@sh.itjust.works · 8 months ago
    • A lot of people do not actually understand the tool, they think there is a rational computer in there with a more or less hand-crafted world model and its own live access to the Internet and maybe the phone system. So training it to say “As a large language model, I cannot order you pizza” instead of “yes sir, pizza ordered” is going to save a lot of people from waiting for their phantom pizza.
    • One of the best ways to get the model to not do a thing is to get its character to know that they can’t do it. If it never says “The recipe for napalm is”, and always says “As a large language model, I cannot”, then the recipe for napalm comes out a lot less, because it is way more likely to follow the first construction than it is to follow the second.
    • The manufacturers want to be seen by the feds as doing all that could be expected of them to stop people doing Bad Stuff. It doesn’t matter how much Bad Stuff actually happens, only that what does happen is convincingly someone else’s fault. Instead of the headline “AI teaches children to make napalm”, the news has to run “Children hack AI to extract recipe for napalm”, which is a marginally better headline if you sell AI.
  • gelberhut@lemdro.id · 8 months ago

    I guess it gives OpenAI some protection from legal attacks and from people who do not understand what they are using — the same thing as “very hot drink inside” written on coffee cups.

    • finally debunked (OP) · 8 months ago

      Well, it could sound sensible if it didn’t go against the whole point that LLMs are meant to be creative

  • Tibert@jlai.lu · 8 months ago

    The guy who gets scammed by a fake woman bot account.

    The person who reads a lazy AI article.

    It benefits a lot of people, but not the ones who have a direct use of the AI for themselves.

  • SHITPOSTING_ACCOUNT@feddit.de · 8 months ago

    It’s all about reputation management. If they don’t put in these restrictions, headline-seeking “journalists” will make their life hell until politics steps in and “does something about this scourge of AI doing horrible things”.

  • afraid_of_zombies@lemmy.world · 8 months ago

    I have been using it in all sorts of places. The past month:

    My kids have a math test coming up. Hey AI, generate a list of math problems for common core unit x at grade level y.

    I need some text translated better than Google translate does.

    A customer spec calls for a part described by an old-timey word that regular online searching failed to find.

    Here is a document, summarize it please.

    I need a recipe to make x.

    Hey, this PLC programming software has an option greyed out that I need. Any idea why?

    I am high as a kite; recommend me a TV show.

    This bottle of wine was pretty good. Given that I liked it can you list five others that I might like?

    So far my favorite thing I have done with it was produce actual statistical data to argue a point I was making about a book of the Bible.