• Xanvial@lemmy.one

    Most of the code I copied from GPT doesn’t even work. Seems I spent more time fixing it than thinking it through myself.

    • Pons_Aelius@kbin.social

      Seems I spent more time fixing it than thinking it through myself

      AI code is just bad code written by someone else that I now have to fix, and we all know the one job every coder loves is fixing code written by someone you cannot ask: “why did you do it this way?”

    • FaceDeer@kbin.social

      Often when I tell ChatGPT what error its code produced it will immediately figure out what the bug was and fix it.

      • lackthought@lemmy.sdf.org

        interacting with chatgpt is a learned skill

        i’ve used it several times, and while the initial code may have some issues, you can get them cleared up with a few direct follow-ups

        • FaceDeer@kbin.social

          I recall reading a while back about one person’s strategy: whenever ChatGPT generates code for him, he immediately tells ChatGPT “there’s a bug in that code” (without checking or specifying). It’ll often find one.

          Another approach I’ve heard of is to tell ChatGPT to play two roles when generating code: a programmer and a code reviewer. The code reviewer tidies up the initial code and fixes bugs.
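
          For the curious, a minimal sketch of what that two-role prompt might look like through the API; the prompt wording, the model name, and the helper function are assumptions for illustration, not anything the approach itself prescribes:

          ```python
          # Hypothetical sketch of the "programmer + reviewer" prompting trick,
          # using the official openai Python SDK. The model name is a placeholder.
          from openai import OpenAI

          client = OpenAI()  # reads OPENAI_API_KEY from the environment

          SYSTEM_PROMPT = (
              "You will play two roles. First, as a programmer, write the requested "
              "code. Then, as a code reviewer, point out any bugs in that code and "
              "output a corrected final version."
          )

          def generate_reviewed_code(task: str, model: str = "gpt-4o-mini") -> str:
              # One round trip: the model drafts the code and reviews it in the same reply.
              response = client.chat.completions.create(
                  model=model,
                  messages=[
                      {"role": "system", "content": SYSTEM_PROMPT},
                      {"role": "user", "content": task},
                  ],
              )
              return response.choices[0].message.content

          print(generate_reviewed_code("Write a Python function that parses ISO 8601 dates."))
          ```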

          Since ChatGPT’s code often works fine for me, I don’t usually bother with these steps initially; I’m usually just after a quick and dirty script for a one-off task, so the quality doesn’t matter much in my case.

          • jcg@halubilo.social

            And you know what you call changing words around to get a computer to do what you want? That’s programming, baby! We are programming programmers!

      • philm@programming.dev

        Yeah, this is the way to interact with it. It makes sense as well: since it’s only predicting the next word based on the previous words, it can spot a lot more in hindsight and in general be smarter about it.

      • orca@orcas.enjoying.yachts

        I do this with TypeScript error codes. It’s great at breaking down the problem. I never just copy-paste code from it, and I don’t think anyone should do that anyway.

    • KevonLooney@lemm.ee

      The art is the same. AI is just like asking an art student to draw you a picture. Might be good, might look terrible. Don’t ask for hands.

  • BasicTraveler@kbin.social

    I’ve had success with trivial things, like writing a log file parser with a given pattern, or a basic three-part left-right-center header in HTML. Works OK for trivial side projects; I would never trust it in production. It’s a tool, nothing more at this point. Like an electric drill: better than a hand crank, but you still need to know how to use it.
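
    As a concrete (made-up) example, the kind of trivial log-parser script meant here might look something like the sketch below; the log line format and field layout are assumptions for illustration:

    ```python
    # Toy log parser of the sort described above. The assumed line format is
    # "2024-01-01 12:00:00 ERROR something went wrong".
    import re
    import sys
    from collections import Counter

    LINE_RE = re.compile(r"^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) (\w+) (.*)$")

    def count_levels(path: str) -> Counter:
        """Count how many lines of each log level appear in the file."""
        levels = Counter()
        with open(path, encoding="utf-8") as handle:
            for line in handle:
                match = LINE_RE.match(line.rstrip("\n"))
                if match:
                    _, level, _ = match.groups()
                    levels[level] += 1
        return levels

    if __name__ == "__main__":
        for level, count in count_levels(sys.argv[1]).most_common():
            print(f"{level}: {count}")
    ```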

  • tatterdemalion@programming.dev

    Yeah, I genuinely hope we get to the point where I don’t have to write code. Let me describe the architecture and algorithms with my voice, in English. I’d much rather spend my time solving abstract problems than typing syntax. If I have to essentially “teach” the AI what I want by dropping down the ladder of abstraction sometimes, that’s OK.

  • Dept@lemmy.sdf.org

    Me, a CS student who bet my whole future on this field: yes, totally happy about this.

    • dexx4d@lemmy.ca

      Me, with 20 years of experience making software: yes, totally happy about this. (This makes it much easier to keep up with the latest newfangled bullshit.)

  • Ronno@kbin.social

    It won’t, because AI generates based on existing content but won’t make something new.

    • Hazzia@discuss.tchncs.de

      Clearly we just need to feed it the entire documentation for every single piece of technology in use.

      Surely nobody has ever encountered an issue that isn’t addressed in the documentation, right guys???

    • TimeSquirrel@kbin.social

      Wonder if human knowledge is now going to start degrading like a reposted JPEG as AI-generated information is recycled again and again into more AI systems.

  • blackstrat@lemmy.fwgx.uk

    If AI can write code based on English input, then we should be able to feed it a spec and just deploy the output to production.

    • jadero@programming.dev

      And, as always, attempting to code to that spec will expose contradictions, inconsistencies, and frequently produce something that the customer judges as unfit for purpose.

      Coding has never been the toughest problem, except in the matter of security attacks.