Don’t learn to code: Nvidia’s founder Jensen Huang advises a different career path

Thanks to AI, says Huang, everybody will soon become a capable programmer simply by using human language.

  • kescusay@lemmy.world · 17 points · 9 months ago

    Well. That’s stupid.

    Large language models are amazingly useful coding tools. They help developers write code more quickly.

    They are nowhere near being able to actually replace developers. They can’t know when their code doesn’t make sense (which is frequently). They can’t know where to integrate new code into an existing application. They can’t debug themselves.

    Try to replace developers with an MBA using a large language model AI, and once the MBA fails, you’ll be hiring developers again - if your business still exists.

    Every few years, something comes along that makes bean counters who are desperate to cut costs, and scammers who are desperate for a few bucks, declare that programming is over. Code will self-write! No-code editors will replace developers! LLMs can do it all!

    No. No, they can’t. They’re just another tool in the developer toolbox.

    • paf0@lemmy.world · 7 points · 9 months ago

      I’ve been a developer for over 20 years, and when I see Autogen generate code, decide to execute that code, and then fix errors by deciding to install missing dependencies, I can tell you I’m concerned. LLMs are a tool, but a tool that might evolve to replace us. I expect a lot of software roles in ten years to look more like an MBA orchestrating AI agents to complete a task. Coding skills will still matter, but not as much as soft skills will.
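
      For what it’s worth, the loop being described looks roughly like this. It’s a minimal sketch based on pyautogen’s documented two-agent pattern; the task prompt and config values are made-up placeholders, not anything from the article.

      ```python
      # Sketch of the Autogen generate -> execute -> fix loop described above.
      # Assumes `pip install pyautogen` and an OpenAI API key; the task and
      # config values below are illustrative placeholders.
      from autogen import AssistantAgent, UserProxyAgent

      llm_config = {"model": "gpt-4", "api_key": "sk-..."}  # placeholder

      # The assistant proposes code; the proxy runs it and feeds errors back.
      assistant = AssistantAgent("coder", llm_config=llm_config)
      executor = UserProxyAgent(
          "executor",
          human_input_mode="NEVER",  # no human in the loop
          code_execution_config={"work_dir": "scratch", "use_docker": False},
      )

      # The agents iterate autonomously: write code, run it, read the
      # traceback, install what's missing, retry - the cycle described above.
      executor.initiate_chat(
          assistant,
          message="Download a year of NVDA prices and plot them to nvda.png",
      )
      ```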

      • kescusay@lemmy.world · 5 points · 9 months ago (edited)

        I really don’t see it.

        Think about a modern application. Think about the file structure, how the individual sources interrelate, how non-code assets are stored, how applications are deployed, and all the other bits and pieces that go into an application. An AI can’t know any of that without being trained - by a human - on the specifics of that application’s needs.

        I use Copilot for my job. It’s very nice, and makes my job easier. And if my boss fired me and the rest of the team and tried to do it himself, the application would be down in a day, then irrevocably destroyed in a week. Then he’d be fired, we’d be rehired, and we - unlike my now-former boss - would know things like how to revert the changes he made when he broke everything while trying to make Copilot create a whole new feature for the application.

        AI code generation is pretty cool, but without the capacity to know what code actually should be generated, it’s useless.

        • paf0@lemmy.world · 1 point · 9 months ago

          It’s just going to create a summary story about the code base and reference that story as it implements features, not that different from how a human does it. That’s not necessarily something it can do now, but it will come. Developers are not special, and I was never talking about Copilot.
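
          Hypothetically, the shape of that idea is something like the sketch below. Every function name, prompt, and the model choice are my own inventions for illustration; nothing in the thread specifies them.

          ```python
          # Hypothetical sketch of the "summary story" idea: summarize each
          # source file once, then supply the collected summaries as standing
          # context when asking for a new feature. Prompts, model name, and
          # paths are all assumptions.
          from pathlib import Path

          from openai import OpenAI

          client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

          def summarize_repo(root: str) -> str:
              """Build a one-paragraph 'story' for each Python file in the repo."""
              parts = []
              for path in sorted(Path(root).rglob("*.py")):
                  resp = client.chat.completions.create(
                      model="gpt-4o-mini",
                      messages=[{
                          "role": "user",
                          "content": f"In one paragraph, what does {path} do?\n\n"
                                     + path.read_text(),
                      }],
                  )
                  parts.append(f"## {path}\n{resp.choices[0].message.content}")
              return "\n\n".join(parts)

          def implement_feature(story: str, request: str) -> str:
              """Ask for new code with the repo 'story' as standing context."""
              resp = client.chat.completions.create(
                  model="gpt-4o-mini",
                  messages=[
                      {"role": "system", "content": "Codebase overview:\n" + story},
                      {"role": "user", "content": request},
                  ],
              )
              return resp.choices[0].message.content

          # Usage (hypothetical):
          #   story = summarize_repo("src")
          #   print(implement_feature(story, "Add a CSV export endpoint"))
          ```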

          • kescusay@lemmy.world · 1 point · 9 months ago

            I don’t think most people grok just how hard implementing that kind of joined-up thinking and metacognition is.

            You’re right, developers aren’t special, except in those ways all humans are, but we’re a very long way indeed from being able to simulate them in AI - especially in large language models. Humans automatically engage in joined-up thinking, second-order logic, and so on, without having to consciously try. Those are all things a large language model literally can’t do.

            It doesn’t know anything. It can’t conceptualize a “summary story,” or understand parts that it might get wrong in such a story. It’s glorified autocomplete.

            And that can be extraordinarily useful, but only if we’re honest with ourselves about what it is and is not capable of.

            Companies that decide to replace their developers with one guy using ChatGPT or Gemini or something will fail, and that’s going to be true for the foreseeable future.

            • paf0@lemmy.world · 1 point · 9 months ago

              Try for a second to think beyond what they’re able to do now and think about the future. Also, educate yourself on Autogen and CrewAI; you actually haven’t addressed anything I said because you’re too busy pontificating.

              • kescusay@lemmy.world · 1 point · 9 months ago

                “Try for a second to think beyond what they’re able to do now and think about the future.”

                I am. In the future, they will need to be able to perform tasks using joined-up thinking, second-order logic, and metacognition if they’re going to replace people like me with AI. And that is a very hard goal to achieve. Maybe not P = NP hard, but by no means trivial.

                “Also, educate yourself on Autogen and CrewAI; you actually haven’t addressed anything I said because you’re too busy pontificating.”

                I have. My company looked at Autogen. We concluded it wasn’t worth it. The solution to AI agents not being able to actually understand what they’re doing isn’t to amplify the problem by creating teams of them.

                Every few years, something new comes along driven by incredible hype, and people declare programming to be dead. They insist a robot will be able to do my job. I have yet to see a technology that will plausibly do that in ten years, let alone now. And all the hype is built on a foundation of ignorance over how complicated a modern, enterprise-ready application is, and how necessary being able to think about its many moving parts is.

                You know who doesn’t suffer from that ignorance? Microsoft, the creators of Autogen. And they’re currently hiring developers, not laying them off and replacing them with Autogen.

      • rottingleaf@lemmy.zip · 0 points · 9 months ago

        Well, I sometimes see tools at my job that are supposed to be usable by people like that. In reality, they can’t use them 90% of the time.

        That’d be because many people assume engineers only handle the intermediate technical details, and that the general idea is already clear to the MBA. In fact, it’s not.