Over half of all tech industry workers view AI as overrated

  • eestileib@sh.itjust.works · +35/-1 · 8 months ago

    Over half of tech industry workers have seen the “great demo -> overhyped bullshit” cycle before.

      • VintageTech@sh.itjust.works · +8 · 8 months ago

        Once we’re able to synergize the increased throughput of our knowledge capacity we’re likely to exceed shareholder expectation and increase returns company wide so employee defecation won’t be throttled by our ability to process sanity.

    • SineSwiper@discuss.tchncs.de · +10 · 8 months ago

      NoSQL, blockchain, crypto, the metaverse, just to name a few recent examples.

      AI is overhyped, but so far it's more useful than any of those other examples.

  • steeznson@lemmy.world · +30 · 8 months ago

    I remember when it first came out, I asked it to help me write a MapperConfig custom strategy, and the answer it gave me was so fantastically wrong - even with prompting - that I lost an afternoon. Honestly, the only useful thing I've found for it is getting it to spot potential syntax errors in Terraform code that the plan might miss. It doesn't even complement my programming skills the way a traditional search engine can; instead it assumes a solution that is usually wrong, and you're left trying to build your house on the boilerplate sand it spits out at you.

    • lloram239@feddit.de · +9 · 8 months ago

      It's a general problem with ChatGPT (free): the more obscure the topic, the more useless the answers become. It works pretty well for Wikipedia-style general knowledge, but everything that goes even a little deeper is a mess. This is true even for things that shouldn't be that obscure, e.g. pop-culture topics like movies. It can give you a summary of Star Wars, but anything even a little outside the mainstream it makes up on the spot.

      How much better is ChatGPT-Pro when it comes to this? Can it answer /r/tipofmytongue/-style questions?

      • applebusch@lemmy.world · +4 · 8 months ago

        I've found the free one can sometimes answer tip-of-my-tongue questions, but yeah, for anything even remotely obscure it will just lie and say it doesn't exist, especially if you stray a little too close to the puritanical guard rails. One time I was going down a rabbit hole researching human sex organ variations, and it flat out told me the people in South America who grow a penis at 12 don't exist, until I found the name guevedoces on my own, and wouldn't you know it, then it knew what I was talking about.

    • phoneymouse@lemmy.world · +6 · edited · 8 months ago

      I also have tried to use it to help with programming problems, and it is confidently incorrect a high percentage (around 50%) of the time. It will fabricate package names, functions, and more. When you ask it to correct itself, it will give another confidently incorrect answer. Do this a few more times and you can end up with it suggesting the first incorrect answer it gave you, at which point you realize it is literally leading you in circles.

      It’s definitely a nice option to check something quickly, and it has given me some good information, but you really can’t blindly trust its output.

      At least with programming, you can validate fairly quickly that it is giving bad information. With other real-life applications, such as cooking, baking, or trip planning, the consequences of bad information could be quite a bit worse.

  • shirro@aussie.zone · +21/-1 · edited · 8 months ago

    Many areas of machine learning, particularly LLMs, are making impressive progress, but the usual ycombinator techbro types are overhyping things again. Same as every other bubble, including the original Internet one, the crypto scams, and half the bullshit companies they run that add fuck-all value to the world.

    The cult of bullshit around AI is a means to fleece investors. Seen the same bullshit too many times. Machine learning is going to have a huge impact on the world, same as the Internet did, but it isn’t going to happen overnight. The only certain thing that will happen in the short term is that wealth will be transferred from our pockets to theirs. Fuck them all.

    I skip most AI/ChatGPT spam in social media with the same ruthlessness I skipped NFTs. It isn’t that ML doesn’t have huge potential but most publicity about it is clearly aimed at pumping up the market rather than being truly informative about the technology.

    • Barack_Embalmer@lemmy.world · +2/-1 · 8 months ago

      ML has already had a huge impact on the world (for better or worse), to the extent that Yann LeCun proposes that the tech giants would crumble if it disappeared overnight. For several years it’s been the core of speech-to-text, language translation, optical character recognition, web search, content recommendation, social media hate speech detection, to name a few.

      • shirro@aussie.zone · +1 · 8 months ago

        ML based handwriting recognition has been powering postal routing for a couple of decades. ML completely dominates some areas and will only increase in impact as it becomes more widely applicable. Getting any technology from a lab demo to a safe and reliable real world product is difficult and only more so when there are regulatory obstacles and people being dragged around by vehicles.

        For the purposes of raising money from investors it is convenient to understate problems and generate a cult of magical thinking about technology. The hype cycle and the manipulation of the narrative has been fairly obvious with this one.

  • MeanEYE@lemmy.world · +18 · 8 months ago

    Of course, because the hype didn't come from tech people but from content writers, designers, PR people, etc., who all thought they didn't need tech people anymore. The moment ChatGPT became popular, I started getting debugging requests from a few designers. They went and asked it to write a plugin or a script they needed. The only problem was that it didn't really work as it should. Debugging that code was a nightmare.

    I've seen a few clever uses. A couple of our clients made a "chat bot" whose reference material was their poorly written documentation. So you'd ask the bot something technical related to that documentation and it would decipher the mess. I still claim writing better documentation would have been the smarter move, but what do I know.
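    That kind of docs bot is usually retrieval plus generation: pick the documentation chunk most relevant to the question and hand it to the model as context. A minimal sketch of the retrieval half in plain Python, with made-up doc snippets and naive word-overlap scoring (real systems use embeddings, but the shape is the same):

```python
# Sketch of the retrieval step behind a "chat with your docs" bot:
# score each documentation chunk by word overlap with the question,
# then use the best match as context in the model prompt.
# The snippets below are invented for illustration.

def best_chunk(question, chunks):
    q_words = set(question.lower().split())
    def score(chunk):
        return len(q_words & set(chunk.lower().split()))
    # Ties (including all-zero scores) fall back to the first chunk.
    return max(chunks, key=score)

docs = [
    "To rotate the API key, open Settings and click Regenerate.",
    "Webhooks retry failed deliveries three times with backoff.",
    "Exports are generated nightly and kept for thirty days.",
]

context = best_chunk("how do I rotate my API key", docs)
prompt = f"Answer using only this documentation:\n{context}\n\nQ: how do I rotate my API key"
print(context)
```

    The generation step then sends `prompt` to whatever model the bot uses; the retrieval is what lets it "decipher the mess" instead of hallucinating.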

  • thorbot@lemmy.world · +18/-3 · 8 months ago

    That's because it is overrated, and the people in the tech industry are actually qualified to make that determination. It's a glorified assistant, nothing more. We've had these for years; they're just getting a little bit better. It's not gonna replace a network stack admin or a programmer anytime soon.

  • milkjug@lemmy.wildfyre.dev · +14 · 8 months ago

    I have a doctorate in computer engineering, and yeah it’s overhyped to the moon.

    I'm oversimplifying, and someone will ackchyually me, but once you understand the core mechanics the magic is somewhat diminished. It's linear algebra and matrices all the way down.

    We got really good at parallelizing matrix operations and storing large matrices and the end result is essentially “AI”.
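    To make that concrete: a single network layer is just a matrix-vector product plus an elementwise nonlinearity, and a "deep" network is these chained together. A toy sketch in plain Python (illustrative only; real frameworks do exactly this, just massively parallel on GPUs):

```python
# One "neural network" layer: relu(W @ x + b), written out with plain
# lists so the linear algebra is visible. The weights here are arbitrary
# toy numbers, not a trained model.

def layer(weights, bias, x):
    """Matrix-vector product plus bias, then elementwise ReLU."""
    out = []
    for row, b in zip(weights, bias):
        s = sum(w * xi for w, xi in zip(row, x)) + b  # one dot product
        out.append(max(0.0, s))                        # ReLU nonlinearity
    return out

# A two-layer "network" is just the same operation applied twice.
W1 = [[0.5, -0.2], [0.1, 0.4]]
b1 = [0.0, 0.1]
W2 = [[1.0, -1.0]]
b2 = [0.0]

h = layer(W1, b1, [1.0, 2.0])  # hidden activations
y = layer(W2, b2, h)           # output
print(y)
```

    Training is the same picture run backwards: nudge the matrix entries by gradient descent. The engineering breakthrough was doing this for matrices with billions of entries.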

  • kromem@lemmy.world · +20/-8 · 8 months ago

    In my experience, well over half of tech industry workers don’t even understand it.

    I was just trying to explain to someone on Hacker News that, no, the "programmers" of LLMs do not in fact know what the LLM is doing, because it isn't programmed directly at all (which, even after several rounds of several people explaining, still doesn't seem to have sunk in).

    Even people who understand the tech pretty well in general are still remarkably misinformed about it in various popular BS ways, such as claiming it's just statistics and a Markov chain, completely unaware of the multiple studies over the past 12 months showing that even smaller toy models are capable of developing abstract world models, as long as those can be structured as linear representations.

    It's to the point that unless it's a thread explicitly about actual research papers, where explaining nuances seems fitting, I don't even bother trying to educate the average tech commentators regurgitating misinformation anymore. They typically only want to confirm their biases anyway, and have such a poor understanding of specifics that it's like explaining nuanced aspects of the immune system to anti-vaxxers.

  • rsuri@lemmy.world · +11 · 8 months ago

    I use github copilot. It really is just fancy autocomplete. It’s often useful and is indeed impressive. But it’s not revolutionary.

    I've also played with ChatGPT and tried to use it to help me code, but never successfully. The reality is I only try it if Google has failed me, and then it usually makes up something that sounds right but is in fact completely wrong, probably because it's been trained on the same insufficient data I've been looking at.

    • MeanEYE@lemmy.world · +3 · 8 months ago

      I still consider Copilot a serial license violator. So many things on GitHub are GPL-licensed, and completing your code with someone else's, or at least a variation of it, without giving credit is a clear violation of the license.

    • 1984@lemmy.today · +1 · 8 months ago

      For me it depends a lot on the question. For tech questions like programming language questions, it’s much faster than a search engine. But when I did research for cars and read reviews, I used Kagi.

    • thelastknowngod@lemm.ee · +1 · 8 months ago

      Yeah agreed. I use copilot too. It’s fine for small, limited tasks/functions but that’s about it. The overwhelming majority of my work is systems design and maintenance though… There’s no AI for that…

  • mdurell@lemmy.world · +11 · 8 months ago

    As with all tech, it depends. It's another tool in my toolbox, and a useful one at that. Will it replace me in my job? Not anytime soon. However, it will make me more proficient at my job, and my 30+ years of experience will keep its bad ideas out of production. If my bosses decide tomorrow that I can be replaced with AI in its current state, they deserve what they have coming. That said, they are willing to pay for additional tooling and have provided me with multiple AI engines, and I couldn't be more thrilled. I'd rather give AI a simple task to do the busy work than work with overseas developers who get it wrong time and time again, take a week to iterate, and ask how for loops work in Python.

    • AA5B@lemmy.world · +2 · edited · 8 months ago

      I'm definitely looking forward to adding this as a tool: I'm in DevOps, so I have to jump back and forth among many different programming languages. It should really help me switch context faster.

      … somehow I’m one of the “experts” using JavaScript despite never learning or using it. Hooray for my search engine skills I guess!

  • Echo Dot@feddit.uk · +11 · edited · 8 months ago

    That is a terrible graph. There's no y-axis, there's no indication of what the scale is, and I don't know how many people they asked, who those people were, or which tech companies they worked at.

    "Just over 23% believe it is rated fairly, while a quarter of respondents were presumably proponents of the tech as they said it was underrated. However, 51.6% of people said it was overrated."

    That sentence is a fantastic demonstration of how bad this article is. The article says that a quarter say the technology is underrated, but it looks more like half to me. Not that it matters, because, as I said, the scale is useless. They also cite 51.6%, and I don't know how they came up with that number, because again we don't know what the total was, just that it was more than 1,500. You can't calculate a percentage without knowing the total.

    The graph has 11 options, so were they rating it on a scale of 1 to 11? What's with that?

  • Blue and Orange@lemm.ee · +9 · 8 months ago

    The best use I’ve found for AI is getting it to write me covering letters for job applications. Even then I still need to make a few small adjustments. But it saves a bit of time and typing effort.

    Other than that, I just have fun with it making stupid images and funny stories based on inside jokes.

  • Dewded@lemmy.world · +10/-1 · 8 months ago

    I work at an AI company. 99% of our tech relies on tried-and-true standard computer vision solutions rather than machine-learning-based ones. ML is just that unreliable when production use requires pixel precision.

    We might throw in gradient descent here or there, but not for any learning ops.
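    A classic example of hitting pixel-level (even subpixel) precision with no learning at all: refine a detector's integer peak by fitting a parabola through the peak sample and its two neighbours. This is a generic textbook trick from correlation and edge-localization pipelines, not necessarily this company's actual method:

```python
# No-learning subpixel localization: given some filter response sampled
# at integer pixels, fit a parabola through the discrete peak and its two
# neighbours; the parabola's vertex gives a fractional-pixel offset.

def subpixel_peak(values, i):
    """Refine integer peak index i using its neighbours; returns a float index."""
    y0, y1, y2 = values[i - 1], values[i], values[i + 1]
    denom = y0 - 2 * y1 + y2
    if denom == 0:
        return float(i)          # flat region: no refinement possible
    offset = 0.5 * (y0 - y2) / denom
    return i + offset            # offset lies in (-0.5, 0.5) for a true peak

# Samples of a response curve whose true maximum lies between pixels 2 and 3.
resp = [0.1, 0.4, 0.9, 0.8, 0.2]
peak = subpixel_peak(resp, resp.index(max(resp)))
print(peak)  # slightly to the right of pixel 2
```

    Deterministic, cheap, and its failure modes are easy to reason about, which is exactly why classical methods still dominate when precision guarantees matter.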