I came across this article in another Lemmy community that dislikes AI. I’m reposting instead of cross-posting so that we can have a conversation about how “work” might be changing with advancements in technology.
The headline is clickbaity because Altman was referring to how farmers who lived decades ago might perceive the work “you and I do today” (Altman included) as not looking like real work.
The fact is that most of us work at many levels of abstraction from human survival. Very few of us are farming, building shelters, protecting our families from wildlife, or doing the back-breaking labor jobs that humans were forced to do generations ago.
In my first job, IT support, it was not lost on me that all day long I pushed buttons to make computers beep in friendlier ways. There was no physical result to see, no produce to harvest, no pile of wood transitioned from a natural to a chopped state, nothing tangible to step back and enjoy at the end of the day.
Bankers, fashion designers, artists, video game testers, software developers, and people in countless other professions experience something quite similar. Yet all of these jobs do in some way add value to the human experience.
As humanity’s core needs have been met with technology requiring fewer human inputs, our focus has been able to shift to creating value in less tangible, but perhaps not less meaningful, ways. This has created a more dynamic and rich life experience than any of those previous farming generations could have imagined. So while it doesn’t look like the work those farmers were accustomed to, humanity has shifted its attention to other types of work, for the benefit of many.
I postulate that AI - as we know it now - is merely another technological tool that will allow new layers of abstraction. At one time bookkeepers had to write in books, now software automatically encodes accounting transactions as they’re made. At one time software developers might spend days setting up the framework of a new project, and now an LLM can do the bulk of the work in minutes.
These days we have fewer bookkeepers - most companies don’t need armies of clerks anymore. But now we have more data analysts who work to understand the information and make important decisions. In the future we may need fewer software coders, and in turn, there will be many more software projects that seek to solve new problems in new ways.
How do I know this? I think history shows us that innovations in technology always bring new problems to be solved. There is an endless reservoir of challenges to be worked on that previous generations didn’t have time to think about. We are going to free minds from tasks that can be automated, and many of those minds will move on to the next level of abstraction.
At the end of the day, I suspect we humans are biologically wired with a deep desire to produce rewarding and meaningful work, and much of the output of our abstracted work is hard to see and touch. Perhaps this is why I enjoy mowing my lawn so much, no matter how advanced robotic lawn-mowing machines become.
Starting this conversation with Sam Altman is like showing up at a funeral in a clown car.
Or showing up at a strip club with communion wafers
Or both, not a singularity, but a duality
So long as we’re not engaging with someone quoting Altman, I’m good with anything.
At one time software developers might spend days setting up the framework of a new project, and now an LLM can do the bulk of the work in minutes.
No and no. Have you ever coded anything?
Yeah, I have never spent “days” setting anything up. Anyone who can’t do it without spending “days” struggling with it is not reading the documentation.
Sometimes documentation is inconsistent.
Have you ever built anything with your hands that mattered?
Yes. How is it relevant to modern SWE practices?
OP wrote 10 paragraphs and your head is still in devland.
I know this was aimed at someone else. But my response is “Every day.” What is your follow-up question?
If your argument attacks my credibility, that’s fine, you don’t know me. We can find cases where developers use the technology and cases where they refuse.
Do you have anything substantive to add to the discussion of whether LLMs are anything more than a tool that lets workers abstract further, advancing every profession they touch toward any of: better / faster / cheaper / easier?
Says the guy who hasn’t worked a day in their life
From the article:
“The thing about that farmer,” Altman said, is not only that they wouldn’t believe you, but “they very likely would look at what you do and I do and say, ‘that’s not real work.'”
I think he pretty much agrees with you.
Psychologically speaking, please stop calling it AI. The term raises unrealistic expectations. They are Large Language Models.
Raising unrealistic expectations is what companies like OpenAI are all about
In computer science, machine learning and LLMs are part of AI. Before that, other algorithms were considered part of AI. You may disagree, probably because of all the hype around LLMs, but they are AI.
Granting them AI status, we should recognize that they “gained their abilities” by training on the rando junk that people post on the internet.
I have been working with AI for computer programming, semi-seriously for 3 months and pretty intensively for the last two weeks. I have also been working with humans on computer programming for 35 years. AI’s “failings” are people’s failings: they don’t follow directions reliably, and if you don’t manage them they’ll go down rabbit holes of little to no value.

With management, working with AI is like an accelerated experience with an average person, so the need for management becomes even more intense. Where you might let a person work independently for a week and then see what needs correcting, you really need to stay on top of AI’s “thought process” on more of a 15-30 minute basis.

It comes down to the “hallucination rate,” which is a very fuzzy metric, but it works pretty well: at a hallucination rate of 5% (95% successful responses), AI is just about on par with human workers, but faster for complex tasks and slower for simple answers.
Interestingly, for the past two weeks, I have been having some success with applying human management systems to AI: controlled documents, tiered requirements-specification-details documents, etc.
You missed the psychology part?
No, I saw it, but I was replying to the “please stop calling it AI” part. That is a computer science term, not a psychology term. Psychologists have no business dictating what computer scientists call these systems.
What do I even answer here…
Who is even talking about computer scientists? It’s the public, and especially company bosses, who get the wrong expectations about “intelligence.” It’s about psychology, not about scientifically correct names.
Ah, I see. We in the software industry are no longer allowed to use our own terms because outsiders co-opted them.
Noted.
deleted by creator
Dude, what age are you? 13? Log off and go play with your friends.
The solution to the public misusing technical terms isn’t to change the technical terms, but to educate the public. All of the following fall under AI:
- pathing algorithms of computer opponents, but probably not the decisions those opponents make (e.g., whom to attack; that’s usually based on manually specified logic)
- the speech-to-text your phone used before Gemini or whatever it’s called now on Android (Gemini is also AI, just a different type of AI)
- home camera systems that can detect people vs animals, and sometimes classify those animals by species
- DDoS protection systems and load balancers for websites probably use some type of AI
AI is a broad field, and you probably interact with non-LLM variants every day, whether you notice or not. Here’s a Wikipedia article that goes through a lot of it. LLMs/GPT are merely one small subfield in the larger field of AI.
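To make the first bullet concrete, here’s a minimal sketch of the kind of pathing logic game “AI” has used for decades. It’s ordinary breadth-first search on a grid, no learning involved; the grid, names, and setup are purely illustrative, not from any particular game:

```python
from collections import deque

def bfs_path(grid, start, goal):
    # Shortest path on a grid where 0 = walkable, 1 = wall.
    # Classic "game AI" pathing; A* is the usual upgrade.
    rows, cols = len(grid), len(grid[0])
    came_from = {start: None}
    queue = deque([start])
    while queue:
        cur = queue.popleft()
        if cur == goal:
            path = []
            while cur is not None:  # walk the parent links back to start
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]       # reversed, so it reads start -> goal
        r, c = cur
        for step in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = step
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and step not in came_from):
                came_from[step] = cur
                queue.append(step)
    return None  # goal unreachable

# A computer opponent routing around a wall:
level = [[0, 1, 0],
         [0, 1, 0],
         [0, 0, 0]]
print(bfs_path(level, (0, 0), (0, 2)))
```

Nobody would mistake that for sentience, but it has always counted as AI.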
I don’t understand how people went from calling the computer player in their game “AI” (or, even older, “CPU”), which nobody mistook for actual intelligence, to now believing AI means something is sentient. Maybe it’s because LLMs are more convincing since they do a much better job with language, idk, but it’s the same category of thing under the hood.

ChatGPT isn’t “thinking.” When it claims to “think,” it’s basically turning a prompt into a set of things to “think” about (it generates and answers related prompts), and then uses that set of things in its context to provide an answer. It’s not actually thinking as people do; it’s following a set of statistically motivated steps based on your prompt to generate a relevant answer. It’s a lot more complex than the Warcraft 2 bot you played against as a kid, but it’s still following steps a human designed, along with some statistical methods to adapt to things the developer didn’t encounter.
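For what it’s worth, that “thinking” loop can be sketched in a few lines. This is a toy illustration of the idea described above, not any vendor’s actual implementation; `llm` here is a hypothetical prompt-in, text-out callable standing in for whatever API you use:

```python
def answer_with_thinking(llm, question):
    # 1. Expand the prompt into related sub-questions.
    sub_questions = llm(f"List the sub-questions needed to answer: {question}")
    # 2. "Think": generate answers to those sub-questions.
    notes = llm(f"Briefly answer each of these:\n{sub_questions}")
    # 3. Answer the original question with the scratch work in context.
    return llm(f"Question: {question}\nNotes:\n{notes}\n"
               "Using the notes, give a final answer.")
```

Every step is still statistical next-token prediction; the “reasoning” is just more generated text fed back in as context.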
What do we need the mega rich for anyway? They aren’t creative and are easily replaced with AI at this point.
What do we need the mega rich for anyway?
Supposedly, the creation of and investment in industries, then managing those businesses, which also supposedly provide employment for the thousands who make things for them. Except they’ll find ways to cut costs and maximize profit, like looking for cheaper labor while thinking of building the next megayacht to flex at Monte Carlo next summer.
Can’t AI replace Sam Altman?
Says the guy who never did real work in his life
deleted by creator
The problem is that the capitalist investor class, by and large, determines what work will be done, what kinds of jobs there will be, and who will work those jobs. They are becoming increasingly out of touch with reality as their wealth and power grow, and they seem to be trying to mold the world into something along the lines of what Curtis Yarvin advocates, which most people would consider very dystopian.
This discussion also ignores the fact that, by some widely cited estimates, around 95% of corporate AI projects currently fail, and studies suggest that LLM use can hurt the productivity of programmers. But yeah, there will almost surely be breakthroughs in the future that produce more useful AI tech; nobody knows what the timeline for that is, though.
But isn’t the investment still driven by consumption in the end? They invest in what makes money, but in the end things people are willing to spend money on make money.
You’d think so, but unfortunately not. Venture capital is completely illogical, designed around boom-or-bust “moonshot” ideas that are supposed to completely change everything. So this money isn’t driven by actual consumption, but rather speculation. I can’t really speak to other forms of investment, but I suspect it doesn’t get a whole lot better. The economy has become far too financialised, with a fiat currency that is completely separate from actual intrinsic value. That’s why a watch can cost more than a family home, which isn’t true consumption, just this weird concept of “wealth.”
They invest in things they think they will be able to sell later for a higher price. Expected consumption is sometimes part of their calculations, but they are increasingly out of touch with reality (see blockchain, the metaverse, Tesla, etc.). Sometimes they knowingly take a loss to gain power over the masses (Twitter, the Washington Post). They are also powerful enough to induce consumption (bribing governments for contracts, laws, bailouts, and regulations that ensure their investments will be fruitful). They are powerful enough to heavily influence which politicians get elected, choosing whom they want to bribe. They are powerful enough to force the businesses they are invested in to buy from and sell to each other. The largest, most profitable companies produce nearly nothing; they use their near-monopoly positions to extract rent (i.e. enshittification/technofeudalism).
To be fair, a lot of jobs in capitalist societies are indeed pointless. Some of them even actively do nothing but subtract value from society.
That said, people still need to make a living, and his piece-of-shit artificial insanity is only making it more difficult. How about we stop starving people to death and propose solutions to the problem?
There’s a book Bullshit Jobs that explores this phenomenon. Freakonomics also did an episode referring to the book, which I found interesting.
Bullshit Jobs: A Theory is a 2018 book by anthropologist David Graeber that postulates the existence of meaningless jobs and analyzes their societal harm. He contends that over half of societal work is pointless and becomes psychologically destructive when paired with a work ethic that associates work with self-worth.
They may seem pointless to those outside the organization, but as long as someone is willing to pay for them, someone considers them to have value.
No one is “starving to death,” but you do have people just barely scraping by.
In many bureaucracies there’s plenty of practically valueless work going on.
Because some executive wants to brag about having over a hundred people under them. Because some process requires a sort of document that hasn’t been used in decades, but no one has the time to validate what does or does not matter anymore. Because of a lot of little nonsense reasons where the path of least resistance is to keep plugging away. Because if you are 99 percent sure something is a waste of time and you optimize it away, there’s a 1% chance you’ll catch hell for a mistake and almost no chance you’ll get real recognition for the efficiency boost if it pans out.
Why capitalist societies specifically?
Fuck you
He doesn’t know Jobs was wiped out by cancer?
I have a feeling people are gonna remember that when his job gets wiped out.
If OpenAI gets wiped out, maybe it wasn’t even a “real company” to start with
I agree with the sentiment, as bad as it feels to agree with Altman about anything.
I work as a software developer on the backend of the website/loyalty app of a large retailer.
My job is entirely useless. I mean, I’m doing a decent job keeping the show running, but (a) management shifts priorities all the time and about 2/3 of all the “super urgent” things I work on get cancelled before they get released, and (b) if our whole department instantly disappeared and the app and website were just gone, nobody would care. Like, literally. We have an app and a website because everyone has to have one, not because there’s a real benefit to anyone.
The same is true for most of the jobs I worked in, and about most jobs in large corporations.
So if AI could somehow replace all these jobs (which it can’t), nothing of value would be lost, apart from the fact that our society requires everyone to have a job, bullshit or not. And these bullshit jobs even tend to be the better-paid ones.
So AI doing the bullshit jobs isn’t the problem, but people having to do bullshit jobs to get paid is.
If we all get a really good universal basic income or something, I don’t think most people would mind that they don’t have to go warm a seat in an office anymore. But since we don’t and we likely won’t in the future, losing a job is a real problem, which makes Altman’s comment extremely insensitive.
The same is true for most of the jobs I worked in, and about most jobs in large corporations.
I don’t think that’s necessarily true.
My job started as a relatively BS job. Basically, the company I work for makes physical things, and the people who use those physical things need to create reports to keep the regulators happy. So my first couple years on the job was improving the report generation app, which was kind of useful since it saved people an hour or two a week in producing reports. But the main reason we had this app in the first place was because our competitors had one, and the company needed a digital product to point to in order to sell customers (who didn’t use the app, someone a few layers down did) on it. Basically, my job existed to check a box.
However, my department went above and beyond and created tools to optimize our customers’ businesses. We went past reporting and built simulations related to reporting, but that brought actual value. They could reduce or increase use of our product based on actual numbers, and that change would increase their profitability (more widgets produced per dollar spent). When the company did a round of cost cutting, they took a look at our department ready to axe us, but instead increased our funding when they saw the potential of our simulations, and now we’re making using the app standard for all of our on-staff consultants and front-and-center for all new customer acquisitions (i.e. not just reporting, but showcasing our app as central to the customer’s business).
All that has happened over the last year or so, so I guess we’ll see if that actually increases customer retention and acquisition. My point is that my job transitioned from something mostly useless (glorified PDF generator) to something that actually provides value to the business and likely reduces costs downstream (that’s about 3 steps away from the retail store, but it could help cut prices a few percent on certain products while improving profits for us and our customers).
If we all get a really good universal basic income or something
I disagree with your assertion that many jobs exist because people need jobs. I think jobs exist because even “BS” jobs create value. If there were a labor surplus today, jobs would be created because the lower cost of labor acquisition would make certain products profitable that otherwise wouldn’t be.
That said, I am 100% a fan of something like UBI, though I personally would make it based on income (i.e. a Negative Income Tax, so only those under $X get the benefit), but that’s mostly to make the dollar amount of that program less scary. For example, there are ~130M households in the US (current pop is 342M, or about 2.6 people per household). The poverty line is $32,150 for a family, and sending that out as UBI would cost ~4.1T, which is almost as much as the current US budget. If we instead brought everyone to the poverty line through something like NIT, that’s only ~168B, or about 4% of the current budget.
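The arithmetic checks out as a back-of-the-envelope calculation. The household count and poverty line below are the figures quoted above; the ~$168B NIT total depends on real income-distribution data, so the top-up numbers in the sketch are purely illustrative:

```python
HOUSEHOLDS = 130e6        # ~130M US households, as quoted above
POVERTY_LINE = 32_150     # quoted poverty-line figure for a family

# Flat UBI: every household receives the full poverty-line amount.
ubi_cost = HOUSEHOLDS * POVERTY_LINE
print(f"UBI: ${ubi_cost / 1e12:.2f}T per year")   # ~$4.18T

# NIT: households are only topped up to the line.
def nit_benefit(income, line=POVERTY_LINE):
    return max(0, line - income)

print(nit_benefit(25_000))   # a household at $25k gets $7,150
print(nit_benefit(40_000))   # above the line gets $0

# The ~$168B total would follow if, say, roughly 16-17M households
# needed an average top-up of ~$10k; that part needs census data.
```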
Regardless of the approach, I think ensuring everyone is between the poverty line (i.e. unemployed people) and a living wage (i.e. minimum wage people) is a good idea for a few reasons:
- allows you to quit your BS job and not be screwed
- puts pressure on employers at low-paying jobs to provide a better work experience and pay
- allows us to distribute other benefits in dollars instead of services - this book opened my eyes to how much poor people want cash, not benefits; it’s easier to move if you have $1k/month in rent allowance than being stuck in your Section 8 (government-assisted) housing
- could eliminate the federal minimum wage - if employers aren’t paying well, people won’t take the job because they’d rather take the gov’t handout, so I’d consider the UBI/NIT to be the minimum wage instead
- encourages entrepreneurs to start businesses - my main reason for not starting a business is worry about not being able to cover my basic needs; UBI/NIT covers that, so I probably would have started a few small businesses if I had it as a fallback
- can replace Social Security (or other gov’t pension plan), since retirees can treat UBI/NIT as their pension, and not be restricted to a specific age to take it (benefits would be lower, but very predictable)
Giving people a backup plan encourages people to take more risks, which should result in more value across the economy.
This guy needs to find Luigi.
That’s a smart comment to make.