It’s not AI that’s the problem. AI is an amazingly powerful tool (I’m an AI researcher).
The problem is that it’s in the hands of psychotic technofascist greedy subhumans that want to destroy basically all of society so their stock can go up 0.001%. If we can cut out the source of the cancer, the body can begin to heal itself.
I want to agree with you, but AI is just another psychopath in a world where we don’t need any more psychopaths.
- is an AI researcher
- immediately uses Nazi lingo after introducing themselves
you can’t be more obvious than this about the ideology of AI💀
lol get offline a bit, not everything is Nazi everything. You’re saying “subhuman” is Nazi-coded?
I believe Peter Thiel, Musk, Andreessen, Horowitz, Yarvin, and about 100 others who are actively trying to erode our society to the point of collapse, so they can rise as god-kings from the ashes, need to not exist in our free society. They have broken the inherent social contract and have therefore lost the privilege.
I was excited about the idea of purpose-built systems trained on specific datasets to help find complex patterns to diagnose diseases or suggest potential molecules for specific purposes.
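To show the shape of that kind of purpose-built system, here’s a toy sketch using scikit-learn’s bundled breast-cancer dataset. Nothing here is a real clinical pipeline; it just illustrates “one model, one narrow, measurable task”:

```python
# Toy sketch of task-specific ML: one model, one narrow diagnostic task,
# with a directly measurable notion of "how well does it work".
# The bundled dataset is a stand-in, not a real clinical pipeline.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)     # one fixed diagnostic task
clf = RandomForestClassifier(n_estimators=200, random_state=0)

# The model's entire scope is this task, so its accuracy on it is the
# whole story - no open-ended claims about "intelligence" required.
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean cross-validated accuracy: {scores.mean():.3f}")
```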
Then the LLM shit started and everyone started fantasizing about intelligent “AI” just because it was able to reproduce patterns of language that seem relevant to a given input. Some of those funding it kept chasing that dream and are convinced that, if they just throw more compute at the problem, they can evolve the renaissance AGI that can do anything. Then they can fire every worker and be bazillionaires with robot slaves and never have to work another day of their lives… and fuck everyone and everything else.
It’s amazing what we can ruin when we let greed and selfishness drive our society.
The LLM craze is a natural maturation point of the AI field though, and now it’s expanded into foundation models (FMs), which you would still probably just call LLMs because most people don’t know the difference. FMs are getting close to that point of a magical universal computer that you can tell to do anything about anything and it just works. There are specific FM applications, like FMs for earth science or remote sensing (which I work in), but the big money coming from this technofascist elite is pushing for FMs for everything, along with Agentic AI, whose end state is replacing pesky human workers entirely. They seek the ultimate triumph of Capital over Labor.
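To make the domain-specific FM idea concrete: in practice it often looks like freezing a big pretrained backbone and training only a small head for one narrow task. A minimal PyTorch sketch, where the data, shapes, and class count are all made up for illustration (this is not any real earth-science or remote-sensing pipeline):

```python
# Minimal sketch of adapting a generic pretrained backbone to one narrow
# task (e.g. land-cover classification from satellite tiles). All data
# and the class count here are placeholders for illustration.
import torch
import torch.nn as nn
from torchvision.models import resnet50, ResNet50_Weights

backbone = resnet50(weights=ResNet50_Weights.DEFAULT)  # general pretrained features
backbone.fc = nn.Identity()              # drop the original classifier head
for p in backbone.parameters():
    p.requires_grad = False              # keep the big general model frozen

num_classes = 10                         # hypothetical land-cover classes
head = nn.Linear(2048, num_classes)      # small task-specific head
model = nn.Sequential(backbone, head)

optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One training step on a fake batch standing in for real satellite tiles.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, num_classes, (8,))
optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
```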
There are competing incentives driving the industry, but by far the strongest one comes from whoever has the most money, and those who have the most money are the worst possible people, who should have no say in how anything works. Scary times we’re in.
The LLM craze is a natural maturation point of the AI field
I don’t see why that is. Using ML to generate models that accurately perform specific tasks is orders of magnitude away from attempting to feed the entirety of human text into ML and expecting superhuman intelligence to emerge.
now it’s expanded into foundation models (FMs), which you would still probably just call LLMs because most people don’t know the difference.
While ML and “AI” are not my field, I’m fairly certain that what I was attempting to describe in layman’s terms in my literal first sentence were these foundation models you are referring to.
FMs are getting close to that point of a magical universal computer that you can tell to do anything about anything and it just works.
I have no direct experience outside of LLMs, and I don’t really take issue with what I understand FMs to be, so long as they keep their scope narrow and focus on accurately completing specific tasks to assist humans. As soon as we hand off control and trust them blindly, without extensive trials ensuring their reliability and failsafes in place to ensure inaccuracies are caught, I start raising concerns.
My only experience is with LLMs - a few, minor attempts to “test the waters” of the major, publicly available LLM models. I’ve been frustrated with my search results and glanced at the AI results. Work gave us Gemini licenses and I used it in similarly desperate situations for coding help and help with Google products, foolishly thinking that if any LLM would be passably useful at such tasks, it would be the LLM of the company that owns the products I seek help with. Unless something has changed drastically in the last month or so, every interaction has been a roll of the dice, to such an extent that my occasional “testing the waters” caused me to jump out and avoid it as much as possible. I simply can’t trust it not to hallucinate and gaslight me.
What I see as the problem is moving way, way, way too quickly in trusting language models to do anything even remotely important. Human communication is extremely nuanced, complicated, fluid, and imperfect. Humans misunderstand each other during communication even when we have the context of in-person visual/audible cues and interpersonal history.
What the pro-AI people always tend to argue back at comments like yours is that:
you used the wrong AI - it should be <insert preferred model here> - probably Claude at this point in time, for programming? i.e. the implication being that you are some old man who yells at clouds and does not know what they themselves only learned <6 months ago, as if that knowledge entirely invalidates your own lived experience even in the last ~4 weeks.
you used the wrong parameters / queries. Applied to the equivalent of Google searches, this seems a false claim to me, because those used to be fairly brainless, whereas sometime soon Gemini is going to start charging $$$ in return for being able to find anything remotely helpful on the internet. But for now they would like it pretty please if you would help them train their model, before they turn around and sell it to you and others (isn’t it glorious how you are allowed to share in the work part, without proportionate access to the reward at the end?).
Tbf you probably did use the wrong queries for the programming questions. It seems to me like someone who actually lets a “self-driving car” drive by… itself? You are supposed to pay money for something marketed one way, when the reality after purchase is quite different, and if you e.g. run over little children then it’s not the fault of those who sold you a “self-driving car”, but rather (legally speaking) yours, for allowing the car to drive by itself - how dare you not know better! (Despite being told to do precisely that, with a nod and a wink.)
The AI hype is real, and false. Despite that, LLMs are quite a capable tool, if you ignore the hype and use them under much more constrained circumstances than the hype would lead us to believe (even though the hype surrounding AI, rather than LLM technology itself, was the literal point of the OP).
I stumbled upon this randomly and enjoyed the read: https://www.structural-integrity.eu/is-there-a-need-for-ai-after-capitalism/.
At 1 million I could already stop working and live a decent life :/. I really don’t get why, past 1 billion, they continue to search for more.
They actually have a disorder or disease. However, in this case their disorder is destroying the rest of the world. There’s a fast-approaching point at which the world organism will self-heal to prevent its own death.
Maybe it’s because I’ve only ever had at most a comfortable income but I truly don’t understand the mentality of needing so much money.
I don’t get paid as much as my peers, but I make enough to be comfortable. I am my own department and, aside from emergencies and other high-priority situations, I manage myself and choose what to work on when. I have a decent work/life balance. Because I make enough to be comfortable (in large part because my landlord promised not to raise our rent - early in the COVID lockdown - if we were “good tenants” and has managed to keep true to her word), I don’t feel the need for more. That balance is worth not making the 20% more a year I might get somewhere else, because I can’t guarantee I won’t have a shitty boss who doesn’t let me have that work/life balance.
Right! If you don’t count the mass surveillance boost, the autonomous killing machines they’re trying to make, the environmental impact, the pillaging of our individual experiences, and the destruction of all our shared spaces online, AI is a pretty cool tool.
All of that is because the incentives are coming from those with the most power/money who are the most psychotic cancer cells in the history of the world. You’re only aware of such a tiny sliver of it because that’s the most problematic and gets the most news. Those are all huge problems that need to be solved, but the cause isn’t AI. AI is just an accelerant for a sick hypercapitalist society that is doomed to collapse. AI itself has been used for millions of great things that improve all of life on earth, but in the hands of these psychopaths it’s just being used for the ultimate triumph of Capital over Labor, at the expense of literally everything else on earth.
All those things being true is enough for me to hate AI.
Edit: As my dad says, One aw shit wipes away a million attaboys.
Do you hate the concept of iron alloy? Because it was used for hundreds of years in swords and weapons to kill millions of people. See how silly that sounds?
Iron alloy doesn’t convince people they shouldn’t have their noose visible in case someone might see it and intervene. You’re not going to change my mind. Once the bubble is popped and all our lives get worse and 3 people control all the technology it’s not going to matter that it saves people time, or it creates efficiency.
You’re not um… you’re not even reading, but ok. Keep living in your echo chamber I guess.
Just because you don’t like my points doesn’t mean I’m arguing in bad faith, and I find it a little insulting that instead of responding to my point you’re dodging it by insinuating I am.
No, I’m saying you’re not even trying to understand; you’re just saying you don’t like it no matter what. To that I said, ok, keep living in your echo chamber. I’m not saying that’s bad faith, it’s just not trying to reach truth.
Narrator: actually, no it was not.
e.g. it still spreads misinformation.
The lack of regulation of AI is absolutely a serious problem; there are so many problems your comment isn’t even funny.
Problems with people using it for health advice.
Problems with teens using it instead of friends.
Problems with AI giving absurdly incorrect advice to people in general, but also to professionals like managers and CEOs.
Problems with the data centers that host these AI systems requiring enormous amounts of power and cooling water, so much that researchers have shown these data centers are drying up vast areas around them.
The techno-fascists are in all sorts of businesses; that’s not unique to AI. The problem is that with AI, the techno-fascists aren’t regulated in any way.
Neither in how their data centers impact the environment and the electric grid, nor in how AI has actual bad effects on their customers, because there is no regulation on the use or supply of AI services.
100% agree with every point you made. Everything you’re saying is specific to this iteration of LLMs though. That’s just one tiny piece (well, large in terms of public perception and capital acquisition, but small in terms of the research space).
The problem is that it’s in the hands of psychotic technofascist greedy subhumans
gee maybe people like you shouldn’t have put those tools into the shitbags’ hands?
I remember a decade ago multiple movements to rein in AI before it became uncontrollable, and any chance of that is long fuckin gone. we’re gonna barrel forward heedless of the danger, because fuck you, that guy wants profits and doesn’t care about humanity.
and people like you made the tools and gave it to 'em.
That seems terribly extreme. It’s not like it’s a bomb that is obviously for blowing people up. Someone made something with some cool applications, then some guys with many times more money and resources than anyone should be allowed to have took the idea and ran with it toward a bunch of psychotic ends.
The problem isn’t that people can use good things for bad purposes, nor is it the people that make or improve those things. The root cause is that western society is currently structured in a way that ends up rewarding certain types of madness, and the reward structure is set up such that individuals can get a vast undue amount of influence and power. Under these conditions, it is natural that even a tiny number of such individuals can overtake the system like a single cancer cell can eventually kill someone. All of these alarming things going on for over 60 years are symptoms of that societal illness. Please don’t blame scientists for sciencing.
bingo
I fucking work on climate models, you jabroni. You have no idea about the industry, or really anything other than what your most echo-chambered influencers tell you to think.
Doubtful. And you thought that AI would stay in modeling? You made them something dangerous, and you thought it wouldn’t be weaponized?
you fucking moron. you either made yourself their bitch, or were used as their bitch unknowingly. science is ashamed of idiots like you who enable the worst.
lol thanks for the chuckle. Go outside for a bit.
enjoy your vibe coding clanker
It’s amazing how open source has benefitted the individual. The monopolization of compute is still a barrier we’ll have to crash through
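A concrete version of that benefit: anyone can run an open-weight model locally, with no API key and no cloud dependency. A sketch using Hugging Face transformers; the model name is just one small, openly licensed example, swap in whatever fits your hardware:

```python
# Sketch of running an open-weight model locally: no API key, no cloud.
# The model name is one example of a small openly licensed model; any
# similar open-weight checkpoint works if your hardware can hold it.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Qwen/Qwen2.5-0.5B-Instruct",  # ~0.5B params, runs on a laptop CPU
)
result = generator("Open-weight models matter because", max_new_tokens=40)
print(result[0]["generated_text"])
```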