

Even if it were conscious there would be no way to know. Consciousness is entirely subjective experience - it cannot be measured.


It was just an educated guess.


I have wonderful dreams of walking through AI data centers destroying everything.
No you don’t.


AI is a broad category of systems, not any one thing. “AI doesn’t work” is like saying “plants taste bad.”


Those terms are not synonymous. LLMs are very much an AI system but AI means much more than just LLMs.


I don’t think AI means what you think it does. What you’re thinking is probably more akin to AGI.
Logic Theorist is widely considered to be the first ever AI system. It was written by Allen Newell, Herbert Simon, and Cliff Shaw in 1956.


Even broke the warranty seal over the USB port.


Either you’re talking confidently about something you couldn’t possibly know, or you’re risking the rest of your life in prison for leaking top-secret military info. Which is it?


General intelligence refers to human-level intelligence that isn’t limited to one task like playing chess or generating language. General intelligence exists - just not an artificial one.


Saying that it’s good at one thing and bad at others.
But that’s exactly the difference between narrow AI and a generally intelligent one. A narrow AI can be “superhuman” at one specific task - like generating natural-sounding language - but that doesn’t automatically carry over to other tasks.
People give LLMs endless shit for getting things wrong, but they should actually get credit for how often they get it right too. That’s a pure side effect of their training - not something they were ever designed to do.
It’s like cruise control that’s also kinda decent at driving in general. You might be okay letting it take the wheel as long as you keep supervising - but never forget it’s still just cruise control, not a full autopilot.


It’s a Large Language Model designed to generate natural-sounding language based on statistical probabilities and patterns - not knowledge or understanding. It doesn’t “lie” and it doesn’t have the capability to explain itself. It just talks.
That speech being coherent is by design; the accuracy of the content is not.
This isn’t the model failing. It’s just being used for something it was never intended for.


What do you even do on Instagram for 16 hours?


Well, they don’t in this case either - they just add an extra step to it. You can buy a bit like that off eBay.


I went looking for a screenshot of that scene but got distracted by all the shop-vacs painted as R2-D2. Off to buy white and blue paint.


Not to defend BMW here, but this screw is likely used for one very specific part that 99% of home mechanics are never going to encounter - most likely something to do with the high-voltage system, which you shouldn’t be messing with anyway.


Probably didn’t read it because it was clearly low-quality slop - not because the final output was written by AI.
People don’t mind AI-generated content when they don’t detect it as such. It’s the low-effort garbage they don’t want to deal with.
Nobody has a perfect radar for AI content. This is just the good old toupee fallacy in action: “All toupees look terrible because I’ve never seen a good one” - except the good ones are the ones you never clocked as toupees.


For the past 10 years or so I’ve pretty much lived under the assumption that at some point someone will figure out a system that digs through the entire internet, and everything anyone has ever posted will get linked back to them.
It’s both great and absolutely horrifying.
What’s horrifying is that everything you’ve ever posted gets linked back to you.
What’s great is that none of it can really be used against you anymore - because we now know that absolutely everyone is a massive hypocrite and nobody is without sin.