So the AI kills off all its food sources, and then what?
Grab a copy of the Stack Overflow database and use it locally (a rough sketch of what that could look like is below), or train your own local LLM on the datastore.
And if you can, donate to the Internet Archive – those people do really important work in today’s age of killing off old information and constant enshittification.
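For anyone wondering what “use it locally” could look like in practice, here’s a minimal sketch. It assumes the publicly available Stack Exchange data dump (hosted on the Internet Archive), whose Posts.xml stores posts as `<row>` elements with attributes like `PostTypeId` and `Title`; the file path and search keyword below are placeholders, not anything from this thread.

```python
# Rough sketch: search question titles in a local copy of the Stack Exchange
# data dump. The dump ships as 7z archives of XML files (Posts.xml etc.);
# verify attribute names against the dump you actually download.
import xml.etree.ElementTree as ET

def search_titles(posts_xml_path: str, keyword: str, limit: int = 10):
    """Stream Posts.xml and yield (id, title) for questions matching `keyword`."""
    found = 0
    # iterparse keeps memory bounded; Posts.xml for Stack Overflow is huge.
    for _, elem in ET.iterparse(posts_xml_path, events=("end",)):
        if elem.tag == "row" and elem.get("PostTypeId") == "1":  # 1 = question
            title = elem.get("Title", "")
            if keyword.lower() in title.lower():
                yield elem.get("Id"), title
                found += 1
                if found >= limit:
                    break
        elem.clear()  # free each processed element

if __name__ == "__main__":
    # Hypothetical path to an extracted dump file and an example query.
    for post_id, title in search_titles("Posts.xml", "segmentation fault"):
        print(post_id, title)
```

This only scans question titles; a more serious local setup would load the dump into a proper search index or database first.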
Bad news: since AI can only answer what it knows, if you have a question that is legit but not yet part of Stack Overflow, you get a bad AI response.
In that case you could ask it on the Stack Overflow website. But because everybody now relies only on AI, Stack Overflow is dead. Well, there you go, you just killed the source of truth.
I don’t know if it’s just my age/experience or some kind of innate “horse sense”, but I tend to do alright with detecting shit responses, whether they be human trolls or an LLM that is lying through its virtual teeth. I don’t see that as bad news; I see it as understanding the limitations of the system. Perhaps with a reasonable prompt an LLM can be more honest about when it’s hallucinating?
I don’t know if it’s just my age/experience or some kind of innate “horse sense”, but I tend to do alright with detecting shit responses, whether they be human trolls or an LLM that is lying through its virtual teeth
I’m not sure how you would do that if you are asking about something you don’t have expertise in yet, as it takes the exact same authoritative tone no matter whether the information is real.
Perhaps with a reasonable prompt an LLM can be more honest about when it’s hallucinating?
So far, research suggests this is not possible (unsurprisingly, given the nature of LLMs). Introspective outputs, such as certainty or justifications for decisions, do not map closely to the LLM’s actual internal state.
I’m not sure how you would do that if you are asking about something you don’t have expertise in yet, as it takes the exact same authoritative tone no matter whether the information is real.
I agree – that’s why I’m chalking it up to some kind of healthy sense of skepticism when it comes to trusting authoritative-sounding answers by themselves. e.g. “ok, that sounds plausible, let’s see if we can find supporting information on this answer elsewhere, or maybe ask the same question a different way to see if the new answer(s) seem to line up.”
So far, research suggests this is not possible (unsurprisingly, given the nature of LLMs). Introspective outputs, such as certainty or justifications for decisions, do not map closely to the LLM’s actual internal state.
Interesting – I still see them largely as black boxes, so reading about how people smarter than me describe the processes is fascinating.
let’s see if we can find supporting information on this answer elsewhere, or maybe ask the same question a different way to see if the new answer(s) seem to line up
Yeah, that’s probably the best way to go about it, but still requires some foundational knowledge on your part. For example, in a recent study I worked on we found that programming students struggle hard when the LLM output is wrong and they don’t know enough to understand why. They then tend to trust the LLM anyways and end up prompting variations of the same thing over and over again to no avail. Other studies similarly found that while good students can work faster with AI, many others are actually worse off due to being misled.
I still see them largely as black boxes
The crazy part is that they are, even for the researchers that came up with them. Sure, we can understand how the data flows from input to output, but realistically not a single person in the world could look at all of the weights in an LLM and tell you what it has learned. Basically everything we know about their capabilities on tasks is based on just trying it out and seeing how well it works. Hell, even “prompt engineers” are making a lot of their decisions based on vibes alone.
Maybe, just maybe, most of the big questions have been asked and answered already.
These days when I look something up it’s been answered like 8 years ago, and the answer is still valid. And they aggressively mark questions as dupes, so people aren’t opening too many repeat questions.
The annoying thing about the dupe policy is that sometimes the answer does change, and the accepted answer to the existing question is from 5 years ago.
Yup. Infuriating. I can’t remember how many times I saw a thread of someone asking my version of a question that was then closed as a duplicate, linking to an older one that wasn’t the right version, so the fix was irrelevant or at least no longer best practice.
It also doesn’t help when the answer is “yeah, just disable this feature that is used for security. That fixes the issue”, but that really isn’t the best solution.
Yep, I’ve never needed to ask a question on Stack Overflow as everything I’ve searched for has been answered already… or I’ve looked elsewhere for the answer as I’m not allowed to upvote, downvote or ask questions on it anyway due to lack of karma (or whatever they call it). No wonder it’s in decline if nobody new is allowed to contribute, and every new question is closed as a duplicate.
The barrier to entry is silly.
Maybe StackOverflow is dying because its community is full of incredibly toxic, passive-aggressive and hostile basement dwellers who will berate, downvote and lock the threads of anybody who dares ask a programming question. Genuinely the kind of people you often see moderating subreddits or Discord servers who have never been punched in the face.
ChatGPT hammered the final nail in the site’s coffin because it’s now become a tool where you can ask specific programming questions and likely get an answer that isn’t “use the search bar you fucking dipshit. Question closed as off-topic.”
There are poor personality types everywhere, but I have found stackexchange/stackoverflow to be one of the better sources of user curated help. LLMs are a new and interesting avenue and I’ve had some good success with them too, but Stackoverflow was really, really good.
Yes. Stack Overflow is a place where you can get knowledge from experts for free. The people who complain about the moderation being toxic generally think they are entitled to experts’ time without putting in any effort themselves, and they would drastically degrade the utility of the site if they got their way.
Here’s the thing: Stack Overflow replaced existing non-corporate, less shitty places on the Internet where we experts shared knowledge for free. Stack Overflow quickly got so bad that many experts stopped sharing, but only after it had disrupted the existing sharing communities.
People who remember what came before have a right to be angry that SO embraced and extinguished the free (and advertising free) forums and IRC channels that came before it.
(I admit SO was better in many ways. But it also killed off something more resilient. I hope we can someday rebuild some of what we had, outside of the long corporate line-must-go-up shadow. I don’t know if we will or not.)
Well, it might go both ways. People are not afraid to ask an AI stupid questions, and at the same time, the AI will not judge the user.
Eh, they will complain that “AI is stupid” when the actual issue is people’s inability to even describe their problem. We already see this happen.
Another big problem is that we’ve been collectively trying to shoehorn everybody into programming careers for the better part of two decades. In fact, “just learn to code” is often thrown around by people in response to the prospect of AI automating and taking over everybody’s jobs.
What they don’t understand is that coding is actually very difficult, especially for people who are bad at math, which is a significant portion of the population if you look at statistics, grades, test scores, etc. Expecting a low-paid call center worker who lost their job to AI to suddenly open up Visual Studio and write any code is a fool’s errand.
I bring this up because I think there’s a correlation between people asking low-quality questions and people being pushed into making a career move into tech.
Found the person that asks shitty questions
Found the shitty moderator
Alas, I’m just a person who has only had positive experiences on Stack Overflow and knows the type of entitled dumbasses who think they should be able to ask volunteers to do their homework for them.
Every time I go to SO I have to deal with Cloudflare checks or captchas. I’m genuinely not sure why, but it has kept me from clicking SO links from search engines first. Not even using a VPN. Kind of odd.
Stack Exchange is a business owned by the investment company Prosus, and the Stack Exchange products include private versions of its site (Stack Overflow for Teams).
Private equity milking another product dry.
“Which is bad news for developers”
Nah, we’ve been through lots of iterations of community for developers: IRC, mailing lists, forums, Stack Overflow, etc. Most of my complex questions go through specific Discord communities now. I’m not trying to spend a year editing a single post because some swamp-ass weenie on Stack Overflow has his nose covered in rule dust.
Yes, AI has changed the game a bit, but it is not removing community; it’s mostly just cutting down on question duplication.
My most recent foray into a new technology was working with Vulkan in Rust on a Mac, and Stack Overflow was useless compared to the Vulkan Discord.
Fuck Discord. That’s the only thing I want to add to the other two responses.
And mind you, Matrix wouldn’t be that much better in that regard. Better, but still bad, because it’s a bad format for this.
RIP permanence and discoverability
The downside of Discord is huge. It’s not searchable to start with, and it’s not indexed. Often it’s not even public information.
It’s like storing data on your personal hard drive/ssd. It’s the worst way to share knowledge.
They don’t allow me to create an account because of email restrictions, VPN/IP restrictions…
If they don’t want content, that’s their choice
Even if you’re in and have a history of good questions and responses, it is still ridiculously hard to get a question accepted. Stack Overflow is dying due to its own choices, and it’s driven many people away from it. They caused their own peak in 2014, and it’s amazing it took this long to decline.