• 0 Posts
  • 111 Comments
Joined 1 year ago
Cake day: June 11th, 2023

  • There’s plenty of science fiction without technology playing a significant role.

    Robert Silverberg’s Dying Inside was the first that came to mind; Asimov’s The Gods Themselves or Nightfall might be other examples; Olaf Stapledon’s Sirius; Clarke’s Childhood’s End has (alien) tech, but it mostly focuses on the psychological and societal effects of contact with aliens, as does Ted Chiang’s Story of Your Life (and some of the other stories collected in the same volume, Stories of Your Life and Others); Philip K. Dick’s The Man in the High Castle; Kurt Vonnegut’s Slaughterhouse-Five… lots of great science fiction works focus on aspects other than technology.

  • Having to deal with pull requests defecated by “developers” who blindly copy code from ChatGPT is a particularly annoying and depressing waste of time.

    At least back when they blindly copied code from Stack Overflow they had to read through the answers and comments and try to figure out which one fit their use case best, and why, and maybe learn something… now they just assume the LLM is right (despite having asked the wrong question, and even if they’d asked the right one it would likely have given a wrong answer) and call it a day; no brain activity or learning whatsoever.

  • Are search engines worse than they used to be?

    Definitely.

    Am I still successfully using them several times a day to learn how to do what I want to do (and to help colleagues who use LLMs instead of search engines learn how to do what they want to do once they get frustrated enough to start swearing loudly enough for me to hear them)?

    Also yes. And it’s not taking significantly longer than it did when they were less enshittified.

    Are LLMs a viable alternative to search engines, even as enshittified as they are today?

    Fuck, no. They’re slower, they’re harder and more cumbersome to use, their results are useless on a good day and harmful on most, and they give you no context or sources to learn from, so best case scenario you get a suboptimal partial buggy solution to your problem which you can’t learn anything useful from (even worse, if you learn it as the correct solution you’ll never learn why it’s suboptimal or, more probably, downright harmful).

    If search engines ever get enshittified to the point of being truly useless, the alternative isn’t LLMs. The alternative is to grab a fucking book (after making sure it wasn’t defecated by an LLM), like we did before search engines were a thing.

  • I’ve been finding it a lot harder recently to find what I’m looking for when it comes to coding knowledge on search engines

    Yeah, the enshittification has been getting worse and worse, probably because the same companies making the search engines are the ones trying to sell you the LLMs, and the only way to sell them is to make the alternatives worse.

    That said, I still manage to find anything I need much faster and with less effort than dealing with an LLM would take; and where an LLM would simply give me a single answer (which I’d then have to test and fix), a search engine gives me multiple commented answers which I can compare and learn from.

    I remembered another example: I was reviewing a pull request and it wouldn’t compile; the programmer had apparently used an obscure internal function to check whether a string was empty instead of string.IsNullOrWhiteSpace(). (In C#, internal means “I designed my classes wrong and don’t have time to redesign them from scratch; this member should be private or protected, but I need to access it from outside the class hierarchy, so I’ll let other classes in the same assembly access it, but not ones outside the assembly”; it covers a similar use case to friend in C++, and it’s used a lot in the standard .NET libraries.)
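    Just to illustrate, here’s a minimal single-file sketch of what internal buys you (all the names here are made up for illustration, not the actual function from that PR):

    ```csharp
    // Hypothetical helper class, for illustration only.
    public static class StringHelpers
    {
        // public: callable from any assembly that references this one.
        public static bool IsBlank(string s) => string.IsNullOrWhiteSpace(s);

        // internal: callable only from code compiled into the same assembly.
        // From a different assembly this call won't compile (error CS0122,
        // "inaccessible due to its protection level"), unless the library
        // explicitly opts in with [InternalsVisibleTo].
        internal static bool IsBlankInternal(string s) => string.IsNullOrWhiteSpace(s);
    }
    ```

    Within the same assembly both methods behave identically; from outside, the internal one might as well not exist, which is exactly why a human coder reading documentation or examples would never have stumbled onto it.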

    Now, that particular internal function is documented practically nowhere, and being internal it can’t be used outside its own library, so it wouldn’t pop up in any example the coder might have seen… but .NET is open source, and the library’s source code is on GitHub, so ChatGPT/Copilot will have been trained on it; that has to be where the coder got it from.

    The thing, though, is that LLMs, being essentially statistical engines that just emit the most likely token after a given sequence of tokens, have no way whatsoever to “know” that a function is internal. Or private, or protected, for that matter.

    That function is used in the code they’ve been trained on to check whether a string is empty, so they’re just as likely to output it as string.IsNullOrWhiteSpace() or string.IsNullOrEmpty().
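    And those two aren’t even interchangeable, so a statistically plausible swap silently changes behaviour; a quick sketch of where they disagree (plain top-level statements):

    ```csharp
    // A whitespace-only string is exactly where the two methods disagree:
    string s = "   ";
    bool consideredEmpty = string.IsNullOrEmpty(s);      // false: three spaces is not empty
    bool consideredBlank = string.IsNullOrWhiteSpace(s); // true: nothing but whitespace
    ```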

    Hell, if(condition) and if(!condition) are probably also equally likely in most places… and I for one don’t want to have to debug code generated by something that can’t tell those apart.