
  • Computer - Radar Rat Race on the C-64, bought on cartridge when I got the computer just to have something to start with. It was the last cartridge game I bought too; the rest were either on tape, later on floppy disk, or typed in from a computer magazine. On the latter, I think they were trying to develop a generation of programmers with those listings, intentionally putting bugs in them so they wouldn’t work until you fixed them. Every one.

    Arcade - that’s a harder one to remember, but I do know it (or they) would have been at a roller skating rink. Probably Asteroids, as that would have gotten my attention first, or maybe Turbo or Wizard of Wor.


  • The idea of migration and data preservation has been a topic since day one, because it’s a big reason why so many moved to the Fediverse in the first place. I still haven’t seen a perfect solution, and maybe there isn’t one. Perhaps a lot of redundancy (oh no, reposts!) is the only true way of protecting posts for as long as possible, and even then…

    Ernest started things rolling with something that probably wasn’t ready for the demand, but it was there when the time came. That others forked off from it and kept it going is the bright spot here. I appreciate Lemmy and even have an account from its first days, but I like the kbin/mbin setup better, so that’s where I sit.




  • LLMs alone won’t. Experts in the field seem to have differing opinions on whether they’ll help get us there. What concerns me is that the issues and dangers of AGI also exist with advanced LLMs, and that research into them is being shelved because it gets in the way of profit. Maybe we’ll never get to AGI, but if we do, we’d better hope we get it right the first time. How’s that been going with the more primitive LLMs?

    Do we even know what the “right” AGI would be? We’re treading in dangerous waters.





  • Then I’d opt for the better one, because you don’t notice every brownout, only the ones long enough to affect lights and more sensitive devices. I have one touch light that would go out when everything else was fine. So you most likely have very “dirty” power, at least in the room where you’re seeing this.

    I’ll also add that since putting my UPSs in, I’ll occasionally hear them click. It doesn’t register as anything in the monitoring software, and nothing shows on the lights, but I’m sure it’s the breaker, or whatever they use, stepping in to keep things clean.
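
    If you want a record of the sags the bundled monitor never logs, polling the UPS yourself can help. Here’s a minimal sketch assuming a UPS managed by Network UPS Tools and exposed as “myups” on localhost; the device name and the voltage threshold are placeholders for whatever your setup uses.

    ```python
    #!/usr/bin/env python3
    """Log brief voltage sags the bundled UPS monitor may not flag."""
    import subprocess
    import time
    from datetime import datetime

    UPS = "myups@localhost"  # assumed NUT device name; adjust for your setup
    SAG_THRESHOLD = 108.0    # volts; placeholder just under a nominal 120 V

    def read_input_voltage() -> float:
        # upsc prints the single requested variable on stdout
        out = subprocess.run(
            ["upsc", UPS, "input.voltage"],
            capture_output=True, text=True, check=True,
        )
        return float(out.stdout.strip())

    while True:
        v = read_input_voltage()
        if v < SAG_THRESHOLD:
            print(f"{datetime.now().isoformat()} sag: {v:.1f} V")
        time.sleep(1)
    ```

    Even one-second polling catches more than the default event log does, though the quickest clicks will still slip through between samples.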




  • Good questions.

    What sorts of scenarios involving the emergence of AGI do you think regulating the availability of LLM weights and training data (or of more closely regulating AI training, research, and development within the “closed source” shops like OpenAI) would help us avoid?

    Honestly, we might be too late for avoidance anyway, but it’s specifically research into the alignment problem that I think regulation could help with. Since these companies are still self-regulating, they’re free to do what OpenAI did with its alignment team. It’s akin to someone manufacturing a new chemical and not bothering with any research on side effects, only on what they can gain from it. Oh shit, never mind, that’s standard operating procedure, isn’t it, at least as long as the government isn’t around to stop it.

    And how does that threat compare to impending damage from climate change if we don’t reduce energy consumption + reliance on fossil fuels?

    Another topic I personally think we’re doomed to ignore until things get so bad they affect more than poor people and poor countries. How does it compare? Climate change and the probable directions it takes the planet are much more of a certainty than the unknowns of whether AGI is possible and what effects it could have. Interesting that we’re taking the same approach to both, though, even when one is the more obvious problem. Plus we’re profiting via greenwashing rather than making a concerted effort to mitigate what we still can.


  • No surprise, since there’s not a lot of pressure for any other regulation of the closed source versions either. Self-monitoring by a for-profit company always works out well…

    And for anyone saying “AGI won’t happen, there’s no danger”: what if, on the slightest chance, you’re wrong? Is the maddening rush to get the next product out, without any research into what we’re doing, worth a mistake? Sci-fi is fiction, but there are lessons in it too, and we’re ignoring them all because “that can’t happen” is stronger than “let’s be sure”.

    Besides, even with no AGI, humans alone can do huge damage with “bad” AI tools, and we’re not looking into that either.





  • Understanding the variety of speech coming over a drive-thru speaker can be difficult even for a human with experience in the job. I can’t see the current level of voice recognition matching that, especially if it’s using an LLM to process whatever it managed to detect. If I’m placing a food order, I don’t need an LLM hallucination trying to fill in the blanks for whatever wasn’t converted to tokens correctly or wasn’t in the training data.
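
    To make that failure mode concrete, here’s a toy sketch. The menu words, confidence numbers, and the one-line fill-in rule are all invented stand-ins for a real speech-recognition-plus-LLM pipeline.

    ```python
    # Toy illustration: when the recognizer is unsure about a word, a
    # "fill in the blanks" step substitutes whatever is statistically
    # likely, not what was actually said. All values here are invented.

    # Hypothetical ASR output for "no pickles on the burger":
    # (word, confidence) pairs, with one word heard poorly.
    asr_output = [
        ("no", 0.95), ("pickles", 0.32), ("on", 0.91),
        ("the", 0.97), ("burger", 0.88),
    ]

    CONFIDENCE_FLOOR = 0.5  # below this, the word gets "filled in"

    def fill_blanks(tokens):
        """Stand-in for an LLM: swap low-confidence words for the word
        assumed most common in past orders ("onions" here)."""
        likely_filler = "onions"
        return [w if c >= CONFIDENCE_FLOOR else likely_filler
                for w, c in tokens]

    print(" ".join(fill_blanks(asr_output)))
    # -> "no onions on the burger": fluent, confident, and wrong
    ```

    A real model is far more sophisticated than a one-word substitution, but the shape of the mistake is the same: the output reads perfectly fine, and the customer still gets pickles.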