I did this, except in a cardboard box, plugged into an outlet with no ground, and using it to mine cryptocurrency. Somehow I didn’t burn down my apartment.
I have, but it kind of goes away after enough years. I was enthusiastic about this book series at one point, but that was more than a decade ago; I don’t even really remember what cliffhanger it was left on.
I see these posts every once in a while, and it seems weird that the topic of a book not being written still captures people’s attention after so long and so much repeated discussion.
I bet you could do it with ring signatures
A message signed with a ring signature is endorsed by someone in a particular set of people. One of the security properties of a ring signature is that it should be computationally infeasible to determine which set member’s key was used to produce the signature.
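To make that concrete, here’s a toy AOS-style ring signature sketch in Python. Everything here is hypothetical and deliberately insecure (tiny parameters, a made-up hash-to-challenge), just to show the shape of the scheme: any member’s private key can close the ring, and verification can’t tell which one did. Real use needs a vetted cryptographic library.

```python
import hashlib
import secrets

# Toy parameters (INSECURE, illustration only): p = 2q + 1, both prime,
# and g generates the order-q subgroup.
q = 1019
p = 2 * q + 1        # 2039, also prime
g = pow(2, 2, p)     # a quadratic residue, so it has order q

def H(*parts):
    """Hash arbitrary parts down to a challenge in [0, q)."""
    data = b"|".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def keygen():
    x = secrets.randbelow(q - 1) + 1   # private key
    return x, pow(g, x, p)             # (private, public)

def ring_sign(msg, ring, signer_idx, x):
    """Sign msg so that any public key in `ring` could have produced it."""
    n = len(ring)
    c = [0] * n
    r = [0] * n
    u = secrets.randbelow(q - 1) + 1
    c[(signer_idx + 1) % n] = H(msg, *ring, pow(g, u, p))
    i = (signer_idx + 1) % n
    while i != signer_idx:
        r[i] = secrets.randbelow(q - 1) + 1
        c[(i + 1) % n] = H(msg, *ring, (pow(g, r[i], p) * pow(ring[i], c[i], p)) % p)
        i = (i + 1) % n
    # Close the ring: g^r[s] * y[s]^c[s] == g^u mod p, hiding which key signed.
    r[signer_idx] = (u - x * c[signer_idx]) % q
    return c[0], r

def ring_verify(msg, ring, sig):
    c0, r = sig
    c = c0
    for i in range(len(ring)):
        c = H(msg, *ring, (pow(g, r[i], p) * pow(ring[i], c, p)) % p)
    return c == c0

# Any ring member can produce a signature that verifies the same way.
keys = [keygen() for _ in range(3)]
ring = [y for _, y in keys]
sig = ring_sign("hello", ring, 1, keys[1][0])
assert ring_verify("hello", ring, sig)
```

The verifier walks the whole ring of challenges and only checks that it closes, so every index is treated identically; that symmetry is where the "can’t tell which member signed" property comes from.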
I agree that it’s bad that there’s a false impression of privacy, but I think it would be better to allow this as an extension or something and not include it as a feature in the UI, or at least not on by default. That way people who otherwise wouldn’t bother won’t be tempted to drive themselves crazy looking for imaginary enemies.
Can anyone recommend any cool mods/projects built on top of Minetest?
tbf, the article only assumes he told them no because of how implausible the task seems; the actual details of what, if anything, was discussed and what happened are unknown.
Implying that it was worse and has gotten better, or will get better to the point where data hoarding is unnecessary. I guess it would be nice if things turned out that well.
Privacy means personal agency: freedom from people, whether individuals, companies, or the government, controlling you with direct or implied threats or with more subtle manipulation, which they can do because they have your dox and because information is power.
A lack of privacy adds fuel to the polycrisis because if we can’t act in relative secrecy that basically means we can’t act freely at all, and nothing can challenge whoever runs the panopticon.
The output for a given input cannot be independently calculated, as far as I know, particularly when random seeds are part of the input.
The system gives a probability distribution for the next word based on the prompt, and that distribution will always be the same for a given input. That meets the definition of deterministic. You might choose to add non-deterministic RNG to the input or output, but that would be a choice, not something inherent to how LLMs work. Random ‘seeds’ are normally used as part of deterministically repeatable RNG. I’m not sure what you mean by “independently” calculated: you can calculate the output if you have the model weights, and you likely can’t if you don’t, but that doesn’t affect how deterministic the system is.
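The seed point can be shown with a toy next-token sampler. The logits here are made-up numbers, not any real model’s output; the point is just that a seeded RNG makes the “random” sampling step deterministically repeatable.

```python
import math
import random

def softmax(logits):
    """Convert raw scores into a probability distribution."""
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits a model might produce for four candidate tokens.
logits = [2.0, 1.0, 0.5, -1.0]
probs = softmax(logits)   # same logits in, same distribution out

def sample_token(probs, seed):
    """'Random' sampling, but seeded, so it is deterministically repeatable."""
    rng = random.Random(seed)
    return rng.choices(range(len(probs)), weights=probs, k=1)[0]

# Same input and same seed -> same output, every run.
assert sample_token(probs, seed=42) == sample_token(probs, seed=42)
```

Drop the seed (or feed in a different one each time) and the outputs vary, but that variation comes from the sampling choice bolted on at the end, not from the model’s input-to-distribution mapping.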
The ‘so what’ is that trying to prevent certain outputs based on moral judgements isn’t possible. It wouldn’t really be possible even if you could get in there with code and change things, unless you could write code for morality, and it’s doubly impossible given that you can’t.
The impossibility of defining morality in precise terms, or even coming to an agreement on what correct moral judgment is, obviously doesn’t preclude all potentially useful efforts to apply it. For instance, since there is a general consensus that people being electrocuted is bad, electrical cables are normally made with their conductive parts encased in non-conductive material, a practice that succeeds in reducing how often people get electrocuted. Why would that sort of thing be uniquely impossible for LLMs? Just because they are logic-processing systems that are more grown than engineered? Because they are sort of anthropomorphic but aren’t really people? The reasoning doesn’t follow. What people are complaining about here is that AI companies are not making these efforts a priority, and it’s a valid complaint, because these systems are not going to be equally dangerous no matter how they are made or used.
They are deterministic, though, in a literal sense; it’s their behavior that is undefined. And yes, an LLM is not a person, and it’s not quite accurate to talk about one knowing or understanding things. So what, though? Why would that be any sort of evidence that research efforts into AI safety are futile? This is at least as much an engineering problem as a philosophy problem.
So it is a way for Lemmy instances to let people log in with their Reddit accounts? Neat
So much awful stuff that both sides seem to be able to agree on
Seems broken
“We’re unable to submit your comments to congress because of a problem on our end. We apologize for the inconvenience. Please try again later.”
If you have enough other investments to be comfortable, don’t especially want to change your retirement timeline, etc., and your wife is fine with it, I’d keep it as a potential hedge against a depression that crashes the value of index funds. I would not split it between whichever small crypto projects can sell you a convincing narrative that they have ‘moon potential’: your financial circumstances mean you don’t really need that anyway, and chasing it is specifically what would open you up to the ‘scamminess’ of crypto.
Youtube’s actual website usually doesn’t even work for me anymore. The page takes a really long time to load and often gets stuck and doesn’t load at all.
Everyone moving to open protocols, and companies like Reddit, Twitter, and Meta going bankrupt
AI has honestly made me a much more powerful Linux user
What gets me about this is that, while it would still be bad, they could have mostly avoided the privacy nightmare here with some kind of zero-knowledge proof scheme, but the tracking is obviously part of the point.
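For what it’s worth, the core idea can be sketched with a toy Schnorr proof of knowledge: the prover convinces a verifier it holds a secret x behind a public value y = g^x mod p without revealing x, and nothing in the transcript identifies or tracks the prover beyond that one fact. Everything below is hypothetical and deliberately insecure (tiny parameters); a real deployment would use a vetted zero-knowledge library.

```python
import hashlib
import secrets

# Toy parameters (INSECURE, illustration only): p = 2q + 1, g has order q.
q = 1019
p = 2 * q + 1        # 2039
g = pow(2, 2, p)

# Prover's secret x and the public value y = g^x mod p.
x = secrets.randbelow(q - 1) + 1
y = pow(g, x, p)

# Prover commits to a random nonce...
k = secrets.randbelow(q - 1) + 1
t = pow(g, k, p)
# ...derives a Fiat-Shamir challenge from public data only...
c = int.from_bytes(hashlib.sha256(f"{t}|{y}".encode()).digest(), "big") % q
# ...and responds. The response reveals nothing usable about x.
s = (k + c * x) % q

# Verifier checks g^s == t * y^c (mod p) without ever learning x.
assert pow(g, s, p) == (t * pow(y, c, p)) % p
```

The check works because g^s = g^(k + c·x) = g^k · (g^x)^c = t · y^c mod p; the random k masks x in the response, which is what makes the proof zero-knowledge rather than a plain disclosure.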
I doubt the school administrators who would be buying this thing or the people trying to make money off it have really thought that far ahead or care whether or not it does that, but it would definitely be one of its main effects.