As Mozilla envisions Firefox’s future, we are focused on building a browser that empowers you to choose your own path and gives you the freedom to explore.
I wish they spent their time fixing bugs rather than implementing this bullshit.
Why not both? A large project like this needs to fix bugs and also continue to refine its features for long-term relevance.
You will never achieve long-term relevance by chasing immediately available buzzwords.
How long does AI need to be used, and how much demand needs to be sustained, for it to stop being called a “buzzword”? I’m a little dubious that NVIDIA became literally the most highly-valued company on Earth off the back of a mere “buzzword.”
It doesn’t seem like end users are the ones demanding AI.
I am an end user and I find it quite handy for a number of applications.
The reasoning “I don’t find it useful and therefore nobody finds it useful” is common in these sorts of threads.
If the sentiment is that common, maybe there’s something to it.
You made an assertion about what end users want. I’m an end user and my desires are not the same as your desires.
Or maybe it’s just a common fallacy. Like argumentum ad populum.
I made a generalization based on the abundance of comments from people saying they don’t want AI. Your desires may not be the desires of the majority of users.
It’s not. Saying a bunch of people don’t want something because a bunch of people are saying they don’t want it isn’t argumentum ad populum. I never made an assessment about whether AI was good or bad.
If you want to argue that Lemmy doesn’t represent users at large, or that the people complaining about AI are a loud minority, go for it. But the vast majority of comments on anything AI related seem opposed to it.
Can you remind us what the current state of NFTs is? Or most crypto? Web3 tech? This is next.
Of course Nvidia is the highest-valued company. They capitalized, for their own gain, on idiots misusing the technology until it created issues in society.
Can you remind me how those technologies are related, other than the mere accusation of them being “buzzwords”?
Cryptocurrency is actually doing fine, BTW. Just because you don’t find it useful doesn’t mean it’s not useful to other people.
Why are you explicitly picking those examples, and not things like IoT, DevOps, and Edge computing, all buzzwords, all successful and still in general use today?
You’re cherry-picking failed buzzwords and using them as proof that “AI” will fail.
To be clear, I agree that LLMs are bullshit for 95% of the applications they are being put into. But at least argue in good faith.
I chose those examples because that’s what’s been heavily marketed recently, and it all either fundamentally failed, ended up being a scam, or both.
In contrast:
DevOps is software automation practices…?
Edge computing is on-call load balancing? It’s horrendously expensive though, so I’ll give them time to figure it out.
IoT, admittedly, is largely oversold, but even then, there were a ton of products on the market that absolutely outlived all 3 of the examples I’ve given, combined. Home Assistant + Zigbee home automation is awesome (see the sketch below). A Raspberry Pi is “IoT”. Your smartwatch is “IoT”.
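For anyone curious what that Home Assistant setup looks like in practice, here’s a minimal sketch of toggling a Zigbee bulb through Home Assistant’s REST API from Python. The host, token, and entity ID are placeholder assumptions for illustration, not values from this thread:

```python
# A minimal sketch of calling Home Assistant's REST API to turn on a
# Zigbee light. Host, token, and entity_id are assumed placeholders;
# substitute your own instance's values.
import requests

HA_URL = "http://homeassistant.local:8123"   # assumed default host
TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"       # created under your HA user profile

response = requests.post(
    f"{HA_URL}/api/services/light/turn_on",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"entity_id": "light.living_room"},  # assumed Zigbee bulb entity
    timeout=10,
)
response.raise_for_status()
print(response.json())  # the entities whose state changed from this service call
```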
There’s a difference between cherry-picking and refusing to accept that something is a scam. Crypto ended up begging for government regulation when the original intention was to move away from it. NFTs are a pump-and-dump Ponzi scheme. Web3 literally doesn’t mean anything.
LLMs aren’t a scam; I don’t even understand how you could twist them into one. While something like NFTs has no real legitimate use case, LLMs excel at translation and as an advanced form of spelling and grammar checking.
Your complaint seems to boil down to “it doesn’t work in all the use cases it’s being put to,” which is fair enough, but if I put a car on my bed and try to use it as a blanket… does that make the car a scam?
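To make the translation point concrete, here’s a minimal sketch using the Hugging Face transformers pipeline. The t5-small checkpoint and the example sentence are illustrative assumptions, not a claim about what any browser actually ships:

```python
# A minimal sketch of LLM-based translation, assuming the Hugging Face
# "transformers" library is installed. t5-small is an illustrative
# choice; any seq2seq translation checkpoint would work the same way.
from transformers import pipeline

# t5-small was trained with an English-to-French translation task.
translator = pipeline("translation_en_to_fr", model="t5-small")

result = translator("Firefox is testing an AI-powered sidebar.")
print(result[0]["translation_text"])  # prints the French rendering
```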
We literally agree with each other, and yet you’re still arguing. The reason it’s a scam is that people sell it like some kind of godsend when it’s literally not used in the way it’s intended to be used. When it is, that’s great. When it’s trained properly, that’s even better. But that’s not the reality.
AI may have its uses, but the easy counterpoint to your argument is to look at FTX at its peak and where it is now (bankrupt). The stock market is the exact opposite of rational, and it is terrible at estimating the use one can get out of tech.
FTX was a cryptocurrency exchange, how is that remotely similar to NVIDIA?
I strongly believe that generative AI is catastrophically misused in the vast majority of its applications, so in my eyes, adding GPT-based AI to the browser is largely a wasted effort.
I highly doubt they have one team that switches between experiments and bug fixes, never doing two things at once. Not to mention that something ultimately being ripped out isn’t necessarily wasted effort. They could likely easily pivot virtually anything they put into this specific experiment into any number of other uses.