It’s a VERY specific tool that needs:

- a world-scale amount of data, which has repeatedly been collected WITHOUT permission from the authors of that data;
- a huge amount of processing, done in enormous datacenters that consume radically MORE energy than traditional ones without GPUs;
- energy and cooling for those very specific new datacenters, which then become unavailable to the local community, with the energy production often rushed and typically more polluting.
So I think it is fundamental to distinguish
“AI” as a theoretical research field: public research focusing on processing CERN data, weather forecasting, genomics, medicine, etc., a tool that might indeed produce results that help us all
versus
commercialized, for-profit “AI”: GenAI and LLMs as black boxes, mostly used for spam, scams, low-quality code, etc.
When one conflates the two, knowingly or not, they do the marketing for the latter.
My biggest problem with AI is that it is currently a very shitty tool that outputs nonsense 9 times out of 10, while big tech pretends it is totally awesome, which, like you say, makes having it forced on you even more frustrating.
Is it here to stay? Yes, I believe so. But it needs a lot of work in a lot of areas to be truly useful.
I feel like you must be prompting it poorly or using ChatGPT / Copilot?
I’d say in my day-to-day, AI tooling successfully tackles 90% of my software engineering jobs, and with proper context and prompting the output is pretty stellar.
Assuming you’re maintaining a big codebase and not just producing boilerplate, do you find LLMs to be more help than language servers/IDE code snippets?
There’s nothing wrong with AI. It’s a tool, and it’s nice to have access to more tools.
The only problem with AI is how it’s being forced on everyone and how it’s taking away consumer access to technology.
That’s one of the problems.
The other problem is that the billionaires want to use AI to make censorship and kill decisions (see Palantir) to lock in their oligarchy.