OpenAI was working on advanced model so powerful it alarmed staff
Reports say new model Q* fuelled safety fears, with workers airing their concerns to the board before CEO Sam Altman’s sacking
Nah. Programming is… really hard to automate, and machine learning even more so. The actual programming for it is pretty straightforward, but to make anything useful you need to source training data, clean it, and design a model architecture, and that kind of open-ended work is far too general for an LLM.
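To make that split concrete, here’s a rough sketch of a toy pipeline. The file and column names are made up, but the point is that the modelling code at the bottom is near-boilerplate while everything above it needs domain judgment:

    # Hypothetical example: a tiny tabular ML pipeline.
    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # The hard part an LLM can't do for you: knowing which data to collect,
    # what counts as garbage, and which columns actually matter.
    df = pd.read_csv("orders.csv")                   # hypothetical dataset
    df = df.dropna(subset=["amount", "country"])     # domain-specific cleaning
    df = df[df["amount"] > 0]                        # more domain judgment
    X = pd.get_dummies(df[["amount", "country"]])
    y = df["churned"]

    # The easy part: the actual modelling code is almost boilerplate.
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print("held-out accuracy:", model.score(X_test, y_test))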
Programming is like 10% writing code and 90% managing client expectations, in my limited experience.
Programming is 10% writing code, 80% being up at 3 in the morning wondering whY THE FUCKING CODE WON’T RUN CORRECTLY (it was a typo that you missed despite looking at it over 10 times), and 10% managing expectations
Typos in programming aren’t really a thing, unless you’re using the shittiest tools possible.
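For what it’s worth, both kinds of “typo” exist; a tiny sketch (hypothetical function names) of what tooling flags versus what slips through:

    def subtotal(prices):
        # 1. A misspelled identifier: any linter/IDE, or simply the first run,
        #    flags this immediately, which is the sense in which tooling makes
        #    typos "not a thing".
        #        return sum(pricess)   # undefined-name warning / NameError
        return sum(prices)

    def add_tax(amount, rate):
        # 2. A typo that is still valid code: writing `=+ rate` instead of
        #    `+= rate` would parse fine and just assign `+rate`, so nothing
        #    complains and it becomes the 3 a.m. kind of bug.
        amount += amount * rate
        return amount

    print(add_tax(subtotal([10.0, 2.5]), 0.2))  # 15.0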
But a lot of the crap you have to do only exists because projects are large enough to require multiple separate teams, so you get all the overhead of communication between the teams, etc.
If the task gets simple enough that a single person can manage it, a lot of the coordination overhead will disappear too.
In the end, though, people may find that the entire product they are trying to build with all this automation is no longer relevant anyway.