Seeing as it’s their river and they are operating it, not sure what exactly you want as evidence.
I’m not so sure we’re missing that much personally, I think it’s more just sheer scale, as well as the complexity of the input and output connections (I guess unlike machine learning networks, living things tend to have a much more ‘fuzzy’ sense of inputs and outputs). And of course sheer computational speed; our virtual networks are basically at a standstill compared to the parallelism of a real brain.
Just my thoughts though!
While it’s true that the x nm nomenclature doesn’t match physical feature size anymore, it’s definitely not just marketing bs. The process nodes are very significant and very (very) challenging technological steps, and usually yield power-efficiency gains in the tens of percent.
To me, what is surprising is that people refuse to see the similarity between how our brains work and how neural networks work. I mean, it’s in the name. We are fundamentally the same, just on different scales. I believe we work exactly like that, but with way more inputs and outputs and a much deeper network; the fundamental principles, I think, are the same.
They do also use an antireflective coating/paint on the satellites now, which has helped quite a lot.
The auto update was awful for usability. Hate my page randomly jumping around when I’m trying to read.
I think the misunderstanding here is in thinking ChatGPT has “languages”. It doesn’t choose a language; it is always drawing from everything it knows. Hence the ‘configuration’ is the same for all languages — it’s basically just an invisible prompt telling it, in plain text, how to communicate.
When you change/add your personalized “Custom Instructions”, this is basically the same thing.
I would assume that this invisible context is in English, no matter what. It should make no difference.
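To make the point concrete, here’s a minimal sketch of the idea — not OpenAI’s actual internals, and the prompt text here is entirely made up. The “invisible context” is just a system message prepended to the conversation, the same way Custom Instructions are, and it stays identical no matter what language the user writes in:

```python
# Hypothetical system prompt -- the real one is not public.
SYSTEM_PROMPT = "You are a helpful assistant. Be concise."  # always English

def build_messages(user_text: str, custom_instructions: str = "") -> list[dict]:
    """Prepend the fixed system prompt (plus any Custom Instructions)
    to the user's message, regardless of the user's language."""
    system = SYSTEM_PROMPT
    if custom_instructions:
        system += "\n\n" + custom_instructions
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_text},
    ]

# The system message is identical whether the user writes English or German:
en = build_messages("What is a process node?")
de = build_messages("Was ist ein Prozessknoten?")
assert en[0] == de[0]
```

So “changing the language” never swaps out any configuration — only the user-side text differs.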