cross-posted from: https://lemmy.ml/post/20858435

Will AI soon surpass the human brain? If you ask employees at OpenAI, Google DeepMind and other large tech companies, it is inevitable. However, researchers at Radboud University and other institutes present new proof that those claims are overblown and unlikely ever to come to fruition. Their findings are published in Computational Brain & Behavior today.

  • The actual paper is an interesting read. They present an actual computational proof: even if you had essentially infinite memory, a computer a billion times faster than what we have now, and perfect training data you could sample without bias, and even if you only aimed for an AGI that performs slightly better than chance, training it would still be completely infeasible within the next few millennia. Ergo, it’s definitely not “right around the corner”. We’re light-years off still.

    They prove this by showing that the training problem is NP-hard: if you could train such an AI in a tractable amount of time, you could use that training procedure to solve any problem in NP just as fast, and you’d have proven P = NP. Given the minimum amount of data that needs to be learned to do better than chance, this results in a ridiculously long training time, well beyond the realm of what’s even remotely feasible (the sketch below gives a rough feel for the scale). And that’s before you even have to deal with all the constraints that exist in the real world.
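    To make that scale concrete, here’s a back-of-the-envelope sketch (my own illustrative numbers, not figures from the paper), assuming training reduces to brute-force search over an exponentially large hypothesis space:

    ```python
    # Fermi estimate: brute-force search over 2**n candidate hypotheses
    # on an absurdly generous machine (an exaflop supercomputer sped up
    # a further billion times, one hypothesis checked per operation).
    OPS_PER_SEC = 1e18 * 1e9
    SECONDS_PER_YEAR = 3.15e7

    def years_to_search(n_bits: int) -> float:
        """Years needed to enumerate 2**n_bits candidate hypotheses."""
        return 2.0 ** n_bits / OPS_PER_SEC / SECONDS_PER_YEAR

    for n in (100, 150, 200):
        print(f"n = {n}: ~{years_to_search(n):.1e} years")
    # n = 100: ~4.0e-05 years   (fine)
    # n = 150: ~4.5e+10 years   (longer than the age of the universe)
    # n = 200: ~5.1e+25 years
    ```

    Exponential growth eats any constant-factor speedup almost immediately; that’s the intuition for why “a billion times faster” doesn’t rescue an NP-hard training problem.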

    We’d perhaps need some breakthrough in quantum computing to get closer. That’s not to say AI won’t improve at all; it’ll get a bit better. But there is a computationally proven ceiling here, and breaking through it is exceptionally hard.

    It also raises (imo) the question of whether we can truly consider humans to have general intelligence. Perhaps we’re not as smart as we think we are either.

    • Barry Zuckerkorn@beehaw.org · 28 days ago

      The paper’s scope is to prove that AI cannot feasibly be trained, using training data and learning algorithms, into something that approximates human cognition.

      The limits of that finding are important here: it’s not that creating an AGI is impossible, it’s that however an AGI eventually gets made, it will have to be made some other way, not by training alone.

      Our squishy brains (or perhaps more accurately, our nervous systems contained within a biochemical organism influenced by a microbiome) arose out of evolutionary selection algorithms, so general intelligence is clearly possible.

      So it may still be the case that AGI via computation alone is possible, and that creating such an AGI won’t require solving an NP-hard problem. But this paper closes off one pathway that many believe is viable (assuming the proof is actually correct; I’m definitely not the person to make that evaluation). That doesn’t mean they’ve proven there’s no pathway at all.

      • > Our squishy brains (or perhaps more accurately, our nervous systems contained within a biochemical organism influenced by a microbiome) arose out of evolutionary selection algorithms, so general intelligence is clearly possible.

        That’s assuming that we are a general intelligence. I’m actually unsure if that’s even true.

        > That doesn’t mean they’ve proven there’s no pathway at all.

        True, they’ve only calculated that it’d take perhaps millions of years. Which might be accurate; I’m not sure what kind of compute budget global evolution, running over trillions of organisms for millions of years, adds up to (a wildly speculative attempt at that estimate below). And yes, perhaps some breakthrough happens, but it’s still very unlikely and definitely not “right around the corner” as the AI-bros claim (and that near-future claim is what the paper set out to disprove).
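        For fun, here’s a wildly speculative Fermi estimate (every number below is my own guess, nothing from the paper) of how many “candidate evaluations” evolution has run:

        ```python
        import math

        # All figures are rough guesses, for illustration only.
        organisms_alive_at_once = 1e30   # dominated by prokaryotes
        generations_per_year = 1e3       # fast-reproducing microbes
        years_of_life_on_earth = 4e9

        evaluations = (organisms_alive_at_once * generations_per_year
                       * years_of_life_on_earth)
        print(f"~{evaluations:.0e} lifetimes evaluated")          # ~4e+42
        print(f"roughly 2**{math.log2(evaluations):.0f} trials")  # ~2**142
        ```

        Even on those generous guesses, evolution’s entire run amounts to something like 2^142 trials, the same ballpark where brute-force search already becomes hopeless. So “millions of years” isn’t obviously an exaggeration.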

    • zygo_histo_morpheus@programming.dev · 28 days ago

      A breakthrough in quantum computing wouldn’t necessarily help. QC isn’t faster than classical computing in the general case; it just happens to be faster for a few specific algorithms (e.g. factoring numbers). It’s not impossible that a QC breakthrough might speed up training AI models (although to my knowledge we don’t have any reason to believe it would), and maybe that’s what you’re referring to. But there’s a widespread misconception that quantum computers are essentially non-deterministic Turing machines that “evaluate all possible states at the same time”, which isn’t the case.
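      To put a number on that: the one provable general-purpose speedup people usually reach for, Grover’s algorithm, is only quadratic (about sqrt(N) queries instead of N for unstructured search), which merely halves the exponent of an exponential search space. A rough sketch with made-up sizes:

      ```python
      import math

      OPS_PER_SEC = 1e27            # same absurdly generous machine as above
      SECONDS_PER_YEAR = 3.15e7

      n = 300                       # hypothetical search space of 2**300 items
      classical_ops = 2.0 ** n
      grover_ops = math.sqrt(classical_ops)   # Grover: ~2**(n/2) queries

      for label, ops in (("classical", classical_ops), ("grover", grover_ops)):
          print(f"{label}: ~{ops / OPS_PER_SEC / SECONDS_PER_YEAR:.1e} years")
      # classical: ~6.5e+55 years
      # grover:    ~4.5e+10 years -- still longer than the age of the universe
      ```

      A quadratic speedup is real and useful, but it doesn’t turn an exponential-time problem into a tractable one.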