• bh11235@infosec.pub · 17 points · 9 months ago (edited)

    This is an issue that has plagued the machine learning field since long before this latest generative AI craze. Decision trees you can understand, SVMs and Naive Bayes too, but the moment you get into automatic feature extraction and RBF kernels and the like, it becomes difficult to understand how the verdicts issued by the model relate to the real world. Having said that, I’m pretty sure GPTs are even more inscrutable and have made the problem worse.
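
    As a rough illustration (my own toy sketch with scikit-learn on the iris dataset, not anything from a real deployment): a decision tree hands you back the rules it learned, while an RBF-kernel SVM only hands you support vectors and dual coefficients, which is where the inscrutability starts.

    ```python
    # Toy sketch: an interpretable model vs. an opaque one (scikit-learn, iris data).
    from sklearn.datasets import load_iris
    from sklearn.svm import SVC
    from sklearn.tree import DecisionTreeClassifier, export_text

    data = load_iris()
    X, y = data.data, data.target

    # Decision tree: the learned decision rules can be printed and read directly.
    tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
    print(export_text(tree, feature_names=list(data.feature_names)))

    # RBF-kernel SVM: the model is a weighted sum of kernel evaluations against
    # stored support vectors -- nothing here maps back to human-readable rules.
    svm = SVC(kernel="rbf").fit(X, y)
    print(svm.support_vectors_.shape, svm.dual_coef_.shape)
    ```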

    • Socsa@sh.itjust.works · 1 point · 9 months ago (edited)

      It’s still inscrutable, but it makes more sense if you think of all of these as arbitrary function approximation on higher-dimensional manifolds. The reason we can’t build traditional numerical solvers for these problems is that the underlying analytical models fall apart when you over-parameterize them. Backprop stays very robust at extreme parameter counts and comes with much weaker assumptions than things like series decomposition, so it really just looks like a generic numerical method that can scale to absurd levels.
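
      To make that concrete, here’s a toy sketch (plain NumPy, made-up data, nothing tuned): a deliberately over-parameterized one-hidden-layer net fit to noisy samples of a function with nothing but backprop and gradient descent. There’s no problem-specific analysis anywhere; it’s just a generic iterative solver.

      ```python
      # Toy sketch: backprop as a generic function approximator (plain NumPy).
      import numpy as np

      rng = np.random.default_rng(0)
      x = np.linspace(-3, 3, 200).reshape(-1, 1)
      y = np.sin(2 * x) + 0.1 * rng.standard_normal(x.shape)   # noisy target

      H = 512                                    # deliberately over-parameterized
      W1 = rng.standard_normal((1, H)) * 0.5
      b1 = np.zeros(H)
      W2 = rng.standard_normal((H, 1)) / np.sqrt(H)
      b2 = np.zeros(1)

      lr = 1e-2
      for step in range(5000):
          # Forward pass.
          h = np.tanh(x @ W1 + b1)
          pred = h @ W2 + b2
          err = pred - y

          # Backward pass: gradients of the mean squared error.
          dpred = 2 * err / len(x)
          dW2 = h.T @ dpred
          db2 = dpred.sum(axis=0)
          dz = (dpred @ W2.T) * (1 - h ** 2)     # tanh derivative
          dW1 = x.T @ dz
          db1 = dz.sum(axis=0)

          # Plain gradient descent update -- no model-specific solver required.
          W1 -= lr * dW1; b1 -= lr * db1
          W2 -= lr * dW2; b2 -= lr * db2

      print("final MSE:", float((err ** 2).mean()))
      ```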