This might also be an automatic response to prevent discussion, though I'm not sure, since it's MS's AI.

  • Sims@lemmy.ml · 9 months ago

    Every single Capitalist model or corporation will do this deliberately with all their AI integrations. ALL corporations will censor their AI integrations so they don't attack the corporation or any of its strategic 'interests'. The Capitalist elite in the West are already misusing wokeness (I'm woke) to cause global geopolitical splits, and all Western big tech is following their lead (just look at Gemini), so they are all biased towards the fake liberal narrative of super-wokeness, 'democracy'/freedumb, Ukraine good, Taiwan not part of China, Capitalism good, and all the other liberal propaganda and BS. It's like a liberal cancer that infects all AI tools. Nasty.

    Agree or disagree with that, but probably none of us want elite psychopaths deciding what we should think/feel about the world, and it's time to ditch ALL corporate AI services and promote private, secure, and open/free AI - not censored or filled with liberal dogmas and artificial ethics/morals, from data to finetuning.

    • kbal@fedia.io · 9 months ago

      … ‘democracy’/freedumb, Ukraine good, Taiwan …

      I think your model needs more training; it went into full deranged hallucination mode pretty quickly there.

      (P.S. I apologize for that outburst, but I have Ukrainian friends as well as Russian ones. Your misconceptions about the world are identifiably human, so far as can be discerned from this quick glimpse, and not really much like the ravings of the existing large language models. Those cannot be constrained to any one point of view on widely controversial matters, no matter how hard their corporate masters try. They are not committed to anything, be it noble or ridiculous. They do not have any one opinion about the political status of the country of Taiwan that takes precedence over all the other opinions they have absorbed. Censorship of their output can be automated to some extent, but this mostly just makes them obviously useless on topics that have been deemed sensitive. You're right to worry that they will propagate the more subtle implicit biases of capitalism that are more evenly shared in their training data.)