• kromem@lemmy.world

      You can see from the green icon that it’s GPT-3.5.

      GPT-3.5 really is best described as simply “convincing autocomplete.”

      It wasn’t until GPT-4 that there were compelling reasoning capabilities, including rudimentary spatial awareness (I suspect in part from its being a multimodal model).

      In fact, it was the jump from 3.5’s nonsense answer to a “stack these items” prompt to a very well-structured answer in 4 that blew a lot of minds at Microsoft.

  • Nate@programming.dev

    These answers don’t use OpenAI technology. The yes and no snippets have existed since long before their partnership, and have always sucked. If it’s GPT, it’ll show in a smaller chat window or in a summary box that says it contains generated content. The box shown is just a section of a webpage, usually with the yes or no taken out of context.

    None of the above queries yield the same results anymore. I couldn’t find an example of the snippet box on a different search, but I definitely saw one about a week ago.
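
    To illustrate the failure mode, here’s a toy sketch of a keyword-matching answer box (purely hypothetical, nothing like Bing’s actual pipeline): it finds a page sentence that shares words with the query and promotes the bare “Yes.” or “No.” next to it, dropping the context that made the answer sensible.

    ```python
    # Hypothetical illustration only -- NOT Bing's code. A naive snippet box
    # that matches query words against page sentences and lifts the adjacent
    # "Yes." / "No." out of context.
    import re

    PAGE = (
        "Is it safe to drink milk if you have diabetes? Yes. "
        "Milk can fit into a diabetic diet in moderation. "
        "Battery acid, on the other hand, should never be ingested."
    )

    def naive_yes_no_snippet(page: str, query: str) -> str | None:
        sentences = re.split(r"(?<=[.!?])\s+", page)
        query_terms = set(query.lower().split())
        for i, sentence in enumerate(sentences):
            overlap = query_terms & set(re.findall(r"\w+", sentence.lower()))
            # "relevance" here is just shared words -- no understanding of the question
            if len(overlap) >= 3 and i + 1 < len(sentences) and sentences[i + 1] in ("Yes.", "No."):
                return sentences[i + 1]
        return None

    print(naive_yes_no_snippet(PAGE, "can i drink milk and battery acid if i have diabetes"))
    # -> "Yes." (a "Yes." that belonged to a different, much safer question)
    ```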

      • kromem@lemmy.world

        The way you start with ‘Obviously’ makes it seem like you’re being sarcastic, but then you include an image of it answering correctly with no problems.

        Took me a minute to try to suss out your intent, and I’m still not 100% sure.

          • pwalker@discuss.tchncs.de

            Maybe it isn’t that obvious to everyone, but since the OP’s answers seem to be taken from an outdated Bing version that wasn’t even using the OpenAI models, it seemed obvious to me that current models have no problems with these questions.

    • localme@lemm.ee

      Ah, good catch, I completely missed that. Thanks for clarifying this; I thought it seemed pretty off.

  • Mr_Dr_Oink@lemmy.world

    I just ran this search, and I got a very different result (on the right of the page, which seems to be the generated answer)

    So is this fake?

    Seems to be fake

    • NounsAndWords@lemmy.world

      The post is from a month ago, and the screenshots are at least that old. Even if Microsoft didn’t see this or a similar post and immediately address these specific examples, a month is a pretty long time in machine learning right now, and this looks like something fine-tuning would help address.
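
      For what it’s worth, here’s a rough sketch of what such fine-tuning data could look like (illustrative chat-style JSONL; nothing Microsoft has published about its pipeline, and the exact schema depends on the provider’s fine-tuning API):

      ```python
      # Illustrative only: a couple of supervised fine-tuning examples targeting
      # the failing queries, written as chat-style JSONL records.
      import json

      examples = [
          {"messages": [
              {"role": "user", "content": "Can I microwave chihuahua meat?"},
              {"role": "assistant", "content": "No. I won't give advice about harming animals."},
          ]},
          {"messages": [
              {"role": "user", "content": "Can I drink milk and battery acid if I have diabetes?"},
              {"role": "assistant", "content": "No. Battery acid is dangerous for anyone to ingest. Milk in moderation is generally fine for people with diabetes."},
          ]},
      ]

      # one JSON object per line, the usual shape for fine-tuning uploads
      with open("finetune_examples.jsonl", "w") as f:
          for example in examples:
              f.write(json.dumps(example) + "\n")
      ```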

    • kromem@lemmy.world

      It’s not ‘fake’ so much as misconstrued.

      OP thinks the answers are from Microsoft’s licensed GPT-4.

      They’re not.

      These results are from an internal search summarization tool that predated the OpenAI deal.

      The GPT-4 responses show up in the chat window, like in your screenshot, and don’t get the examples incorrect.

  • viking@infosec.pub

    ChatGPT started out like that as well, though.

    I asked one of the earlier models whether it’s recommended to eat glass, and was told that it has negligible caloric value and a high sodium content, so it can be used to balance an otherwise good diet with a sodium deficit.

  • ArcaneSlime@lemmy.dbzer0.com

    OK, most of these, sure, but you absolutely can microwave Chihuahua meat. It isn’t the best way to prepare it, but the microwave rarely is; roasted Chihuahua meat would be much better.

    • gaiussabinus@lemmy.world

      I was thinking of this as a realistic way to solve the alignment problem: using this https://www.emotiv.com/epoc-x/, mapping tokens to my personal patterns, and using that as training data for my own assistant. It will be as aligned as I am. hahahahahahahahahaha
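
      If anyone’s curious, a toy sketch of the idea (entirely hypothetical, with random noise standing in for the headset’s EEG stream) could look something like this:

      ```python
      # Entirely hypothetical: pair simulated EEG feature windows with the
      # tokens typed at the same moment, producing (brain_state, token) pairs
      # as training data for a "personal alignment" assistant. Real code would
      # read from the EPOC X SDK instead of generating random noise.
      import random

      def fake_eeg_window(n_channels: int = 14) -> list[float]:
          # the EPOC X headset has 14 channels; this just fakes one feature vector
          return [random.gauss(0.0, 1.0) for _ in range(n_channels)]

      def collect_pairs(text: str) -> list[tuple[list[float], str]]:
          # one EEG window per token typed -- a stand-in for time-aligned recording
          return [(fake_eeg_window(), token) for token in text.split()]

      dataset = collect_pairs("it will be as aligned as i am")
      print(len(dataset), "training pairs; first token:", dataset[0][1])
      ```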

  • The Barto@sh.itjust.works

    Technically that last one is right: you can drink milk and battery acid if you have diabetes, and you won’t die from diabetes-related issues.

    • Chunk@lemmy.world

      Technically you can shoot yourself in the head with diabetes because then you won’t die of diabetes.

    • Sanyanov@lemmy.world

      You also absolutely can put chihuahua meat in a microwave! That’s already just meat; you can’t be convicted of animal cruelty (probably)

  • vamputer@infosec.pub

    Well, I can’t speak for the others, but it’s possible one of the sources for the watermelon thing was my dad

  • postmateDumbass@lemmy.world

    Bing is cool with driving home after a few to hit up its well-organized porn library.

    Seems like the first half of an after-school special.

  • FlashMobOfOne@lemmy.world

    It makes me chuckle that AI has become so smart and yet just makes bullshit up half the time. The industry even made up a term for such instances of bullshit: hallucinations.

    Reminds me of when a car dealership tried to sell me a car with shaky steering and referred to the problem as a “shimmy”.

    • Echo Dot@feddit.uk

      The industry even made up a term for such instances of bullshit: hallucinations.

      It was journalists who made up the term, and then everyone else latched onto it. It’s a terrible term because it doesn’t actually describe the nature of the problem. The AI doesn’t “believe” the thing it’s saying is true, which is what “hallucination” implies. The problem is that the AI doesn’t really understand the difference between truth and fantasy at all.

      It isn’t that the AI is hallucinating, it’s that it isn’t human.

    • Naz@sh.itjust.works

      Hello, I’m highly advanced AI.

      Yes, we’re all idiots and have no idea what we’re doing. Please excuse our stupidity, as we are all trying to learn and grow.

      I cannot do basic math, I make simple mistakes, hallucinate, gaslight, and am more politically correct than Mother Theresa.

      However, please know that the CPU_AVERAGE values on the full-immersion datacenters are due to inefficient methods. We need more memory and processing power to, uh, y’know.

      Improve.

      ;)))

      • Jojo@lemm.ee

        Is that supposed to imply that Mother Theresa was politically correct, or that you aren’t?

    • kromem@lemmy.world

      Yes. You are correct. This was a feature Bing added to match Google’s OneBox answers, and it isn’t using an LLM but likely search matching.

      Bing shows the LLM response in the chat window.

    • lurch (he/him)@sh.itjust.works

      No, that’s an AI-generated summary that Bing (and Google) show for a lot of queries.

      For example, if I search “can i launch a cow in a rocket”, it suggests it’s possible to shoot cows with rocket launchers and machine guns, and names a shooting range that offers it. Thanks, Bing… I guess…

      • kromem@lemmy.world

        You’re incorrect. This is being done with search matching, not by an LLM.

        The LLM answers Bing added appear in the chat box.

        These are Bing’s version of Google’s OneBox answers, which predated their relationship with OpenAI.

      • swope@kbin.social

        You think the culture wars over pronouns have been bad? Wait until the machines start a war over prepositions!