The link above is for the petition.

Here is the letter:

Our Letter To WhatsApp:

WhatsApp needs to implement these product changes during polling days and in the month before and after elections:

  • Add friction to forwarding messages: Reduce the ease with which messages can be forwarded on the platform by adding one additional step which nudges users to pause and reflect before they forward content.
  • Add disinformation warning labels to viral content: Automatically add clear “Highly forwarded: please verify” warning labels to viral messages, in addition to the “forwarded many times” label currently in use.
  • Reduce WhatsApp’s broadcast capabilities: Disable the Communities feature, limit the size of broadcast lists to 50 people, and cap their usage to twice a day (a rough sketch of these caps follows this list).
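
The demands above amount to simple rule-based checks on forward counts and broadcast usage. Below is a minimal sketch of how such rules could be expressed, in Python; the forward-count threshold, function and class names are hypothetical, and this is not WhatsApp's actual implementation, only an illustration of the caps the letter asks for (50 recipients, two broadcasts per day).

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

# Illustrative values only: the forward threshold is an assumption, and the
# broadcast caps are the numbers the letter asks for, not WhatsApp's own.
VIRAL_FORWARD_THRESHOLD = 5      # assumed cutoff for "forwarded many times"
MAX_BROADCAST_RECIPIENTS = 50    # proposed broadcast-list size cap
MAX_BROADCASTS_PER_DAY = 2       # proposed daily broadcast cap


def warning_label(forward_count: int) -> Optional[str]:
    """Pick a label for a message based on how many times it has been forwarded."""
    if forward_count >= VIRAL_FORWARD_THRESHOLD:
        return "Highly forwarded: please verify"
    if forward_count > 0:
        return "Forwarded"
    return None


@dataclass
class BroadcastLimiter:
    """Tracks one sender's broadcasts and enforces the proposed caps."""
    sent_today: int = 0
    day: date = field(default_factory=date.today)

    def may_broadcast(self, recipients: List[str]) -> bool:
        today = date.today()
        if today != self.day:                        # new day: reset the counter
            self.day, self.sent_today = today, 0
        if len(recipients) > MAX_BROADCAST_RECIPIENTS:
            return False                             # broadcast list too large
        if self.sent_today >= MAX_BROADCASTS_PER_DAY:
            return False                             # daily cap already reached
        self.sent_today += 1
        return True
```

Under these hypothetical rules, warning_label(7) would return the proposed viral-message label, and a third may_broadcast call from the same sender on the same day would be refused.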

Without decisive action from WhatsApp, disinformation attacks will likely scale up in 2024, aimed at manipulating and undermining elections affecting half of the world’s population. WhatsApp must act to change its product to protect election integrity.

  • The Snark Urge@lemmy.world · 4 months ago

    “Please verify” is not enough of a red flag to overcome confirmation bias. People have to be reminded to seek disconfirming evidence, e.g. “Highly forwarded: this link is likely propaganda; consider the writer’s motivations and other views on the subject.”

    • Otter@lemmy.ca (OP) · 4 months ago

      A downside to a statement like this would be the ‘crying wolf’ effect: if that message pops up on information people know to be true, shared precisely because it is important or relevant, then they are less likely to care.

      A neutral message would help prevent that

  • gedaliyah@lemmy.world · 4 months ago

    It seems like messaging services are particularly prone to misinformation campaigns, since it is much more difficult to audit what is happening on the platform. How is a service like Messenger or WhatsApp (both Meta) going to monitor the content of messages in a way that is safe for users? How would researchers identify and track this information?

    I know that the most outlandish content I see as a highly connected individual tends to come from these platforms. I do my best to educate when I see it, but I doubt it has much of a lasting impact.

    It’s depressing and a little frightening to know how easily and cheaply our electorate is manipulated, and to see it happening in real time.