• grinde@programming.dev · 1 year ago

      This is second-hand, so take it with a grain of salt, but I’ve seen mention of a bug that sometimes causes the same GraphQL query to be executed in an infinite loop (presumably the requests are async, so the browser wouldn’t lock up and the user wouldn’t even notice).

      So they may essentially be getting DDoSed by their own users due to a bug on their end.

      Edit: better info: https://sfba.social/@sysop408/110639435788921057
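
      A failure mode like the one described above (the same query firing endlessly while the page stays responsive) can come from something as small as an unconditional retry handler. Here is a minimal, purely illustrative sketch; `runQuery` and the query text are hypothetical, and this is not Twitter’s actual code:

```javascript
// Hypothetical sketch of a runaway retry loop (not Twitter's real code).
// Each failure schedules the exact same async query again, with no retry
// cap and no backoff, so a persistent server error produces an endless
// stream of identical requests without ever blocking the UI thread.
function fetchTimeline(runQuery, onResult) {
  runQuery("query Timeline { ... }")
    .then(onResult)
    .catch(() => {
      // Bug: retry unconditionally on any error.
      fetchTimeline(runQuery, onResult);
    });
}
```

      If the server keeps erroring (say, during an outage), every affected client turns into a polite little flood generator, which matches the “DDoSed by their own users” description.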

      • interolivary@beehaw.org · 1 year ago

        Oh yeah, I completely forgot about that particular idiocy; Elmo gets up to so much stupid shit that it’s hard to keep track.

        But I’d also be willing to bet money on this being somehow at least partially tied to ditching GCP, likely because they couldn’t pay (at least that’s what’s implied by them refusing to pay the bill). I guess Elmo thought “how hard can running some servers be? I’m a rokit skientist” and decided to just skip paying the bill as a power move instead of trying to make a deal with Google, and now the remaining developers, ops people, etc. – those poor bastards – are paying the price.

      • fidodo@beehaw.org · 1 year ago

        That’s my bet too. They weren’t hosting the site itself on GCP, but they were using it for trust and safety services, and I bet one of those services was anti-scraping protection – things like IP blocking and captchas – which would explain why scraping suddenly became a problem for them the day their contract ended. It can’t be a coincidence.

    • Scrubbles@poptalk.scrubbles.tech · 1 year ago

      Or it could be. It’s no coincidence that scraping went way up when he started charging for the API.

      Everyone with a brain knows that data will be retrieved somehow; the question is whether you want to offer a lower-cost API option or have them scrape the whole webpage.

    • AChiTenshi@vlemmy.net · 1 year ago

      I suspect some of the backend is starting to fail, so the servers can’t keep up with demand.