• 1 Post
  • 13 Comments
Joined 1 year ago
Cake day: June 20th, 2023





  • The headline is pretty misleading. Reading it, I was imagining Nigerian Prince scams. But in the article, they state “Compared to older generations, younger generations have reported higher rates of victimization in phishing, identity theft, romance scams, and cyberbullying.”
    Teens get bullied more than the elderly? Say it ain’t so!
    While GenZ is, according to their source, also the generation with the highest percentage of victims of phishing scams, it’s actually millennials who fall for identity theft and romance scams the most.

    The article also states that the “cost of falling for those scams may also be surging for younger people: Social Catfish’s 2023 report on online scams found that online scam victims under 20 years old lost an estimated $8.2 million in 2017. In 2022, they lost $210 million.”
    The source for Social Catfish’s claim is data released in 2023 by the FBI Internet Crime Complaint Center. According to that data, in 2022 there were 15,782 internet crime complaints from victims under 20, totaling $210.5 million in losses. In the same year, there were 88,262 complaints from victims over 60, totaling $3.1 billion in losses.
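    For some rough perspective, dividing those totals by the complaint counts gives the average loss per complaint in each age group (back-of-the-envelope arithmetic from the figures above, not a number reported by the article):

    ```python
    # 2022 FBI IC3 figures cited above: complaint counts and total losses by age group.
    under_20 = {"complaints": 15_782, "losses": 210_500_000}
    over_60 = {"complaints": 88_262, "losses": 3_100_000_000}

    for label, group in (("under 20", under_20), ("over 60", over_60)):
        avg = group["losses"] / group["complaints"]
        print(f"{label}: ${avg:,.0f} lost per complaint")
    # under 20: $13,338 lost per complaint
    # over 60: $35,123 lost per complaint
    ```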

    Every generation since the beginning of time has claimed that the following generation was rude, stupid, and had stopped doing things the “right way” like we used to in the good old days. It has always been bullshit, and it will always be bullshit. Stop stressing, the kids are alright.







  • True, GPT does not return a “yes” or “no” 100% of the time in either case, but that’s not the point. The point is that it’s impossible to tell from this test set whether GPT has actually gotten better or worse at identifying prime numbers. Since the test set is composed only of prime numbers, we do not know whether GPT is more likely to call a number “prime” when it actually is prime than when it isn’t. All we know is that it was very likely to answer “yes” to the question “is this number prime?” in March, and very likely to answer “no” in July. We do not know whether the number itself makes a difference.
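    To illustrate why (a toy sketch, not the study’s actual code or GPT’s actual answers): on a primes-only test set, a “model” that always answers “yes” scores 100% and one that always answers “no” scores 0%, even though neither looks at the number at all.

    ```python
    def is_prime(n: int) -> bool:
        """Trial-division primality check; fine for small test numbers."""
        if n < 2:
            return False
        return all(n % d for d in range(2, int(n ** 0.5) + 1))

    # Toy stand-ins for the March and July behavior described above.
    always_yes = lambda n: "yes"
    always_no = lambda n: "no"

    # A primes-only test set, like the one the study reused.
    primes_only = [n for n in range(2, 10_000) if is_prime(n)][:500]

    def accuracy(model, numbers):
        """Fraction of numbers where the model's answer matches the ground truth."""
        return sum((model(n) == "yes") == is_prime(n) for n in numbers) / len(numbers)

    print(accuracy(always_yes, primes_only))  # 1.0 - looks perfect
    print(accuracy(always_no, primes_only))   # 0.0 - looks useless
    # Neither toy model knows anything about primality; this test set just can't tell.
    ```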



  • Damn, you’re right. The study has not been peer reviewed yet according to the article, and in my opinion, it really shows. For anyone who doesn’t want to actually read the study:

    They took the set of questions from a different study (which is fine). The original study used a set of 500 randomly chosen prime numbers, asked ChatGPT whether each was prime, and asked it to explain its reasoning. They did this to see whether, in the cases where ChatGPT got the question wrong, it would try to back up its wrong answer with more faulty reasoning - a dataset with only prime numbers is perfectly fine for that initial question.

    The study in the article appears to be trying to answer two questions - is there significant drift in the answers ChatGPT gives, and is ChatGPT getting better or worse at answering questions. The dataset is perfectly fine for answering the first question, but completely inadequate for answering the second, since an AI that simply thinks all numbers are prime would be judged as having perfect accuracy! Some good peer review would never let that kind of thing slide.
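    As a sketch of what a more adequate test set would catch (again a toy example, not the study’s code): mix composites in, and the “everything is prime” strategy drops to the base rate instead of looking perfect.

    ```python
    import random

    def is_prime(n: int) -> bool:
        if n < 2:
            return False
        return all(n % d for d in range(2, int(n ** 0.5) + 1))

    random.seed(0)
    # Balanced test set: 250 primes and 250 composites drawn from the same range.
    primes = random.sample([n for n in range(2, 10_000) if is_prime(n)], 250)
    composites = random.sample([n for n in range(4, 10_000) if not is_prime(n)], 250)
    test_set = primes + composites

    # The degenerate model described above: it thinks every number is prime.
    always_prime = lambda n: True

    correct = sum(always_prime(n) == is_prime(n) for n in test_set)
    print(correct / len(test_set))  # 0.5 - exactly the share of primes in the set
    ```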