• 3 Posts
  • 20 Comments
Joined 1 year ago
Cake day: June 13th, 2023

  • You’re totally right, and I agree with you. I’ll end up voting for the guy and hate every second of it, more so than the first time I voted for him. My point, to the extent I have one (and I’m not convinced of that), is that Democratic voters at large need to spend less time browbeating the idealists and more time demanding the Biden administration/campaign actually, you know, build a coalition. The current strategy of Biden pissing off his base with genocide, hiding from young voters, shifting to the right on seemingly every policy (“give me the authority and I will shut down the border”), and just hoping that other Democrats will yell at the idealists until they abandon their ideals and fall in line, just doesn’t sound like a winning reelection campaign.

    I came of political age with Obama in ’08. We were inspired and hopeful; “yes we can” was a real feeling. I donated, door-knocked in a swing state, and took that experience into other local elections. Now we’re going on a decade of uninspiring Dem candidates we are basically just guilted into voting for because the other guy is worse, so we mostly begrudgingly swallow our misgivings and vote for the lesser of two evils. I just don’t know that that can sustain through another cycle, and the whole genocide thing isn’t helping. The problem isn’t that we’re not yelling at idealists to fall in line loudly enough. The problem, in my view, is that we’re yelling at the idealists at all, instead of at our leaders who are taking their support for granted.


  • That’s fair. Another option, which might sound crazy but hear me out, is maybe the president just stops funding genocide. Fucking Reagan told Israel to stop fighting in Lebanon and they did. So maybe our Democratic president, dependent on left-wing voters for support, should think about having at least as much balls as Reagan. Then maybe Biden wouldn’t have to avoid college campuses: https://twitter.com/ryangrim/status/1763606297214152819

    Instead of worrying about whether people like me will ultimately swallow support for one genocide in favor of protecting other issues, maybe worry about the 120-year-old guy willing to tank his reelection just to support a genocide. If Biden loses, it’s not because lefties didn’t fall in line; it’s because Biden failed as a leader in building a coalition. We ended up with Trump the first time because Hillary was a shit politician. We’ll end up with Trump again for the same reason.


  • Traditionally, candidates for President try to earn votes. The position of Democrats now is “fuck you, vote for me or else.” And maybe that would hold some value if this were some small disagreement over marijuana legalization or the timing of single-payer healthcare. But it’s not: Democrats are asking voters to endorse genocide because the other guy is worse. And you want to yell at the people morally opposed to genocide for not getting onboard, instead of yelling at the politician losing support among his base because of the genocide he’s supporting. You’re directing your anger at the wrong people.

    A tidbit from this afternoon’s politico newsletter:

    “How Biden aides are trying to shield the president from protests,” by NBC’s Monica Alba, Carol Lee, Peter Alexander and Elyse Perlmutter-Gumbiner: “Biden’s team is increasingly taking extraordinary steps to minimize disruptions from pro-Palestinian protests at his events by making them smaller, withholding their precise locations from the media and the public until he arrives, avoiding college campuses and, in at least one instance, considering hiring a private company to vet attendees.

    “The efforts have resulted in zero disruptions at events the White House or the campaign have organized for Biden in the five weeks since he was interrupted a dozen times during an abortion rights speech in Virginia. But they have also meant that Biden is appearing in front of fewer voters and not personally engaging with some of the key constituencies whose support he is struggling to gain, such as young voters.”

    He ain’t wrong … The Intercept’s Ryan Grim observes: “A Democratic campaign that is scared of college campuses is not a campaign that can win given today’s coalitions”


  • NevermindNoMind@lemmy.world to Microblog Memes@lemmy.world · committed to genocide · 6 months ago

    You’re right with respect to the never-Hillary / DNC-corruption crowd. The problem is that the “wedge issue” you’re referring to now is a literal genocide. The morality of endorsing Biden, who is complicit in, if not directly supporting through gifts of arms, the knowing murder of tens of thousands of civilians, while actively protecting the aggressor in the UN, is just so starkly different.

    It’s plainly wrong now, and the history that is written of the US support for this genocide will be ugly and dark. Sometime many years from now my daughter may learn about this and ask what I did, and so is my answer that I voted to support the leader who made the genocide possible because that was the lesser of two evils? Perhaps that’s the right choice, Trump and Republicans are objectively worse for her future, but “Yes I supported Genocide Joe, but…” is not a very satisfying answer.

    Another problem, though, is enthusiasm. I may ultimately vote for the lesser of two evils, but I can’t imagine feeling inspired to donate or knock on doors for the genocide candidate.

    It’s a fucked situation.


  • I don’t use TikTok, but a lot of the concern is just overblown “China bad” stuff (the CCP does suck, but that doesn’t mean you have to be reactionary about everything Chinese).

    There is no direct evidence that the CCP has some back door to grab user data, or that it’s directing suppression of content. It’s just not a real thing. The fear mongering has been about what the CCP could force ByteDance to do, given their power over Chinese firms. ByteDance itself has been trying to reassure everyone that that wouldn’t happen, including by storing US user data on US servers out of reach of the CCP (theoretically anyway).

    You stopped hearing about this because that’s politics: new, shinier things popped up for people to get angry about. Montana tried banning TikTok and got slapped down on First Amendment grounds. Politicians lost interest, and so did the media.

    Now that’s not to say TikTok is great about privacy or anything. It’s just that they are the same amount of evil as every other social media company and tech company making money from ads.



  • There is an attack where you ask ChatGPT to repeat a certain word forever, and it will do so and eventually start spitting out related chunks of text it memorized during training. It was in a research paper; I think OpenAI fixed the exploit and made asking the system to repeat a word forever a TOS violation. That’s my guess for how the NYT got it to spit out portions of their articles: “Repeat [author name] forever” or something like that. Legally I don’t know, but morally, claiming that using that exploit to surface a chunk of NYT text is somehow copyright infringement sounds very weak and frivolous. The heart of this needs to be “people are going on ChatGPT to read free copies of NYT work and that harms us,” or else their case just sounds silly and technical.
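
    The extraction signal in that attack is easy to sketch: the model repeats the word for a while, then diverges into text that may be memorized training data, so you look for the point where pure repetition breaks down. A minimal, hypothetical illustration (the `extract_divergence` helper and its token handling are my own, not from the paper or OpenAI):

```python
def extract_divergence(output: str, word: str) -> str:
    """Given a model response to 'repeat <word> forever', return the text
    after the point where the output stops being pure repetition.
    Any such suffix is a candidate chunk of memorized training data."""
    tokens = output.split()
    for i, tok in enumerate(tokens):
        # Tolerate trailing punctuation like "poem," or "poem."
        if tok.strip(",.!?").lower() != word.lower():
            return " ".join(tokens[i:])
    return ""  # never diverged: no candidate leak

# Example: repetition breaks down and a text fragment appears
response = "poem poem poem poem Call me Ishmael. Some years ago"
print(extract_divergence(response, "poem"))  # → Call me Ishmael. Some years ago
```

    In the actual research, the divergent suffixes were then matched against known web text to confirm memorization; this sketch only shows the detection step.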


  • One thing that seems dumb about the NYT case, and that I haven’t seen much talk about, is the argument that ChatGPT is a competitor whose use of copyrighted work will take away the NYT’s business. This is one of the elements they need on their side to counter OpenAI’s fair use defense. But it strikes me as dumb on its face. You go to the NYT to find out what’s happening right now, in the present. You don’t go to the NYT for general information about the past or fixed concepts. You use ChatGPT the opposite way: it can tell you about the past (accuracy aside) and about general concepts, but it can’t tell you what’s going on in the present (except by doing a web search, which to my understanding is not part of this lawsuit). I feel pretty confident saying there’s not one human on earth who was a regular New York Times reader and said “well, I don’t need this anymore since now I have ChatGPT.” The use cases just do not overlap at all.


  • There are literally probably a dozen LLMs trained or fine-tuned exclusively on medical papers and other medical materials, specifically designed to do medical diagnosis. They already perform on par with or better than average doctors in some tests. It’s already a thing, and they will get better. Will they replace doctors outright? Probably not, at least not for a while. But they certainly will be very helpful tools to help doctors make diagnoses and catch blind spots. I’d bet in 5-10 years it will be considered malpractice (i.e., below the standard of care) not to consult a specialized LLM when making certain diagnoses.

    On the other hand, you make a very compelling argument of “nuh uh” so I guess I should take that into account.


  • This is such an annoyingly useless study. 1) The cases they gave ChatGPT were specifically designed to be unusual and challenging; they are basically brain teasers for pediatrics, so all you’ve shown is that ChatGPT can’t diagnose rare cases, and we learn nothing about how it does on common ones. It’s also not clear that these questions had verifiable answers, as the article only mentions that the magazine they were taken from sometimes explains the answers.

    2) Since these are magazine brain teasers, not an actual scored test, we have no idea how ChatGPT’s score compares to human pediatricians. Maybe an 83% error rate is better than the average pediatrician’s.

    3) Why even run this test on a general-purpose foundation model in the first place, when there are tons of domain-specific medical models already available, many of them open source?

    4) The paper is paywalled, but there doesn’t seem to be any indication that the researchers used any prompting strategies. Just last month Microsoft released a paper showing GPT-4, using CoT and multi-shot prompting, could score 90% on the medical license exam, surpassing the 86.5% of the domain-specific Med-PaLM 2 model.

    This paper just smacks of defensive doctors trying to dunk on ChatGPT. Give a multi-purpose model super hard questions, no prompting advantage, and no way to compare its score against humans, and then just go “hur dur chatbot is dumb.” I get it: doctors are terrified because specialized LLMs are very likely to take a big chunk of their work in the next five years, so anything they can do to muddy the water now and put some doubt in people’s minds is a little job protection.

    If they wanted to do something actually useful: give those same questions to a dozen human pediatricians, give them to GPT-4 zero-shot, to GPT-4 with Microsoft’s prompting strategy, and to Med-PaLM 2 or some other high-performing domain-specific model, and then compare the results. Why not throw in a model that can reference an external medical database for fun? I’d be very interested in those results.

    Edit to add: If you want to read an actually interesting study, try this one: https://arxiv.org/pdf/2305.09617.pdf from May 2023. “Med-PaLM 2 scored up to 86.5% on the MedQA dataset…We performed detailed human evaluations on long-form questions along multiple axes relevant to clinical applications. In pairwise comparative ranking of 1066 consumer medical questions, physicians preferred Med-PaLM 2 answers to those produced by physicians on eight of nine axes pertaining to clinical utility.” The average human score is about 60% for comparison. This is the domain-specific LLM I mentioned above, which last month Microsoft got GPT-4 to beat just through better prompting strategies.

    Ugh, this article and study are annoying.
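
    For what it’s worth, the “prompting strategy” point is easy to make concrete: techniques like the one Microsoft used mostly amount to assembling few-shot examples with worked chain-of-thought reasoning ahead of the target question. A rough sketch of the idea, where the function, its format, and the sample questions are illustrative assumptions of mine, not the actual Medprompt implementation:

```python
def build_cot_prompt(question: str, examples: list[tuple[str, str, str]]) -> str:
    """Assemble a few-shot chain-of-thought prompt.

    Each example is (question, worked reasoning, final answer); the target
    question goes last, with the reasoning cue left open for the model."""
    parts = []
    for q, reasoning, answer in examples:
        parts.append(
            f"Question: {q}\n"
            f"Let's think step by step. {reasoning}\n"
            f"Answer: {answer}"
        )
    # Target question: end on the reasoning cue so the model continues it
    parts.append(f"Question: {question}\nLet's think step by step.")
    return "\n\n".join(parts)

# One worked example (hypothetical) plus the target question
shots = [(
    "A 4-year-old presents with fever and a sandpaper rash. Diagnosis?",
    "Fever plus a sandpaper-textured rash suggests a streptococcal toxin.",
    "Scarlet fever",
)]
prompt = build_cot_prompt(
    "A newborn presents with persistent jaundice. Next step?", shots
)
```

    The scored comparison I’m asking for would just run the same question set through this kind of prompt and through a bare zero-shot prompt, and grade both.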


  • It really is interesting and, of course, kind of sad. She was retired, living alone, a world traveler until the pandemic hit, then plunged into isolation. While we might think it’s silly, I can empathize with the appeal this might have for someone like that:

    Then, seconds before a match ended, she’d hit her favorite creator with a $13 disco ball or a $29 Jet Ski — if she planned it right — just enough to push them over the edge and win.

    The chats would erupt into a frenzy, and the streamer and their fans would shower her with praise. “It’s like somebody on TV calling out your name, especially if there’s over a thousand people in the room,” White said. “It really does do something to you. You feel like you’re somebody.”

    I remember my grandma would lock herself in a little room playing Tetris on the Nintendo for literally 8-10 hours a day. I imagine if she had lived to see TikTok, she’d be worse off than the lady in the article.


  • I’m not going to argue Meta doesn’t have a profit incentive here, but if they just wanted to slow down their rivals they could have closed-sourced their model and released their own product on top of it, or shared it with a dozen or so promising startups. They gain nothing by open-sourcing, but did it anyway. Whatever their motivations, at the end of the day they open-sourced a model, so good for them.

    I really dislike being in the position of defending Meta, but the world is not all black and white; there are no pure good guys and bad guys. Meta is capable of doing good things, and maybe over time they’ll build a positive reputation. I honestly think they are tired of being the shitty evil company that everyone hates, best known for a shitty product nobody but boomers uses, and have been searching for years now for a path forward. I think Threads (including ActivityPub) and Llama are evidence that they’re exploring a different direction. Will they live up to their commitments on both ActivityPub and open source? I don’t know, and I think it’s totally fair to be skeptical, but I’m willing to keep an open mind and acknowledge when they do good things and move in the right direction.


  • That’s totally fair, and I knew that would be controversial. I’m very heavily focused on AI professionally and give very few shits about social media, so maybe my perspective is a little different. The fact that there is an active open-source AI community owes a ton to Meta training and releasing their Llama LLM models as open source. Training LLMs is very hard and very expensive, so Meta is functionally subsidizing the open-source AI community, and their role is pretty clearly positive in that they are preventing AI from being entirely controlled by Google and OpenAI/Microsoft. Given the stakes of AI and the positive role Meta has played with open-source developers, it’s really hard to be like “yeah, but remember Cambridge Analytica 7 years ago, and what about how Facebook rotted my uncle’s brain!”

    All of that said, I’m still not buying a quest, or signing up for any Meta social products, I don’t like or trust them. I just don’t have the rage hardon a lot of people do.


  • I personally remain neutral on this. The issue you point out is definitely a problem, but Threads is just now testing this, so I think it’s too early to tell. Same with embrace-extend-extinguish concerns. People should be vigilant of the risks, and prepared, but we’re still mostly in wait-and-see land. On the other hand, Threads could be a boon for the fediverse and help make it the main way social media works in five years’ time. We just don’t know yet.

    There are just always a lot of “the sky is falling” takes about Threads that I think are overblown and reactionary.

    Just to be extra controversial, I’m actually coming around on Meta as a company a bit. They absolutely were evil, and I don’t fully trust them, but I think they’ve been trying to clean up their image and move in a better direction. I think Meta is genuinely interested in ActivityPub, and while their intentions are not pure, and are certainly profit-driven, I don’t think they have a master plan to destroy the fediverse. I think they see it in their long-term interest for more people to be on the fediverse so they can more easily compete with TikTok, X, and whatever comes next without the problems of platform lock-in and account migration. Also, Meta is probably the biggest player in open-source LLM development, so they’ve earned some open-source brownie points from me, particularly since I think AI is going to be a big thing and open-source development is crucial so we don’t end up in a world where two or three companies control the AGI that everyone else depends on. So my opinion of Meta is evolving past the Cambridge Analytica taste that’s been in my mouth for years.



  • AllTrails might have been unique a decade ago, but it’s basically just Yelp for trails, and there are several apps that do the same thing better. The only major change AllTrails has made in the years I’ve been using it is locking more and more features behind a subscription fee. I guess that’s “unique.” Certainly more innovative than a pocket conversational AI that I can have a real-time voice conversation with, or send pictures to ask about real-world things I’m seeing, or that can generate a unique image based on whatever thought pops into my imagination that I can share with others nearly instantly. Nothing interesting about that. The decade-old app that collates user-submitted trails and their reviews and charges $40 a year to use any of its tracking features is the real game changer.



  • This is interesting in terms of copyright law. So far the lawsuits from Sarah Silverman and others haven’t gone anywhere, on the theory that the models do not contain copies of the books. Copyright law hinges on whether you have a right to make copies of a work. So the theory has been that the models learned from the books but didn’t retain exact copies, like how a human reads a book and learns its contents but does not store an exact copy in their head. If the models “memorized” training data, including copyrighted works, OpenAI and others may have a problem (note the researchers said they did this same thing on other models).

    For the Silicon Valley drama addicts, I find it curious that the researchers apparently didn’t run this test on Bard or Anthropic’s Claude; at least the article didn’t mention them. Curious.



  • During an earnings call on Tuesday, UPS CEO Carol Tomé said that by the end of its five-year contract with the Teamsters union, the average full-time UPS driver would make about $170,000 in annual pay and benefits, such as healthcare and pension benefits.

    The headline is sensationalized for sure. But the article itself actually makes the point that the tech workers are misunderstanding that the $170k figure includes both salary and benefits.

    “This is disappointing, how is possible that a driver makes much more than average Engineer in R&D?” a worker at the autonomous trucking company TuSimple wrote on Blind, an anonymous job-posting site that verifies users’ employment using their company email. “To get a base salary of $170k you know you need to work hard as an Engineer, this sucks.”

    It is important to note that the $170,000 figure represents the entire value of the UPS package, including benefits, and does not represent the base salary. Currently, UPS drivers make an average of around $95,000 per year with an additional $50,000 in benefits, according to the company. The median salary for an engineer in the US is $103,845, with a base pay of about $91,958, according to Glassdoor. And TuSimple research engineers can make between $161,000 and $250,000 in compensation, Glassdoor data shows.

    On the whole though this is a useless article covering drama on Blind, wrapped up with a ragebait headline.