• 9 Posts
  • 68 Comments
Joined 1 year ago
Cake day: September 29th, 2024

  • https://en.wikipedia.org/wiki/Marc_Benioff

    Marc Russell Benioff is an American internet entrepreneur and philanthropist. He is best known as the co-founder, chairman and CEO of the software company Salesforce, as well as being the owner of Time magazine since 2018.

    In January 2023 Benioff announced the mass dismissal of approximately 7,000 Salesforce employees in a two-hour all-hands call, a course of action he later admitted had been a ‘bad idea’.

    In September 2025, Benioff reduced Salesforce’s support workforce from 9,000 to about 5,000 employees because he “need[ed] less heads”. Salesforce stated that AI agents now handle half of all customer interactions and have reduced support costs by 17% since early 2025. The company added it had redeployed hundreds of employees into other departments within the company. The decision contrasted with Benioff’s earlier remarks suggesting that artificial intelligence would augment, rather than replace, white-collar workers.

    https://en.wikipedia.org/wiki/Salesforce

    In September 2024, the company deployed Agentforce, an agentic AI platform where users can create autonomous agents for customer service assistance, developing marketing campaigns, and coaching salespersons.

    Salesforce CEO Marc Benioff stated in a June 2025 interview on The Circuit that artificial intelligence now performs between 30% and 50% of internal work at Salesforce, including functions such as software engineering, customer service, marketing, and analytics. Although he made clear that “humans still drive the future,” Benioff noted that AI is enabling the company to reassign employees into higher-value roles rather than reduce headcount.

    haha consent factory go brrrr


  • How is this keyboard not popular?

    their front page explicitly says “Currently in beta state” and according to their docs installation via Google Play requires joining a beta tester group.

    that means a random user searching “keyboard” on the Play store isn’t going to see it. likewise if a friend told you “I use Florisboard” and you searched for it by name in the Play store. if you’re not already in the beta test group the direct link to the app page literally 404s.

    it’s certainly available to power users who already know they want it, but it’s sort of pointless to ask why it’s not popular at this stage of its development.


  • other brands of snake oil just say “snake oil” on the label…but you can trust the snake oil I’m selling because there’s a label that says “100% from actual totally real snakes”

    “By integrating Trusted Execution Environments, Brave Leo moves towards offering unmatched verifiable privacy and transparency in AI assistants, in effect transitioning from the ‘trust me bro’ process to the privacy-by-design approach that Brave aspires to: ‘trust but verify’,” said Ali Shahin Shamsabadi, senior privacy researcher and Brendan Eich, founder and CEO, in a blog post on Thursday.

    Brave has chosen to use TEEs provided by Near AI, which rely on Intel TDX and Nvidia TEE technologies. The company argues that users of its AI service need to be able to verify the company’s private claims and that Leo’s responses are coming from the declared model.

    they’re throwing around “privacy” as a buzzword, but as far as I can tell this has nothing to do with actual privacy. instead this is more akin to providing a chain-of-trust along the lines of Secure Boot.

    the thing this is aimed at preventing: you use a chatbot, they tell you it’s using ExpensiveModel-69, but behind the scenes they route it to CheapModel-42 while still charging you as if it were ExpensiveModel-69.

    and they claim they’re getting rid of the “trust me bro” step, but:

    Brave transmits the outcome of verification to users by showing a verified green label (depicted in the screenshot below)

    they do this verification themselves and just send you a green checkmark. so…it’s still “trust me bro”?
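    to make that distinction concrete, here’s a toy sketch (entirely hypothetical, not Brave’s actual protocol; real TEE attestation uses hardware-signed quotes, not an HMAC with a shared key, but the shape of the check is the same) of “server sends you a green label” versus “client verifies the attestation itself”:

    ```python
    import hashlib
    import hmac

    # Hypothetical trusted measurement of the promised model, known to the
    # client out-of-band (e.g. published by the model vendor).
    TRUSTED_MEASUREMENT = hashlib.sha256(b"ExpensiveModel-69").hexdigest()
    # Stand-in for the TEE's attestation key (real attestation is asymmetric).
    TEE_SIGNING_KEY = b"stand-in for the TEE attestation key"

    def tee_attest(model_name: str) -> dict:
        """What the enclave would emit: a measurement plus a signature over it."""
        measurement = hashlib.sha256(model_name.encode()).hexdigest()
        sig = hmac.new(TEE_SIGNING_KEY, measurement.encode(), hashlib.sha256).hexdigest()
        return {"measurement": measurement, "signature": sig}

    def server_says_its_fine(quote: dict) -> bool:
        """'trust me bro': the server checks (or doesn't) and just sends a label.
        The client can't distinguish this from a real check."""
        return True

    def client_verifies(quote: dict) -> bool:
        """'trust but verify': the client recomputes the check itself against
        values it trusts, instead of accepting a green label."""
        expected_sig = hmac.new(TEE_SIGNING_KEY, quote["measurement"].encode(),
                                hashlib.sha256).hexdigest()
        return (hmac.compare_digest(quote["signature"], expected_sig)
                and quote["measurement"] == TRUSTED_MEASUREMENT)

    honest_quote = tee_attest("ExpensiveModel-69")
    swapped_quote = tee_attest("CheapModel-42")

    print(client_verifies(honest_quote))        # True
    print(client_verifies(swapped_quote))       # False: measurement mismatch
    print(server_says_its_fine(swapped_quote))  # True: green label, wrong model
    ```

    the point of the sketch: if the verification happens client-side against values you got from somewhere other than Brave, swapping the model is detectable. if Brave does the check and ships you a checkmark, it isn’t.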

    my snake oil even comes with a certificate from the American Snake Oil Testing Laboratory that says it’s 100% pure snake oil.


  • “am I out of touch? no, it’s the customers who are wrong”

    talking to a friend recently about the push to put “AI” into everything, something they said stuck with me.

    oversimplified view of the org chart at a large company - you have the people actually doing the work at the bottom, and then as you move upwards you get more and more disconnected from the actual work.

    one level up, you’re managing the actual workers, and a lot of your job is writing status reports and other documents, reading other status reports, having meetings about them, etc. as you go further up in the hierarchy, your job becomes consuming status reports, summarizing them to pass them up the chain, and so on.

    being enthusiastic about “AI” seems to be heavily correlated with position in that org chart. which makes sense, because one of the few things that chatbots are decent at is stuff like “here’s a status report that’s longer than I want to read, summarize it for me” or “here’s N status reports from my underlings, summarize them into 1 status report I can pass along to my boss”.

    in my field (software engineering) the people most gung-ho about using LLMs have been essentially turning themselves into managers, with a “team” of chatbots acting like very-junior engineers.

    and I think that explains very well why we see so many executives, including this guy, who think LLMs are a bigger invention than sliced bread, and can’t understand the more widespread dislike of them.



  • This is an inflammatory way of saying the guy got served papers.

    ehh…yes and no.

    they could have served the subpoena using registered mail.

    or they could have used a civilian process server.

    instead they chose to have a sheriff’s deputy do it.

    from the guy’s twitter thread:

    OpenAI went beyond just subpoenaing Encode about Elon. OpenAI could (and did!) send a subpoena to Encode’s corporate address asking about our funders or communications with Elon (which don’t exist).

    If OpenAI had stopped there, maybe you could argue it was in good faith.

    But they didn’t stop there.

    They also sent a sheriff’s deputy to my home and asked for me to turn over private texts and emails with CA legislators, college students, and former OAI employees.

    This is not normal. OpenAI used an unrelated lawsuit to intimidate advocates of a bill trying to regulate them. While the bill was still being debated.

    in context, the subpoena and the way in which it was served sure smells like an attempt at intimidation.



  • “Nurses and medical staff are really overworked, under a lot of pressure, and unfortunately, a lot of times they don’t have capacity to provide engagement and connection to patients,” said Karen Khachikyan, CEO of Expper Technologies, which developed the robot.

    tapping the sign: every “AI” related medical invention is built around this assumption that there’s too few medical staff and they’re all overworked and changing that is not feasible. so we have to invest millions of dollars into hospital robots because investing millions of dollars in actually paying workers would be too hard. (also, robots never unionize)

    Robin is about 30% autonomous, while a team of operators working remotely controls the rest under the watchful eyes of clinical staff.

    30%…according to the company itself. they have a strong incentive to exaggerate, and they’re not publishing any data on how they arrived at that figure, so it can’t be independently verified.

    it sounds like they took one of the telepresence robots that’s been around for 10+ years and slapped ChatGPT into it and now they’re trying to fundraise on the hype of being an “AI” company. it’s a good grift if you can make it work.


  • Asshole cars for mostly assholes

    from the article:

    Some firms have reportedly already laid off staff, with the Unite union claiming that workers in the JLR supply chain “are being laid off with reduced or zero pay.” Some have been told to “sign up” for government benefits, the union claims.

    JLR, which is owned by India’s Tata Motors, is one of the UK’s biggest employers, with around 32,800 people directly employed in the country. Stats on the company’s website also claim it supports another 104,000 jobs through its UK supply chain and another 62,900 jobs “through wage-induced spending.”

    regardless of your opinion about the cars or the people who drive them…thousands of people getting furloughed or laid off suddenly is bad.



  • “In other words, these conversations with a social robot gave caregivers something that they sorely lack – a space to talk about themselves”

    so they’re doing a job that’s demanding, thankless, often unpaid (in the case of this study, entirely unpaid, because they exclusively recruited “informal” caregivers)

    and…it turns out talking about it improves their mood?

    yeah, that’s groundbreaking. no one could have foreseen it.

    if you did this with actual humans it’d be “lol yeah that’s just therapy and/or having friends” and you wouldn’t get it published in a scientific paper.

    it’s written up as a “robotics” story but I’m not sure how it being a “robot” changes anything compared to a chatbot. it seems like this is yet another “discovery” of “hey you can talk to an LLM chatbot and it kinda sorta looks like therapy, if you squint at it”.

    (tapping the sign about why “AI therapy” is stupid and trying to address the wrong problem)



  • here is the official NASA press release. primary sources are always preferable, especially compared to this fuckass “digital trends” clickbait website.

    “This finding by Perseverance, launched under President Trump in his first term, is the closest we have ever come to discovering life on Mars. The identification of a potential biosignature on the Red Planet is a groundbreaking discovery, and one that will advance our understanding of Mars,” said acting NASA Administrator Sean Duffy. “NASA’s commitment to conducting Gold Standard Science will continue as we pursue our goal of putting American boots on Mars’ rocky soil.”

    quick fact check: it was launched in 2020, but announced back in 2012. giving Trump credit here is idiotic, but it’s about what you’d expect from Sean Duffy, he’s a Trump crony through-and-through. before becoming acting NASA administrator he was Trump’s Secretary of Transportation, and before that a Republican congressman and reality TV contestant (on The Real World and the *checks notes* Lumberjack World Championship)

    I think it’s important to remember that everything, even basic scientific research, is liable to be politicized if it suits the administration’s ends. so it’s totally possible this biosignature is legitimate, but it’s also totally possible that they’re hyping up questionable findings because they want to persuade Trump that funding a NASA mission to Mars would boost his TV ratings.


  • I haven’t. It was omitted from the article in question. I stand corrected.

    keep standing…because here’s the 5th paragraph of the article:

    Political analyst Matthew Dowd was fired from MSNBC on Wednesday after speaking about Kirk’s death on air. During a broadcast on Wednesday following the shooting, anchor Katy Tur asked Dowd about “the environment in which a shooting like this happens,” according to Variety. Dowd answered: “He’s been one of the most divisive, especially divisive younger figures in this, who is constantly sort of pushing this sort of hate speech or sort of aimed at certain groups. And I always go back to, hateful thoughts lead to hateful words, which then lead to hateful actions. And I think that is the environment we are in. You can’t stop with these sort of awful thoughts you have and then saying these awful words and not expect awful actions to take place. And that’s the unfortunate environment we are in.”


  • a contributor who made an unacceptable and insensitive comment about this horrific event

    have you read the actual statement that got him fired?

    from wikipedia:

    On September 10, 2025, commenting on the killing of Charlie Kirk, Dowd said on-air, “He’s been one of the most divisive, especially divisive younger figures in this, who is constantly sort of pushing this sort of hate speech or sort of aimed at certain groups. And I always go back to, hateful thoughts lead to hateful words, which then lead to hateful actions. And I think that is the environment we are in. You can’t stop with these sort of awful thoughts you have and then saying these awful words and not expect awful actions to take place. And that’s the unfortunate environment we are in.” Dowd also speculated that the shooter may have been a supporter.

    you can agree or disagree with the decision to fire him (I’m not shedding any tears, Dowd was the chief strategist for the 2004 Bush re-election campaign, it’s ludicrous that he was working for a supposedly “progressive” network like MSNBC in the first place)

    but characterizing that statement as “celebrating murder” is just bullshit.



  • My best guess is that you were going for “hypothetical.”

    no, if I meant hypothetical I would have said hypothetical. notice that I gave two hypotheticals - Brinnon-Redmond and Tacoma-Redmond. only the Brinnon one was pathological.

    let’s go back to 9th grade Advanced English and diagram out my comment. that sentence is in a paragraph, the topic of which is “some shit about Seattle’s geography that people who’ve never lived here probably don’t know”. notice I’m talking about geography. I wasn’t saying anything about Brinnon’s population, or the likelihood of its residents working at Microsoft. that was entirely words you put into my mouth and then decided you disagreed with.

    if you think pathological is the wrong word choice there, then no I don’t think you actually understand what it means, at least not in the context I was using it. from wikipedia:

    In computer science, pathological has a slightly different sense with regard to the study of algorithms. Here, an input (or set of inputs) is said to be pathological if it causes atypical behavior from the algorithm, such as a violation of its average case complexity, or even its correctness.

    there’s crow-flies distance and there’s driving distance, and obviously driving distance is always longer, but usually not that much longer. playing around with Google Maps again, Seattle-Tacoma is 25 miles crow-flies but 37 miles driving, for a ratio of 1.5. that seems likely to be about average. the Brinnon-Redmond distance, without the ferry, gives you a ~3.7 ratio. that’s an input that causes significantly worse performance than the average case. it’s pathological.

    the closest synonym to pathological in this context would be “worst-case”, but that would be subtly incorrect, because then I would be claiming that Brinnon is the longest driving distance out of all possible commutes to Redmond within a 50-mile crow-flies bubble. you’d need some fancy GIS software to find that, not just me poking around for a few minutes in Google Maps.
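    the detour-ratio arithmetic above, as a quick sketch (the cutoff in `is_pathological` is my own arbitrary illustration, not a rigorous definition, and the only real numbers here are the Seattle-Tacoma ones from Google Maps):

    ```python
    def detour_ratio(driving_miles: float, crow_flies_miles: float) -> float:
        """How much longer the drive is than the straight-line distance."""
        return driving_miles / crow_flies_miles

    # Seattle-Tacoma, per Google Maps: 37 driving / 25 crow-flies.
    # This is a fairly typical input, landing near the ~1.5 average case.
    typical = detour_ratio(37, 25)
    print(round(typical, 2))  # 1.48

    def is_pathological(driving_miles: float, crow_flies_miles: float,
                        typical_ratio: float = 1.5, factor: float = 2.0) -> bool:
        """An input is 'pathological' (CS sense) when its ratio blows well past
        the average case -- like the ~3.7 for Brinnon-Redmond without the ferry.
        The 2x factor is an arbitrary threshold for illustration."""
        return detour_ratio(driving_miles, crow_flies_miles) > typical_ratio * factor
    ```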

    (and this is the technology sub-lemmy, in a thread about something that will mostly affect software engineers, and planning out a driving commute is a classic example of a pathfinding algorithm…using “pathological” from the computer science context here is actually an extremely cromulent word choice)

    there seems to be a recurring pattern of you responding to me, making up shit I didn’t actually say, and then nitpicking about it. recently you accused me of “trying to both-sides Nazis”. please stop doing that.