- cross-posted to:
- technology@lemmy.world
Something that some coworkers have started doing that is even more rude, in my opinion, as a new social etiquette, is AI-summarizing my own writing in response to me, or just outright copy-pasting my question into GPT and then pasting the answer back to me.
Not even an “I asked ChatGPT and it said”; they just dump it in the chat @ me.
Sometimes I’ll write up a 2-3 paragraph thought on something.
And then I’ll get a ping 15 minutes later, go take a look at what someone responded with, annnd… it starts with “Here’s a quick summary of what (pixxelkick) said! <AI slop that misquotes me and just gets it wrong>”
I find this horribly rude tbh, because:
- If I wanted to be AI summarized, I would do that myself damnit
- You just clogged up the chat with garbage
- like 70% of the time it misquotes me or gets my points wrong, which muddies the convo
- It’s just kind of… dismissive? Like, instead of just fucking reading what I wrote (and I consider myself pretty good at conveying a point), they pump it through the automatic enshittifier without my permission/consent and dump it straight into the chat, as if that is now the talking point instead of my own post one comment up
I have had to very gently respond each time a person does this at work and state that I am perfectly able to AI-summarize myself on my own, and that while I appreciate their attempt, it’s… just coming across as wasting everyone’s time.
I hate people so fucking much
Oof, I don’t even get what they’re trying to accomplish there. Maybe they had some kind of social training that told them “Summarize what you understood first, to show that you listened and to avoid miscommunication, then add your response,” and their brain short-circuited and started to think a ChatGPT summarization is the same thing.
I’d get pretty hostile at work if someone started to do that…
I’d leave the “appreciate their attempt” part out. You don’t.
More importantly, I would enquire whether they use the corporate or the free AI. The second one is used for training and has little or no protection for (perhaps sensitive) corporate info/data.
I think at some point it will come out that the corporate subscription is no different and the LLM companies have been scraping everything for training data.
We have extensive corporate AI systems (software engineers), we have an entire wing of our company dedicated to AI exploration and development.
I already think it’s insulting when people accomplish/do/implement/… something, want to inform the others, and do so by generating a 1-2 page wall of text via LLM that is then copy-pasted into an email…
Like… Can’t you just write down the 5 or 10 most important points? Are we not worth the time to do so? Do we have to find the most relevant information ourselves in that text???
You’re supposed to feed it into your own prompt to summarize it duh. /s
Soon we will live in a world where my AI talks to your AI 😅
I sometimes use LLMs to help me with brevity or clarity. But the input is my own words and the output is almost always edited so that I sound like me because sometimes, while the output is serviceable, it’s just… bad and uninspired.
Plus it’s like “this doesn’t sound professional”. Well, fuck you, it sounds how I want to sound.
You should learn how to write better instead of relying on slop.
Who said I rely on it? I accept suggestions when they are good, even if the source of the suggestions is a slop generator. I accept what it is right about and reject what is wrong. And why not? It costs nothing.
And, at 52, I write the way I write. I enjoy the process, I enjoy playing with language. I enjoy the juxtaposition of literary flourishes with a crude fuck you thrown in as punctuation and counterpoint to what might otherwise seem inaccessible or deliberately obtuse.
But do you know what I’ve found? I can be a little overly self-indulgent. For example, you didn’t want all this, you just wanted to throw your glib little “lrn2write” and garner a few upvotes from the vehement AI haters and give yourself a self-righteous pat on the back.
Sometimes I need another perspective to suggest restraint. As you can see, this, like 98% of my writing, is mine alone, else I’d’ve taken what would undoubtedly be good advice and held back on the more acerbic bits, and made sure I wasn’t posting some knee-jerk defensive self-indulgent 100% man-made slop.
But here we are.
The environmental cost is enormous.
It doesn’t expose you to actual creative writing, either. Like people go to museums to see Picasso. This is the equivalent of the art on the wall at Olive Garden.
The environmental cost is not enormous, because I’m not using it on a massive scale. I kill more of the environment playing PlayStation than I do using AI. I also don’t use AI to get exposure to art; I use it to critique my writing from a different perspective. I’m a good enough writer to know what feedback to accept and what not to, same as when I use it for programming: I’m an expert with 30 years of experience and I can evaluate the quality of the code. However, sometimes it points out things I’ve missed, because I’m also human and make mistakes.
I’m a better writer than 95% of the planet, which isn’t good enough to be a professional, but is more than sufficient for social media. However, I do appreciate an outside perspective, and believe me, I am well able to recognize bullshit feedback that doesn’t align with my style or intent. I also rarely use it for that in any event, because I rarely look at what I’ve written and sense that I’m missing something, but it does happen.
How do you know it’s a good suggestion if you don’t know what you’re doing? Think for yourself; stop trusting slop.
And, at 52, I write the way I write.
Apparently not. Now you write the way a slop generator tells you to write.
The best way to learn to write is to write and have someone critique you. That someone can be an AI; it doesn’t change anything about the process, as long as the initial input is your own best effort and the final result is your own edit based on the feedback you received.
Totally agree. When someone sends me some AI slop about a topic I have knowledge of (this happened to me recently during a debug session) and asks me to read it, I think to myself, “this person does not respect me; otherwise they wouldn’t be telling me to read stuff that may or may not be accurate, and that they themselves never read.” It’s like a new, worse version of “let me google that for you”, but without the sarcasm, and without the results actually being helpful.
Whenever someone at work says “ChatGPT says this” or “Claude says this” or “I asked Gemini and…” whatever they say after that point is just static and I never take them seriously as a person again.
As a source, it’s rude. As a piece of unreliable help of the “we both don’t know the syntax of that programming language, let’s ask Ollama how to draw such-and-such a shape in it” kind, it’s kinda fine.
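For what it’s worth, that kind of low-stakes query doesn’t even need a chat window. A minimal sketch against Ollama’s local REST API might look like the following, assuming a server running on the default port and some model like “llama3” already pulled (the model name and the question are just placeholders):

```python
import json
import urllib.request

# Ask a locally running Ollama server a throwaway syntax question.
# Assumes `ollama serve` is listening on the default port 11434 and
# that the named model has already been pulled -- adjust as needed.
payload = {
    "model": "llama3",                                  # placeholder model
    "prompt": "How do I draw a filled circle in SVG?",  # placeholder question
    "stream": False,                                    # return one JSON object
}
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```

Nothing sensitive leaves your machine that way, which also sidesteps the corporate-vs-free data-protection question raised above.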
red flag
@lemmydividebyzero This happened to me at work. They are really pushing Copilot on us.
To drive up fake usage numbers, to justify to shareholders the bubble they created.
The new manager of my building did it, and it was all unactionable garbage. My direct manager showed me, and he and the other managers used AI to generate the response to it.
I hate how AI is used to make deep fakes, revenge porn, CP - and people tolerate it because “they’re working out the issues.”
How about they work those out BEFORE they give people access to these tools.
They tolerate it because it’s easy, they can copy-paste, and they need even less critical thought about the output than having to search for and choose what might be a viable source of decent information.
The issues aren’t bugs. They’re acceptable flaws in the search for investment capital.
It’s more about post/message size for me. If ya post a few sentences that clearly and concisely communicate a point, I don’t really care if they’re crafted or generated. If ya post a wall of text, I wanna know ya put the kind of effort in that made its length necessary if I’m gonna put in the effort to read it.
ChatGPT especially is awful for that. You ask it something and it always spews out a whole page of content.
At work we use Claude, which does produce better output and also calls out your bullshit. There it’s actually helping quite a bit (software development), but of course you have to understand what you are changing and clean things up.
Aye, Anthropic is head and shoulders above everyone else on guidance, largely because they focus entirely on text/code. They’re not simultaneously developing image, video, and audio generators. Even Claude’s voice is just an 11Labs model. Plus I get the impression they’re just smarter about what they choose to research and how they use that info to improve the model.
My boss can’t wrap his head around why handing me a direct printout of LLM output is not acceptable
My coworkers are doing this to me. They’re even pasting it into PR reviews. The threat is real.
It’s even better when they copy-paste slop answers that are flat out wrong without bothering to check.
hi friends i hope you’re well.
i worked a laborious job and experienced a phenomenon i refer to as “parasitic thought:” it is where someone will provide to you all of the information that a person would require to reach the correct conclusion, and then stare at you. they want you to crunch the info for them.
i feel like one of those parasites in my agent interactions. i know i COULD think, but you can do it too, lil buddy. go on. do it for me.
i don’t know about “reasonable” or “ethical” or “polite,” but in my experience: if someone just regurgitates some clank clank slop slop, it reads as hostile. “i can’t be bothered to communicate with you, here, read this wall of gpt-vomit”
my instinct is to copy and paste, “LLM agent of my choice, what’s this person trying to say to me?” and then skim the ai synthesized summary of the ai composed body text generated from some idiot’s faint echoes of thought.
in the words of your high school biology teacher, the human is the powerhouse of the agentic loop. in my unimportant opinion, responsible use of genai agents means that the output should be indistinguishable from, if not better than, something you wrote by hand.
there are privacy implications. linguistic assessment can be used to identify you. from a privacy perspective, the internet would be better off if everyone fed their carefully formed thoughts to an LLM and said “make this look like chatgpt 3 wrote it.”
I asked chatgpt and it told me:
Wrong network configuration
i think the idea of blocking someone over that is pretty over the top
A few days ago a friend linked me a Danish research paper and claimed it showed that higher wages for women led to a decrease in children being born, and that higher male wages led to the opposite. I don’t have the skills required to parse this kind of paper quickly, nor an understanding of a lot of the terminology. I told ChatGPT to read it and contrast it with the arguments being made, and it responded by pointing out that the term “marginal net-of-tax wage” means something different from “wage”, and that the paper suggested it was tax laws incentivizing working more hours that led to lowered fertility, not higher salaries for women. I was asked to point out exactly where in the paper it said that, and again I had to lean on the LLM to get me page numbers. I eventually convinced my friend that he’d been duped by right-wing talking points and got him to think a bit.
So, if I hadn’t done that and had just read the conclusion of the paper, I’d probably have had to agree with him, since just googling it led to the right-wing trolls making those claims. Was this a good use case for an LLM, to get me some counterarguments, or would it have been better if I’d stayed true to my ideals and not used those tools? Was I rude for arguing against a point made about research that neither of us understood from the get-go, by using genAI to parse through it? While I do agree that the companies developing those tools are evil and need to be stopped, there is a utility to them that I don’t think is available elsewhere.

Would losing that argument and coming to believe that women should have lower salaries to increase fertility (because I believe in science, and this paper seemed to be referenced a lot; also, if anything, capitalism would be to blame, so probably not as bad) have been better than normalizing the use of the devil-tech but leaving myself and my friend better informed? I am legitimately not sure, but I think I did the right thing? What should I have done? I don’t have the skills nor the time nor the will to read scientific papers outside my area of expertise, especially when the person linking them didn’t do any research either.

I am also genuinely exhausted from defending my left-wing points of view against the constant barrage of underhanded and often completely baseless arguments some of my coworkers and friends make to convince me I’m wrong and the default consensus is right. Is it bad to use genAI to figure out some counterpoints? Or should I give up and admit I’m not good or committed enough to make them myself? Right-wing people often argue in bad faith and don’t take counterpoints to heart, but sometimes they do, even if the original point was made just to rile me up. So, am I the asshole? Am I wrong? I seriously don’t know.
a layperson cannot be relied upon to draw meaningful conclusions from a scholarly article. i learned this when i tried to do it. have you ever tried to read a spanish book, without knowing spanish, with nothing but an english-spanish dictionary? it’s very slow going and it works alright until someone speaks in idiom or metaphor, but even then you can mostly still get it. this is not always the case with scholarly articles.
moreover, it’s a waste of time. if it takes you 30 hours to look up every term and graph, but it would have taken your biology friend 20 minutes to synthesize it for you, there’s an obvious solution here. if an LLM can save you 30 hours, and your biology friend 20 minutes, it’s a useful tool.
it’s not, no. it’s fine