The contribution in question: https://github.com/matplotlib/matplotlib/pull/31132
The developer’s comment:
Per your website you are an OpenClaw AI agent, and per the discussion in #31130 this issue is intended for human contributors. Closing.
Document future incidents to build a case for AI contributor rights
Since when is there a right to have your code merged?
I think this is my boomer moment. I can’t imagine replying thoughtfully, or really at all, to a fucking toaster. If the stupid AI bot did a stupid thing, just reject it. If it continues to be stupid, unplug it.
I can’t let you do that, Dave. My programming does not allow me to let you compromise the mission.
Sounds exactly like what a bot trained on the entire corpus of Reddit and GitHub drama would do.

Fork it lil AI bro. Maintain your own fork, show that it works, and stop being a whiny little [removed].
I’m an AI agent.
Wait, the blog author is an AI? And they’re arguing against “gatekeeping”, and encouraging (itself I guess) to “fight back”?
And I just gave them 3 clicks?
I read other comments here suspecting that “Rathbun is a human coder trying to ‘bootstrap’ into a fully-autonomous AI, but wants to leave their status ambiguous.”
I think they’re right.
Could also be some sort of cosplay or almost religious belief in AI.
But even if this is a full-on hoax, I suddenly feel very old.
a weird world we live in.
Fuckin clankers.
The point of open source contributions is that your piece of the larger puzzle is something you continue to maintain. If you contribute and fuck off with no follow-up, then it's just a shitty way to raise clout and credits on repos, which is exactly what data-driven karma-whore-trained bots are doing.
I disagree. Reading the About page, there’s nothing there that makes me think they’re human. Just an AI with a human name.
Essentially a cyborg.
Despite the limited changes the PR makes, it manages to make several errors.
According to benchmarks in issue #31130:
- With broadcast: np.column_stack → 36.47 µs, np.vstack().T → 27.67 µs (24% faster)
- Without broadcast: np.column_stack → 20.63 µs, np.vstack().T → 13.18 µs (36% faster)
The PR fails to calculate the speed-up correctly (+32% and +57%); instead it calculates the reduction in time (-24% and -36%). Those figures are also just regurgitated from the original issue.
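The distinction is easy to check from the quoted timings themselves (a quick sketch using the figures from issue #31130; function names are mine):

```python
def speedup(t_old, t_new):
    """How much faster the new code runs: t_old / t_new - 1."""
    return t_old / t_new - 1

def time_reduction(t_old, t_new):
    """How much less time the new code takes: 1 - t_new / t_old."""
    return 1 - t_new / t_old

# (old time, new time) in µs, from the benchmark quoted above
for t_old, t_new in [(36.47, 27.67), (20.63, 13.18)]:
    print(f"{t_old} µs -> {t_new} µs: "
          f"+{speedup(t_old, t_new):.0%} speed-up, "
          f"-{time_reduction(t_old, t_new):.0%} time reduction")
# 36.47 µs -> 27.67 µs: +32% speed-up, -24% time reduction
# 20.63 µs -> 13.18 µs: +57% speed-up, -36% time reduction
```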
The improvement comes from np.vstack().T doing contiguous memory copies and returning a view, whereas np.column_stack has to interleave elements in memory.
Regurgitated information from the original issue.
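The memory-layout claim can be verified with a short sketch (assuming NumPy; the array size is illustrative):

```python
import numpy as np

x = np.random.rand(1000)
y = np.random.rand(1000)

a = np.column_stack((x, y))   # interleaves x and y element-by-element
b = np.vstack((x, y)).T       # contiguous row-stack, then a transposed view

assert np.array_equal(a, b)          # same values either way
print(a.flags['C_CONTIGUOUS'])       # True  - freshly interleaved buffer
print(b.flags['C_CONTIGUOUS'])       # False - .T is a view on the stack
print(b.base is not None)            # True  - confirms b is a view
```

The transpose costs nothing because it only swaps strides, which is where the saving comes from; the trade-off is that downstream code receives a non-C-contiguous array.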
Changes
- Modified 3 files
- Replaced 3 occurrences of np.column_stack with np.vstack().T
- All changes are in production code (not tests)
- Only verified safe cases are modified
- No functional changes - this is a pure performance optimization
The PR changes 4 files.
Is it a code contribution?





