It stands to reason that if you have access to an LLM’s training data, you can influence what comes out the other end of the inscrutable network. The obvious guess is that…
There are poisoning scripts for images, where some random pixels get totally nonsensical, erratic colors. Humans won’t really notice at all, but it can throw a model badly off.
However, I don’t know how to poison text well without significantly ruining the original article for human readers.
Ngl, image poisoning should be widely advertised to independent artists imo.
Replace all upper-case I with lower-case l and vice versa. Sprinkle zero-width characters randomly throughout. Use white text instead of line breaks (and make it weird prompts, too).
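A minimal sketch of the I/l swap and zero-width tricks described above, purely illustrative (the function name, rates, and character choices are my own, not from any real poisoning tool):

```python
import random

# Zero-width characters that render invisibly but survive copy-paste.
ZERO_WIDTH = ["\u200b", "\u200c", "\u200d"]  # ZWSP, ZWNJ, ZWJ

def poison_text(text: str, zw_rate: float = 0.1, seed: int = 0) -> str:
    """Swap 'I' <-> 'l' and randomly insert zero-width characters."""
    rng = random.Random(seed)
    # The swap: text looks almost identical in many fonts,
    # but tokenizes completely differently.
    swapped = text.translate(str.maketrans({"I": "l", "l": "I"}))
    out = []
    for ch in swapped:
        out.append(ch)
        if rng.random() < zw_rate:
            out.append(rng.choice(ZERO_WIDTH))
    return "".join(out)

print(poison_text("I look like a normal line."))
```

Note that screen readers and other assistive tech will hit the swapped letters and invisible characters too, which is exactly the accessibility cost raised below.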
Somewhere an accessibility developer is crying in a corner because of what you just typed.
Edit: also, please please please do not misuse alt text to wrongly “tag” images. Alt text is important for accessibility! Thanks.