Inspired by the comments on this Ars article, I’ve decided to program my website to “poison the well” when it gets a request from GPTBot.
The intuitive approach is just to generate some HTML like this:
<p>
<!-- Twenty pages of random words -->
</p>
(I also considered just hardcoding twenty megabytes of “FUCK YOU,” but that’s a little juvenile for my taste.)
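A rough sketch of that generation step, assuming a Python backend and the stock Unix wordlist path (both are assumptions on my part, adjust to taste):

import random

# Load a wordlist (this path is the common Unix default; adjust
# for your system) and emit pages of randomly sampled words.
with open("/usr/share/dict/words") as f:
    WORDS = [w.strip() for w in f if w.strip().isalpha()]

def garbage_html(n_words: int = 10_000) -> str:
    # random.choices samples with replacement, so any wordlist size works
    return "<p>" + " ".join(random.choices(WORDS, k=n_words)) + "</p>"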
Unfortunately, I’m not very familiar with ML beyond a few basic concepts, so I’m unsure if this would get me the most bang for my buck.
What do you smarter people on Lemmy think?
(I’m aware this won’t do much, but I’m petty.)
You should probably change the page content entirely, server-side, based on the user agent or request IP.
Using CSS to change layout based on the request has long since been “fixed” by smart crawlers. Even hacks that use JS to show/hide content are mostly handled by crawlers.
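For the IP route, a minimal sketch in Python (the range below is a placeholder; OpenAI publishes GPTBot’s actual egress ranges in its docs, so pull those rather than hardcoding):

import ipaddress

# Placeholder network (TEST-NET-1); substitute the egress ranges
# OpenAI publishes for GPTBot, ideally refreshed periodically.
GPTBOT_RANGES = [ipaddress.ip_network("192.0.2.0/24")]

def is_gptbot_ip(remote_addr: str) -> bool:
    addr = ipaddress.ip_address(remote_addr)
    return any(addr in net for net in GPTBOT_RANGES)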
I won’t be using CSS or JS. I control the entire stack, so I can do a server-side check: GPTBot user agents get random garbage, everyone else gets the real deal. Obviously this relies on OpenAI not masking their user agent, but I think webmasters would notice a conspicuous lack of hits if they did that.
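A minimal sketch of that check, assuming a Flask app (the framework is just a stand-in; the same branch works in any server-side stack):

from flask import Flask, request
import random

app = Flask(__name__)

WORDS = ["lorem", "ipsum", "dolor", "sit", "amet"]  # stand-in; use a real wordlist

@app.route("/")
def index():
    # Server-side branch on the user agent: the crawler simply receives
    # a different document, nothing for it to un-hide client-side.
    if "GPTBot" in request.headers.get("User-Agent", ""):
        return "<p>" + " ".join(random.choices(WORDS, k=10_000)) + "</p>"
    return "<p>The real page content.</p>"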