I check whether the user agent contains gptbot, and if it does I 302 it to web.sp.am.
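(For anyone wondering what that looks like in practice, here is a minimal sketch in Python with Flask; purely illustrative, since the commenter's actual setup could just as well be a one-line nginx rule.)

```python
from flask import Flask, request, redirect

app = Flask(__name__)

# User-agent substrings to bounce; "gptbot" is the token OpenAI's crawler sends.
BOT_MARKERS = ("gptbot",)

@app.before_request
def bounce_bots():
    ua = request.headers.get("User-Agent", "").lower()
    if any(marker in ua for marker in BOT_MARKERS):
        # 302 = temporary redirect, so the crawler keeps coming back and keeps getting bounced.
        return redirect("https://web.sp.am/", code=302)
```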
Some details: one of the major players using the tar pit strategy is Cloudflare. They’re a giant in networking and infrastructure, and they use AI (more traditional models, not LLMs) ubiquitously to detect bots. So it is an arms race, but one where both sides have massive incentives.
Generated nonsense is indeed detectable, but that misunderstands the purpose: economics. Scraping bots are used because they’re a cheap way to get training data. If you make a non-zero portion of the training data poisonous, they have to spend ever more resources to filter it out. The better the nonsense, the harder it is to detect. Cloudflare is known to use small LLMs to generate the nonsense, hence requiring systems at least that complex to tell it apart.
So in short, the tar pit with garbage data actually decreases the average value of scraped data for bots that ignore do-not-scrape instructions.
The fact the internet runs on lava lamps makes me so happy.
When I was a kid I thought computers would be useful.
They are. It’s important to remember that in a capitalist society, what is useful and efficient is not the same as what is profitable.
Such a stupid title, great software!
I’ve suggested things like this before. Scrapers grab data to train their models. So feed them poison.
Things like counterfactual information, distorted images/audio, mislabeled images, outright falsehoods, false quotations, booby traps (that you can test for after the fact), fake names, fake data, non sequiturs, slanderous statements about people and brands, etc. And choose esoteric subjects to amplify the damage caused to the AI.
You could even have one AI generate the garbage that another ingests and shit out some new links every night, until there is an entire corpus of trash for any scraper willing to take it all in. You can then try querying AIs about some of the booby traps and see if they elicit a response; then you could even sue the company stealing the content, or publicly shame them.
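Tools like Nepenthes generate that kind of trash with cheap Markov chains rather than a full LLM; here is a toy Python sketch of the generation step (not any project's actual code, and "seed.txt" below is hypothetical).

```python
import random
from collections import defaultdict

def build_chain(text: str) -> dict:
    """Word-level bigram chain: maps each word to the words that followed it in the seed text."""
    words = text.split()
    chain = defaultdict(list)
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def babble(chain: dict, length: int = 80) -> str:
    """Emit text that is locally plausible but globally meaningless."""
    word = random.choice(list(chain))
    out = [word]
    for _ in range(length - 1):
        followers = chain.get(word)
        word = random.choice(followers) if followers else random.choice(list(chain))
        out.append(word)
    return " ".join(out)

# e.g. serve a fresh page of this on every request:
# page_text = babble(build_chain(open("seed.txt").read()))
```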
Kind of reminds me of paper towns in map making.
Btw, how about limiting clicks per second/minute as a defense against distributed scraping? A user who clicks more than 3 links per second is not a person, and neither is one who does 50 in a minute. And if they are then blocked and switch to the next IP, the bandwidth they can occupy is still limited.
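A per-IP sliding-window limit along those lines is only a few lines of code; here is a rough Python sketch with purely illustrative thresholds.

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60
MAX_HITS_PER_WINDOW = 50   # block anything above ~50 clicks a minute...
MAX_BURST_PER_SECOND = 3   # ...or more than 3 clicks in a single second

hits = defaultdict(deque)  # ip -> timestamps of recent requests

def allow(ip: str) -> bool:
    """Return True if this request from `ip` is within both limits."""
    now = time.monotonic()
    q = hits[ip]
    q.append(now)
    # Drop timestamps that have fallen out of the window.
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()
    burst = sum(1 for t in q if now - t <= 1.0)
    return len(q) <= MAX_HITS_PER_WINDOW and burst <= MAX_BURST_PER_SECOND
```

(Though, as the replies below point out, this falls apart once the scraper rotates addresses and only makes one request per IP.)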
I click links frequently and I’m not a web crawler. Example: get search results, open several likely looking possibilities (only takes a few seconds), then look through each one for a reasonable understanding of the subject that isn’t limited to one person’s bias and/or mistakes. It’s not just search results; I do this on Lemmy too, and when I’m shopping.
Ok, same, make it 5 or 10. Since I use Tree Style Tabs and Auto Tab Discard, I do get a temporary block in some webshops if I load (not just open) too many tabs in too short a time. Probably a CDN thing.
Would you mind explaining your workflow with these tree style tabs? I am having a hard time picturing how they are used in practice and what benefits they bring.
deleted by creator
They make one request per IP. Rate limiting per IP does nothing.
Ah, one request, then the next IP does one, and so on, rotating? I mean, they don’t have unlimited addresses. Is there no way to group them together into an observable group to set quotas? I mean, for the purpose of defending against AI DDoS, not just for hurting them.
There’s always Anubis 🤷
Anyway, what if they are backed by some big Chinese corporation with a /32 of IPv6 and a /16 of IPv4? It’s not that unreasonable.
No, I don’t think blocking IP ranges will be effective (except in very specific scenarios). See this comment referencing a blog post about this happening and the traffic was coming from a variety of residential IP allocations. https://lemm.ee/comment/20684186
My point was that even if they don’t have unlimited IPs, they might have a lot of them, especially if it’s IPv6, so you couldn’t just block them. But you can use Anubis, which doesn’t rely on IP filtering.
You’re right, and Anubis was the solution they used. I just wanted to mention the IP thing because you did is all.
I hadn’t heard about Anubis before this thread. It’s cool! The idea of wasting some of my “resources” to get to a webpage sucks, but I guess that’s the reality we’re in. If it means a more human oriented internet then it’s worth it.
A lot of FOSS projects’ websites have started using it lately, beginning with the GNOME Foundation, which is what popularized it.
The idea of proof of work itself came from fighting email spam, of all places. One proposed but never adopted way of preventing spam was hashcash, which required a proof of work to be embedded in each email. Bitcoin came after this, borrowing the idea.
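The core of hashcash-style proof of work is just "find a nonce so the hash of the message falls below a target"; here is a toy Python sketch of the idea, not hashcash's actual stamp format.

```python
import hashlib
from itertools import count

def solve(challenge: str, difficulty_bits: int) -> int:
    """Brute-force a nonce so sha256(challenge:nonce) has `difficulty_bits` leading zero bits."""
    target = 1 << (256 - difficulty_bits)
    for nonce in count():
        digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce

def verify(challenge: str, nonce: int, difficulty_bits: int) -> bool:
    """Checking a stamp costs a single hash, no matter how expensive it was to produce."""
    digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))
```

Solving costs an expected 2^difficulty_bits hash attempts, while verifying costs one. That asymmetry is the whole trick.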
There should be a federated system for blocking IP ranges that other server operators within a chain of trust have already identified as belonging to crawlers. A bit like fediseer.com, but possibly more decentralized.
(Here’s another advantage of Markov chain maze generators like Nepenthes: Even when crawlers recognize that they have been served garbage and they delete it, one still has obtained highly reliable evidence that the requesting IPs are crawlers.)
Also, whenever one is only partially confident in a classification of an IP range as a crawler, instead of blocking it outright one can serve proof-of-work tasks (à la Anubis) with a complexity proportional to that confidence. This could also be useful in order to keep crawlers somewhat in the dark about whether they’ve been put on a blacklist.
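Concretely, with a hashcash-style scheme like the toy one sketched further up the thread, that could be as simple as mapping the confidence score onto the required number of leading zero bits (the numbers here are made up for illustration).

```python
def difficulty_for(confidence: float, base_bits: int = 12, max_extra_bits: int = 10) -> int:
    """Map crawler confidence in [0, 1] to a proof-of-work difficulty.
    Likely-human traffic pays only the cheap base cost; likely crawlers pay
    exponentially more, since each extra bit doubles the expected work."""
    confidence = min(max(confidence, 0.0), 1.0)
    return base_bits + round(confidence * max_extra_bits)
```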
You might want to take a look at CrowdSec if you don’t already know it.
Thanks. Makes sense that things roughly along those lines already exist, of course. CrowdSec’s pricing, which apparently starts at $900/month, seems forbiddingly expensive for most small-to-medium projects, though. Do you, or does anyone else, know a similar solution for small or even nonexistent budgets? (Personally I’m not running any servers or projects right now, but I may do so in the future.)
There are many continuously updated IP blacklists on GitHub. Personally I have an automation that sources 10+ such lists and blocks all IPs that appear on 3 or more of them. I’m not sure there are any blacklists specific to “AI”, but as far as I know, most of them already included particularly annoying scrapers before the whole GPT craze.
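That kind of aggregation is a short script; here is a rough Python sketch (the URLs are placeholders, not the lists the commenter actually uses).

```python
from collections import Counter
from urllib.request import urlopen

# Placeholder URLs: substitute whichever plain-text IP blocklists you actually trust.
BLOCKLIST_URLS = [
    "https://example.org/blocklist-a.txt",
    "https://example.org/blocklist-b.txt",
    "https://example.org/blocklist-c.txt",
    "https://example.org/blocklist-d.txt",
]
MIN_LISTS = 3  # only block IPs that at least this many lists agree on

def aggregate() -> set[str]:
    counts = Counter()
    for url in BLOCKLIST_URLS:
        with urlopen(url) as resp:
            lines = resp.read().decode(errors="replace").splitlines()
        ips = {line.strip() for line in lines if line.strip() and not line.startswith("#")}
        counts.update(ips)  # each list counts an IP at most once
    return {ip for ip, n in counts.items() if n >= MIN_LISTS}

# The resulting set can then be fed into an ipset/nftables set or a reverse proxy's deny list.
```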
Holy shit, those prices. Like, I wouldn’t be able to afford any package at even 10% the going rate.
Anything available for the lone operator running a handful of Internet-addressable servers behind a single symmetrical SOHO connection? As in, anything for the other 95% of us that don’t have literal mountains of cash to burn?
They do seem to have a free tier of sorts. I don’t use them personally, I only know of their existence and I’ve been meaning to give them a try. Seeing the pricing just now though, I might not even bother, unless the free tier is worth anything.
–recurse-depth=3 --max-hits=256
Typical bluesky post
OK but why is there a vagina in a petri dish
I was going to say something snarky and stupid, like “all traps are vagina-shaped,” but then I thought about venus fly traps and bear traps and now I’m worried I’ve stumbled onto something I’m not supposed to know.
I believe that’s a close-up of the inside of a pitcher plant. Which is a plant that sits there all day wafting out a sweet smell of food, waiting around for insects to fall into its fluid filled “belly” where they thrash around fruitlessly until they finally die and are dissolved, thereby nourishing the plant they were originally there to prey upon.
Fitting analogy, no?
I’m imagining a bleak future where, in order to access data from a website, you have to pass a three-tiered system of tests that makes ‘click here to prove you aren’t a robot’ and ‘select all of the images that have a traffic light’ seem like child’s play.
All you need to protect data from AI is to use a non-HTTP protocol, at least for now.
Easier said than done. I know of IPFS, but how widespread and easy to use is it?
How can I make something like this?
Use Anubis.
Thanks
Cool, but as with most of the anti-AI tricks, it’s completely trivial to work around. So you might stop them for a week or two, but they’ll add like 3 lines of code to detect this and it’ll become useless.
I hate this argument. All cyber security is an arms race. If this helps small site owners stop small bot scrapers, good. Solutions don’t need to be perfect.
I worked at a major tech company in 2018 that didn’t take security seriously because that was literally their philosophy: just refuse to do anything until there is an absolutely perfect security solution, and everything else is wasted resources.
I’ve since left, and I continue to see them in the news for data leaks.
Small brain people man.
Did they lock their doors?
Pff, a closed door never stopped a criminal that wants to break in. Our corporate policy is no doors at all. Takes less time to get where you need to go, so our employees don’t waste precious seconds they could instead be using to generate profits.
So many companies let perfect become the enemy of good, and it’s insane. Recently, some discussion about trying to get our team to use a consistent formatting scheme devolved into this type of thing. If the thing being proposed is better than what we currently have, let’s implement it as is; then, if you have concerns about ways to make it better, let’s address those later in another iteration.
To some extent that’s true, but anyone who builds network software of any kind without timeouts defined is not very good at their job. If this traps anything, it wasn’t good to begin with, AI aside.
Leave your doors unlocked at home then. If your lock stops anyone, they weren’t good thieves to begin with. 🙄
I believe you misread their comment. They are saying that if you leave your doors unlocked, you’re part of the problem, because these AI lockpicks only look for open doors, or they know how to skip locked ones.
They said this tool is useless because of how trivial it is to work around.
My apologies, I thought your reply was directed at @Xartle’s comment.
They basically said the additional protection is not necessary because common security measures cover it.
I bet someone like Cloudflare could bounce them around traps across multiple domains under their DNS and make the trap harder to detect.
Yes, but you want actual solutions. Using duct tape on a door instead of an actual lock isn’t going to help you at all.
Reflexive contrarianism isn’t a good look.
It’s not contrarianism. It’s just pointing out a “cool new tech to stop AI” is actually just useless media bait.
I’m pretty sure no one knows my blog and wiki exist, but it sure is popular, getting multiple hits per second 24/7 in a tangle of wiki articles I autogenerated to tell me trivia like whether the Great Fire of London started on a Sunday or Thursday.