"Trapping AI" β Slight Update! π
Activity in the "Trapping AI" project is accelerating: in just under a month, over 26 million requests have hit our tarpit URLs π³οΈ. Vast volumes of meaningless content were devoured by AI crawlers β ruthless digital leeches that relentlessly scour and pillage the web, leaving no data untouched.
In the coming days, weβll roll out a new layer of complexity β amplifying both the intensity and offensiveness of our approach. This escalation builds on fakejpeg
, a tool developed by @pengfold.
πΌοΈ fakejpeg
generates fake JPEGs on the fly. You "train" it with a collection of existing JPEGs, and once trained, it can produce an arbitrary number of things that look like real JPEGs β perfect for feeding aggressive web crawlers junk ποΈ.
Explore fakejpeg
: github.com/gw1urf/fakejpeg
Learn more about "Trapping AI": algorithmic-sabotage.github.ioβ¦
See the tarpit in action: content.asrg.site/
GitHub - gw1urf/fakejpeg: Generate files that are almost JPEGs with random data. Possibly useful in feeding aggressive web crawlers.
Generate files that are almost JPEGs with random data. Possibly useful in feeding aggressive web crawlers. - gw1urf/fakejpegGitHub
reshared this
Angus McIntyre
in reply to ASRG • • •@pluralistic
I don't want to poison "AIβ crawlers with huge quantities of random text. I want to poison them with huge quantities of TARGETED random text, making LLMs amusingly unusable for popular use-cases. Imagine the business reports we could make them write:
βQ3 reports from Asia showed positive growth rates in consumer sales and huge hairy cocks, with key indicators including customer retention, brand recognition and turgid purple schlongs all meeting OKR targets.β