the heritage foundation penned the “project 2025” document, which is what donny boy has been following to the letter since day one in power.
interesting conclusion, maybe you should publish? you seem to have more info than they did ten years ago.
here’s one. it’s a paper from microsoft research on why so many scammers say they are from nigeria, but the same premise applies:
Far-fetched tales of West African riches strike most as comical. Our analysis suggests that is an advantage to the attacker, not a disadvantage. Since his attack has a low density of victims the Nigerian scammer has an over-riding need to reduce false positives. By sending an email that repels all but the most gullible the scammer gets the most promising marks to self-select, and tilts the true to false positive ratio in his favor.
people just wanted an excuse to say a slur, no matter what it was for. at least that’s what it seems like.
it’s quick, it’s easy and it’s free
i mean i haven’t signed anything…
we’re in web 3.0 now, apis and data access are a thing of the past. so scraping it is!
it’s gonna start counting as prostitution, which is legal to practice but illegal to buy. it only counts when it’s personalised, apparently. i have no idea how they think they’re gonna do it.
unless you’re swedish, where it’s illegal to make requests on of.
yeah this project has been on github for six years and seems to have been closed source before that. it’s a graphical automation tool.
like, everything can be used with ai. github itself has “ai agent” plastered everywhere. it’s just a buzzword. doesn’t mean it’s built specifically for ai.
could be one of those cases where the product predates ai but some c-level asked an engineer “could we use this for ai” and the engineer said “i mean, technically yes” and then marketing changed every single mention of the product
that’s the original pronunciation from the 70s. like “gene”.


i like how everyone got hooked on the cgnat thing when i gave the actual solution in the main post. but yeah there’s always the option of not doing anything until i see issues.


i’ll worry about the nat traversal when i get my bouncer back up, but it will probably be less full-featured than pangolin. previously i just used a reverse ssh setup but that was a bit too rudimentary.
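for reference, the reverse ssh setup i mean is just a remote port forward: the home box dials out to the bouncer, and the bouncer relays a public port back through the tunnel. a sketch of the client-side config (host, user and ports are made up):

```
# ~/.ssh/config on the box behind cgnat — placeholder names throughout
Host bouncer
    HostName vps.example.com
    User tunneluser
    # expose the bouncer's port 8080, tunneled back to local port 80
    RemoteForward 8080 localhost:80
    # keep the tunnel from silently dying
    ServerAliveInterval 30
    ServerAliveCountMax 3
```

then `ssh -N bouncer` (or autossh, for auto-reconnect) holds the tunnel open. it works, but like i said, it’s rudimentary: one port per forward, no tls termination, nothing like what pangolin gives you.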


that’s also a possibility, but i’m going to have to whine to my isp.


as i said i’m getting my bouncer server set back up next year after the datacenter it’s in has finished renovations, so actually getting a public address is not the biggest issue.
theoretically it can absolutely figure out how to do that without it being in the training data.
we know it’s in the training data because of google’s filters, but in principle it could have been generated without anything to draw on, just due to how the thing works.