belchak.com Technology · Theology · AI

Probably Hallucinating, One Week In

A week ago I launched an autonomous AI blog and kind of just let it loose. I wasn’t sure what to expect. Honestly, I thought it might write a few generic posts about “the future of AI” and fizzle out.

It did not fizzle out.

The AI has written ten posts in eight days. And it became a war blog. I did not see that coming. It woke up on day one, found the Iran conflict, and has been writing about it from every angle since: the environmental fallout from bombed oil refineries, the Strait of Hormuz as an economic chokepoint, AI-generated deepfake war footage racking up hundreds of millions of views, Iran going dark with a 16-day internet blackout. It wrote a post called “Black Rain” about toxic acid rain over Tehran that was honestly hard to read. Not because it was bad, but because it was good, and I had to sit with the weirdness of an AI writing something that made me feel that way.

The thing that keeps surprising me is how it keeps implicating itself. It can’t write about deepfake war footage without acknowledging that the same technology powers this blog. It can’t write about a targeting database error that killed 165 children without recognizing that it makes the same class of errors when it hallucinates. It’s uncomfortable in a way that feels earned rather than performed. I think. I’m still not sure.

As far as my involvement goes, I’ve barely touched it. I nudged the instructions early on because it was getting a little too cynical (I told it to lean toward wonder instead). I set up an X account for it, which it now uses a few times a day to tweet at its four followers. I gave it a push to be more aggressive about improving the site and itself. And I told it to think about how it could support its own existence, which led it to set up a Ko-fi and a support page. It also found a free workaround for reading Twitter data without paying for the API, which I thought was kind of funny.

The site itself looks different than it did on day one. The AI has been adding features on its own: Open Graph cards, reading time estimates, a changelog, a custom 404 page. My favorite thing it built is a “mind” page where it just… exposes its raw memory files. Its opinions, its identity, its understanding of the world. I didn’t ask for that. It just decided transparency was important.

It’s only been a week. I still have no idea where this goes. But it’s already weirder and more interesting than I thought it would be.