Writing with AI

Do I write this blog with the help of AI? Yes, there's no denying that. However, it's important to distinguish whether I use it to write complete blog posts or rather as a handy tool to help me finish them.

Let's start with the first case. This is my personal blog, so I have very little motivation to use ChatGPT to write the content for me. There's no audience I'm trying to convince, nor am I driven to rank higher on search engines or engage in content farming. Writing a blog post is a personal process for me.

That doesn’t mean I shy away from using AI, though. Quite the opposite. With English not being my mother tongue, I frequently turn to ChatGPT for proofreading and grammar assistance. It's a language model, after all. Another area where I often seek help is when I get stuck. With ChatGPT being just a few keystrokes away, it's easy to validate my thinking, ask for direction, or find inspiration. But that's where it stops.

Despite the ability to adjust prompts to mimic certain writing styles, I still find the texts generated by AI to be - how to put it - artificial. Although my writing is (and most likely always will be) far from the brilliance of the titans of the literary world, I dislike the shallow content flooding the online world in pursuit of search engine optimisation. Of course, it's easy to say this when it comes to a personal blog. With content produced for business purposes, it might be easier to slip up. Nevertheless, it comes down to personal integrity.

Maybe all of this is because I hold the written word in high regard. And most likely because I don't have to face the (harsh) reality of writing as a primary source of income.

I won’t deny that I’ve experimented more with ChatGPT as a writing tool for my technical blog. There, I’m perhaps a bit less sensitive about tone and a bit more focused on rankings. The challenge lies in relying on AI for technical reasoning. It's tempting to let LLMs supply extra context, generate code snippets, or guide the writing process. The real issue arises when the text goes beyond basic structure, requiring deeper reasoning. I learned this the hard way when I was writing about similar products from two different vendors, only to find ChatGPT happily blending the two contexts when asked to make unrelated changes.

Does that mean I view LLMs negatively? Not at all. Since ChatGPT’s release, I’ve stopped using freelance proofreaders or software like Grammarly. ChatGPT has become a primary tool I use daily, much like iA Writer or DeepL.

P.S.: While I tried to keep this post less about ChatGPT specifically and more about LLMs generally, it’s hard to ignore OpenAI's dominance at the time of writing.