What should I write if AI can write everything?
What’s next for Chip Insights
Disclaimer: Opinions shared in this and all my posts are mine, and mine alone. They do not reflect the views of my employer(s) and are not investment advice.
It’s been more than two months since I wrote my last post. One of the reasons I took a pause was to deliberate over a question that just wasn’t leaving me: does a technical newsletter like Chip Insights matter in a world where GenAI keeps getting better? To answer this, I first need to define what “matters” means. The reality is that this newsletter will only ever reach a small fraction of people interested in a very niche topic - so traditional content metrics like views or subscribers make no sense. The only true feedback for my work comes from one question: “Would I read this post if it were written by someone else?” With today’s AI tools, that question really becomes: “Is it easier to generate this post with an AI chatbot than to write it?”
Things we tell ourselves to feel better
One of the most common arguments in defense of humans is that “AI makes mistakes.” I don’t think this is a strong argument to hide behind. Sure, chatbots hallucinate and make mistakes, but so do humans - even the most decorated content creators. As with a lot of technologies we have seen in the past, I’m sure the errors will shrink over time, and AI answers could become a reliable source of information. In my opinion, content creators who claim their work is better because it’s more accurate are choosing to challenge an ever-improving machine backed by trillions of research dollars. Personally, I think that’s a losing battle.
There is another defense of traditional content that has never sat well with me. I’ve seen a lot of arguments that consuming difficult technical content is “inherently noble”, while AI provides processed answers that are “cheats” and will shrink your brain. In my opinion, difficulty, by itself, should never be a virtue to strive for. Before humans discovered fire, digestion was hard and consumed a lot of energy. Cooking made food easier to process, freeing energy for other activities like thinking, building, and evolving. I’m quite sure AI-generated educational content will have a very similar positive effect.
So before dismissing AI, we should acknowledge what it genuinely excels at.
Is AI coming for my posts?
Today’s AI is extraordinary at compressing information. It can digest long, messy documents and explain them in simpler terms. Interestingly, this is what most traditional content creation has centered around. (Including some of my posts, I’ll admit.) I think we are getting very close to the point where the returns from such content won’t justify the investment - it will be so much easier to generate it on demand.
AI is also a fantastic tool for exploring content on a specific topic you are interested in. This has been my favorite use of AI: I can easily get a list of 5-10 sources I want to explore for my research, bypassing the SEO-engineered, often irrelevant links. In doing this, I have found that a few in-depth, engaging sources, along with the AI summary of the topic, are sufficient for my research - I don’t want to jump across multiple links where the information is spread out in pieces.
Essentially, if the purpose of a piece of writing is simply to transfer information from one form to another, or one place to another, AI will inevitably outperform humans on speed, breadth, and efficiency.
So where does that leave me?
The point of this exercise was not to say that AI will replace content creation entirely. Generative AI fundamentally cannot displace certain types of content - and that is exactly what I want to focus on.
My north star for content creation has been the Acquired podcast. For those who don’t know it, the idea behind Acquired is to tell the story of a company, with all of its gory details, in episodes that sometimes run 4 hours. Today’s AI struggles with this kind of analysis, and there are fundamental reasons why it might stay that way for a long time. An LLM lacks a “mental model” - an understanding of concepts like time, hierarchies, and abstractions. Models also gravitate towards a median of possibilities. While this makes AI excellent at flattening complexity, maintaining engagement sometimes requires expanding complexity into a narrative. The failure to do this is what makes AI sound “robotic.” You can get away with a robotic tone when the task is “explain the differences between a CPU and a GPU,” but you need to keep the audience engaged when the goal is to “share a deep dive on the evolution of GPU architectures”.
Another problem with AI content is that it is user-driven - the value of the answer depends on the question you ask. AI might have all the answers, but AI isn’t curious. Good content, therefore, should prompt the audience to ask more questions - questions about bottlenecks, architectural shifts, industry narratives, or places where assumptions are beginning to bend. Questions like these emerge from judgment, from noticing patterns, and from wondering what others might be missing - something AI cannot replicate.
This is what I’ve been thinking about during my break: that the real value of technical content isn’t in being a source of information, but in being a source of interpretation. I’m looking forward to returning to writing with these ideas in mind.

