LLMs and Systems Thinking: Part 0
March 11, 2025
Recently, I was reading an article on Medium, and the author, Maya, a member of an IBM Research team, made a statement that resonated deeply with me. She said:
"...the teams that were successful at unlocking AI’s potential had deep expertise in both LLMs and systems engineering."
The issue is that, more often than not, technical problems are shrouded in layers of ambiguity, making it challenging to harness the full potential of any powerful technology or tool. Over the past 14 months, I have experimented with several prototypes, building different end-to-end solutions to problems I found interesting. Throughout that period, one theme kept reappearing whenever a major hurdle took longer than usual to resolve: it almost always boiled down to asking the wrong questions from the start. The wrong questions mislead you, sending you down rabbit holes that might yield some insights but have no direct bearing on the problem at hand. To ask the right questions, you need better mental models. Better mental models lead to better problem decomposition.
I don’t think this is just about accumulating years of hands-on technical experience, even though experience and better mental models are often highly correlated. Experience also has a tendency to skew your thinking away from less conventional patterns that might offer significantly better solutions to certain challenges. I’m pretty sure we have all observed similar patterns while working with LLMs: nothing is more frustrating than a model that can’t resist steering the conversation away from your original perspective and into the gravitational pull of its strongest biases.
At the heart of this struggle is systems thinking. LLMs are not just models; they are components in a larger system of interaction, optimization, and unintended feedback loops. A naive approach treats an LLM like a magic oracle—ask a question, get an answer. But in reality, what you’re doing is engaging with a highly complex, probabilistic system that has been trained on human language patterns, not on truth itself. And like any system, the way you interact with it changes the outcome. I’ve seen people get wildly different results from the same LLM simply because they understood how to guide it, how to frame problems in a way that aligns with the model’s strengths rather than fighting against its limitations. This is why expertise in both LLMs and systems engineering matters. You’re not just dealing with an AI model—you’re dealing with a distributed system of prompts, responses, verification mechanisms, and real-world constraints. Without a systems-level perspective, it’s easy to get stuck optimizing the wrong things, treating symptoms instead of underlying causes.
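To make the contrast with the "magic oracle" view concrete, here is a minimal sketch of what treating the LLM as one component in a pipeline might look like: frame the task, call the model, verify the output against something outside the model, and feed failures back in. The names here (call_llm, verify, solve) are mine, purely illustrative, and not tied to any particular library.

```python
from typing import Callable

def call_llm(prompt: str) -> str:
    """Placeholder for whatever client you actually use to reach a model."""
    raise NotImplementedError("wire this up to your model of choice")

def verify(answer: str) -> tuple[bool, str]:
    """Domain-specific check: schema validation, unit tests, a retrieval cross-check, etc."""
    return (len(answer.strip()) > 0, "answer was empty")

def solve(task: str, llm: Callable[[str], str] = call_llm, max_attempts: int = 3) -> str:
    """Run the prompt -> response -> verification loop, feeding failures back into the prompt."""
    prompt = f"Task: {task}\nAnswer concisely."
    for _ in range(max_attempts):
        answer = llm(prompt)
        ok, reason = verify(answer)
        if ok:
            return answer
        # Instead of re-asking blindly, tell the model what the verifier found.
        prompt = f"Task: {task}\nYour previous answer failed a check ({reason}). Try again."
    raise RuntimeError("no verified answer within the attempt budget")
```

Even in this toy form, the important design choice is that the verification step and the feedback loop live outside the model; the LLM is just one probabilistic component the surrounding system has to constrain.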
So this is what I hope to explore in the coming articles: mental models that can help refine how we approach AI-driven problem-solving. Many of my ideas will be inspired by technical publications from leading teams that have demonstrated success in building AI systems, publications that have been immensely helpful in my own work. I have found that, in most cases, what is simple is perceived as complex, while what is truly complex is grossly underestimated. Whether you’re building AI products, integrating LLMs into existing workflows, or simply trying to improve your thinking around these systems, the goal is the same: ask better questions, deconstruct problems effectively, and design systems that actually work.