The Context Window, Episode 1
None of us expected to have a podcast. That’s probably a good place to start.
Between us, we have more than six decades in the technology industry — one going back to 1998, another to 1999, a third joining in 2015. We’ve watched wave after wave of “next big thing” crest and flatten. Virtual reality in the ’90s. 3D movies. The blockchain gold rush. Each one arrived with world-changing promises and delivered something more complicated — sometimes more, sometimes less, often just different.
So when we talk about AI, we’re not doing it from a place of hype. We’re doing it because something genuinely different is happening, and we want to think about it out loud.
“What AI is doing is legit science fictiony to me. It’s exceeded my expectations — and not in the ways I expected.”
The honest version of where we are
The first episode of The Context Window wasn’t a polished production. It was three people sitting in a room, comparing notes on an industry they’ve spent their careers inside, trying to make sense of a moment that keeps moving faster than anyone can fully track.
One thread kept coming back: the gap between awareness and action. Everyone in the enterprise world is leaning in on AI. Boards are asking about it. Leaders are paying attention. But translating that attention into something concrete — a real use case, a measurable outcome, a strategy that goes beyond “we’re exploring it” — is still the hard part. And it’s where most organizations are stuck.
The two areas where we’re actually seeing traction at the enterprise level right now are code generation and document intelligence. AI-assisted development tools are genuinely changing how engineering teams work, though not always in the ways people expect — it’s not replacing engineers, it’s shifting what they’re paid to do. Writing code is becoming a commodity; writing secure, well-architected code that actually serves your business goals is where the human value concentrates. On the document side, retrieval-augmented tools that can surface buried institutional knowledge — meeting summaries, scattered documents, organizational memory that lives in no single place — are proving genuinely useful at scale.
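To make the retrieval idea concrete, here is a toy sketch of the step those document-intelligence tools perform: scoring stored snippets against a query and surfacing the best match. This is an illustration only — the scoring here is crude keyword overlap, and the snippets are invented; production systems use embeddings and vector search.

```python
# Toy retrieval step: rank stored snippets by word overlap with a query.
# Hypothetical data; real retrieval-augmented tools use embedding search.

def retrieve(query, snippets, top_k=1):
    """Return the snippets sharing the most words with the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        snippets,
        key=lambda s: len(q_words & set(s.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

knowledge = [
    "Q3 planning meeting: budget approved for two new hires.",
    "Incident review: the outage traced to an expired certificate.",
    "Onboarding notes: request VPN access on day one.",
]
print(retrieve("what caused the outage", knowledge))
```

Even this naive version shows why the approach works for scattered institutional knowledge: the person asking doesn’t need to know which document holds the answer.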
The question worth sitting with
One of the more uncomfortable moments in our first episode came when we started talking about democratization — specifically, whether it will last.
Right now, AI is remarkably accessible. A small business owner and a Fortune 500 CTO are working with essentially the same tools. That’s unusual in the history of technology, and it’s producing real value for people who couldn’t have accessed this kind of capability before. But the economics of running large language models at scale are significant, and history suggests that when something becomes expensive and strategically important, it tends to centralize.
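The economics point becomes vivid with a back-of-envelope calculation. All the numbers below — request volumes, token counts, and the per-million-token price — are hypothetical placeholders, not any vendor’s actual pricing; the point is how the same unit price lands very differently at different scales.

```python
# Back-of-envelope LLM inference cost estimate.
# Every figure here is a hypothetical placeholder, not real pricing.

def monthly_inference_cost(requests_per_day, tokens_per_request, usd_per_million_tokens):
    """Rough monthly spend for a given traffic level and per-token price."""
    tokens_per_month = requests_per_day * tokens_per_request * 30
    return tokens_per_month / 1_000_000 * usd_per_million_tokens

# A small business: 200 requests/day, 2,000 tokens each, $5 per million tokens.
small = monthly_inference_cost(200, 2_000, 5.0)

# An enterprise: 1,000,000 requests/day, same per-request size and price.
large = monthly_inference_cost(1_000_000, 2_000, 5.0)

print(f"small business: ${small:,.2f}/month")   # $60.00
print(f"enterprise:    ${large:,.2f}/month")    # $300,000.00
```

At the low end the cost is a rounding error; at the high end it is a line item a CFO notices — which is exactly the pressure that historically drives centralization.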
Key ideas from episode 1
1. AI is already changing software engineering — not by replacing engineers, but by shifting what they’re valued for. Security, architecture, and teaching others matter more than ever.
2. The democratization of AI is real — and fragile. Today’s open access could narrow as inference costs scale and market consolidation continues.
3. Open-weight models are closing the gap. Smaller, locally runnable models may preserve access even if API costs for frontier models climb.
4. The best AI opportunities may not be in AI itself. Just as a hedge fund bet on tires because of self-driving trucks, the real returns might be in what AI makes possible — not AI directly.
5. No one is too far behind to catch up. The field is moving so fast that a skilled engineer who commits six months can become one of the world’s foremost practitioners in emerging sub-fields.
There’s a counterforce worth watching: open-weight models — mostly from companies outside the closed-source giants — are improving rapidly. Techniques like quantization are making capable inference possible on smaller devices. The idea of running a useful AI model on your laptop, without an API call to anyone, is no longer science fiction. That trajectory, if it continues, could preserve access in a way that the current market structure might not.
What this show is actually about
We named it The Context Window for a reason. In AI, a context window is the amount of information a model can hold and work with at one time — the bounded space where thinking happens. We like that framing for what we’re trying to do: hold a lot of ideas at once, give them room to connect, and see what comes out.
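A small sketch makes the “bounded space” concrete: when a conversation outgrows the window, something has to be dropped, and the simplest policy is to keep only the most recent turns that fit. The messages and token counting below are stand-ins — real systems use actual tokenizers, not word counts.

```python
# What a bounded context window means in practice: older turns get
# dropped when the conversation exceeds the budget. Word counts stand in
# for real token counts; messages are invented examples.

def fit_to_window(messages, max_tokens):
    """Keep the most recent messages that fit within the token budget."""
    kept, used = [], 0
    for msg in reversed(messages):
        cost = len(msg.split())  # crude stand-in for a tokenizer
        if used + cost > max_tokens:
            break
        kept.append(msg)
        used += cost
    return list(reversed(kept))

history = [
    "Let's plan the quarterly roadmap.",
    "First draft is attached, please review.",
    "Reviewed. Two sections need rework.",
    "Agreed, I'll revise by Friday.",
]
print(fit_to_window(history, max_tokens=12))
```

With a budget of 12, only the last two turns survive — which is why the metaphor fits the show: the interesting question is always what you choose to hold in the window at once.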
We’re not claiming to have the answers. We’re three people with different vantage points — one deeply technical, one closer to the business side, one somewhere in the middle — who think the conversation is worth having in public. We’ll be wrong sometimes. We expect to look back in a year and cringe at some of our predictions. That’s part of the point.
What we’re confident about is that the rate of change is unlike anything we’ve experienced in our combined decades in this industry. The question isn’t whether AI is going to reshape enterprise operations. It’s how, at what pace, and who gets to be part of shaping it. We’d rather be in the room asking those questions than watching from the outside.
So that’s where we’re starting. Episode 1. Awkward wrap-up and all.