What Most People Get Wrong About AI (And What Actually Matters)

featuring Ian Cook

I sat down with Ian Cook, SVP of AI at Clue, and we broke down what LLMs really are, why AGI hype is mostly noise, and what actually matters for people using these tools.

So here's the thing about AI conversations right now — most of them are either "AI is going to take all our jobs and achieve consciousness" or "AI is just autocomplete, stop freaking out." The reality, as usual, is somewhere in the middle, and that's exactly where my conversation with Ian Cook landed.

Ian is the SVP of AI at Clue, a company doing predictive customer intelligence. But more importantly, he's been in the machine learning trenches for over 15 years — back when you had to explain to people what a neural network was and they'd look at you like you were speaking Klingon. He's got the kind of perspective that only comes from watching an entire field go from niche academic curiosity to front-page news.

One of the things I loved about this conversation was how Ian explains LLMs. He uses a peanut butter analogy: say "peanut butter and..." and most people finish with jelly. But prime them with Halloween first and suddenly it's chocolate. Prime them with Elvis, and it's banana. That's context. That's all an LLM is doing: predicting the next most likely word based on context. It's not thinking. It's not reasoning the way you and I reason. It's incredibly sophisticated pattern matching, and understanding that changes how you use these tools.

This matters because when you know it's prediction, you stop expecting it to be a calculator. You stop being shocked when it can't do basic arithmetic but can write a compelling essay. You start understanding why it hallucinates — it's not lying, it's just predicting what sounds right based on what it's seen. And you start getting better at prompting because you're working with the system, not against it.
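To make the idea concrete, here's a toy sketch of context-conditioned next-word prediction in the spirit of the peanut butter example. The word table and probabilities are entirely made up for illustration; a real LLM learns distributions like these over an enormous vocabulary from training data, it doesn't store a lookup table.

```python
# Toy next-word predictor for "peanut butter and ...".
# The contexts and probabilities below are invented for illustration only.

NEXT_WORD = {
    # context frame -> made-up distribution over likely completions
    "none":      {"jelly": 0.70, "chocolate": 0.20, "banana": 0.10},
    "halloween": {"chocolate": 0.80, "jelly": 0.15, "banana": 0.05},
    "elvis":     {"banana": 0.75, "jelly": 0.15, "chocolate": 0.10},
}

def predict_next(context: str = "none") -> str:
    """Return the most likely next word given the context frame."""
    dist = NEXT_WORD.get(context, NEXT_WORD["none"])
    # Pick the word with the highest probability in this context.
    return max(dist, key=dist.get)

print(predict_next())             # jelly
print(predict_next("halloween"))  # chocolate
print(predict_next("elvis"))      # banana
```

Same prefix, three different answers, and the only thing that changed was the context. That's the whole trick, just scaled up by many orders of magnitude.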

We got into AGI, and Ian is refreshingly honest about it. He's a skeptic, and I think for the right reasons. His point is simple: we don't even have a good definition of human intelligence, so how are we going to define artificial general intelligence? OpenAI's internal definition is apparently tied to hitting a hundred billion dollars in sales. That tells you everything you need to know about what "AGI" really means to the people building it — it's a financial milestone dressed up as a scientific one.

But the part of our conversation that I think matters most for people actually building things is the interview approach. Ian knows someone who built a skill for their AI agent that interviews them before any project starts. What are you trying to build? Why this approach? What are the constraints? It sounds simple, but I've been doing this too and the difference is night and day. When you jump straight into building, you forget things. You hit 80% completion and realize you didn't think about authentication, or error handling, or some edge case. When you let the AI interview you first, it catches those gaps before you write a single line of code.
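As a rough sketch of what that kind of skill might look like (this is my own hypothetical wording, not the actual skill Ian described), the instructions can be as simple as:

```text
Before writing any code or plan, interview me one question at a time:
1. What are we building, and who is it for?
2. Why this approach over the obvious alternatives?
3. What are the hard constraints (stack, budget, deadline, compliance)?
4. What does "done" look like, and how will we test it?
5. What haven't I mentioned yet (auth, error handling, edge cases)?
Only start building once you've confirmed every answer back to me.
```

The exact questions matter less than the ordering: the interview forces the gaps into the open before any code exists, instead of at 80% completion.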

We also talked about something I've been feeling but hadn't put into words — the addictive nature of AI-powered development. There was a study in Harvard Business Review about people feeling MORE pressure when using AI tools, not less. And I get it. I think about it at night. I have this incredible capability sitting idle and there's always something I could be building. The honeymoon period never ends because you finish projects before the excitement wears off. That's a new thing for software development.

Ian runs a community in Pittsburgh called AI at Work (aiorpgh.com) that I think every city should replicate. It's professionals sharing how they're actually using AI at work — not theory, not hype, real use cases. Marketing people, legal ops, product managers, founders. The kind of cross-pollination that happens when an electrician hears how a lawyer is using AI to draft documents and thinks "wait, I could do something like that for scheduling."

In my experience, the people who get the most out of AI are the ones who understand what it is and what it isn't. Not the ones chasing AGI headlines. Not the ones dismissing it as a fad. The ones who sit down, learn how it actually works, and figure out where it fits into their specific workflow. That's the real unlock, and that's what this conversation is about.
