I was six years old when I first showed up at the Sue Duncan Children's Center on the South Side of Chicago. They didn't have space for me. I kept showing up anyway until they let me in.
That children's center changed my life. I saw that education wasn't just learning—it was the way out. By age 15, I was programming computers. The year was 1969.
I became a tutor at that same children's center, spending decades helping kids who needed what I'd needed: a way out through education. I tutored Arne Duncan, the director's son and already academically strong, who went on to become U.S. Secretary of Education, and the late Michael Clarke Duncan, who became an actor. Most of the kids I worked with didn't become famous. But for many, education was their ladder up, just as it had been mine.
I'm telling you this because it explains why I do what I do now.
From Punch Cards to AI
That 15-year-old learning to program in 1969 didn't know he'd spend the next five decades in technology, engineering software systems used around the world.
I engineered banking systems on mainframes when many people had never used an ATM. I built e-commerce systems as the internet transformed how we shop. I led cloud platform development as computing moved from data centers to everywhere. And for the past decade, I've been deep in AI, applying machine learning in healthcare at Optum, one of America's largest healthcare companies.
Along the way, I worked at IBM (where I became the company's first Black Distinguished Engineer and later an IBM Fellow), served as a Senior Executive at Google, and was VP/CTO at Cisco. I hold 20+ patents. I co-invented architectural techniques that changed how enterprise software is built.
In 2023, I was inducted into the National Academy of Engineering. In 2025, I was inducted into the National Inventors Hall of Fame.
But here's what matters more than any of that: I've been through this movie before.
Every Technology Follows the Same Pattern
I've watched the same cycle repeat six times now:
The Promise: "This technology will change everything!"
The Hype: Vendors make sweeping claims. The media amplifies the extremes. Fear and excitement both run wild.
The Reality: Some things actually work. The gap between promise and delivery is enormous.
The Aftermath: We figure out where the technology actually fits and how to use it well.
I saw it with client-server computing. With the internet. With mobile. With cloud. And now, with AI.
The vendor pitches follow the same pattern. The breathless headlines amplify the same extremes. The gap between what's promised and what's delivered remains enormous.
But AI is different in two ways: the fear is more intense, and some of that fear is justified.
Previous technology hype cycles were mostly about overblown promises. With AI, we're also dealing with manufactured panic about AI itself becoming a threat to humanity—which distracts from the real risks: how people and organizations will misuse these systems.
Algorithmic bias. Surveillance. Misinformation at scale. Deepfakes. Authoritarian control. And yes, job disruption, but not the way most headlines describe it. AI threatens jobs focused on creating assets (documents, images, code) far more than jobs focused on solving problems in complex, real-world contexts.
So my job isn't just to deflate the hype. It's also to help you separate science fiction fears from genuine concerns, understand which risks actually apply to your situation, and focus on what you can do about them.
After five decades of building these systems, not just observing them but making them work in production, I know how to spot the difference between technology that will matter and technology that's just marketing. And I know how to think clearly about what can go wrong.
Why I'm Writing Now
AI is the most hyped technology I've seen in my entire career. The claims are more extreme. The fear is more intense. And the gap between reality and rhetoric is wider than ever.
But this time, the stakes feel higher to me.
Because AI isn't just affecting enterprise IT departments. It's affecting:
Workers wondering if they'll have jobs in five years
Parents trying to decide whether their kids should use ChatGPT for homework, and how to help those kids navigate careers in a changing world
Healthcare professionals being told to trust AI diagnostic tools
Everyone trying to separate real threats and risks from science fiction
These are the same questions that mattered when I was tutoring kids at the Sue Duncan Children's Center: Who gets access? Who gets left behind? Who gets the truth?
I spent my career in the rooms where technology gets built and where the decisions get made, at IBM, Google, Cisco, and Optum. I've seen the vendor pitches before they become press releases. I've watched AI systems work (and fail) in production. I know the gap between the demo and the deployment.
Now I want to share what I know with people who need it most: everyday people trying to make smart decisions about AI and their futures.
Not with more hype. Not with doom predictions. Just clear thinking from someone who's built systems and lived through the cycles.
That's what this is about.