Week Notes: Vol. 3 – № 16
The AI prompt problem is actually a people problem
We expect a lot from generative AI tools.
Type a couple of sentences and get back exactly what we had in mind. First try. No clarification needed.
This expectation isn't new, though. It's just moved to a different audience.
Bosses and executives have been handing down loosely worded instructions for as long as there have been people to hand them down to. The assumption was that someone would always figure it out.
Early in my career as a graphic designer, I felt that tension constantly. Clients would give me just enough to get started.
Sometimes I asked the right questions. Sometimes I didn't.
Over time, I learned. I got better at asking and owning the early work.
With the introduction of generative AI tools, we see a way to skip this frustration when we open a chat box. But we forget the problem is with us, the prompter – not the receiver.
A while back, my boss sent me a message asking me to email a client contact I'd never met – ASAP.
She mentioned the person was looking to fill a new role. I should let them know we're available to help however we can, include some context about our consulting services, and mention a mutual connection who'd be a good reference.
A few minutes later I sent the email and copied her. That's when we figured out something had gotten lost in translation.
Nothing major. No fires started.
She owned it immediately. "Totally on me. Was in a hurry and didn't word it well."
That's when it hit me: I'd just played the role of Claude or ChatGPT.
I was handed a prompt. I did my best with what I had. And like most things that come back from a generative AI tool, it was close. But not quite right.
The prompt wasn't clear enough. I didn't have enough context to fill in the gaps on my own.
We've worked together for years, and I still got it wrong.
AI can't read your mind. Neither can people.
So why do we expect to perfectly prompt a bot on the first try?
Anyone who has spent time with chat tools, or a vibe coding platform like Lovable or v0, already knows the answer.
You can't type a couple of sentences and expect exactly what you had in mind to come back.
That's a totally reasonable limitation. We just forget to apply it to ourselves.
Generative AI tools are getting better at this. Memory features, preference learning, context retention.
All of it helps narrow the gap. But familiarity alone doesn't guarantee clarity.
If there's an upside to working with AI instead of people, it's that the feedback loop is much faster. If I had prompted a bot to write that email, knowing the full context, I'd have caught the gaps before it ever left Outlook.
You also don't have to worry about the model jumping ship after you've spent months getting it up to speed.
The lesson isn't that AI is broken. It's that we blame the tool for something we'd never blame on a person we poorly briefed.
Clear communication is hard. It has always been hard. The chat box didn't change that.
Writing process: For full transparency, this was crafted with AI while refereeing a Saturday morning standoff between my five year old who wants Danny Go! and my two year old who wants whatever his brother doesn't.
80% my thinking. 55% Claude's words. 6 draft iterations. Honestly, that's exactly why tools like this exist.