I can recall about two years ago when I first started exploring generative AI.
I had drafted an email to a colleague overseeing technology projects. The email was simply to ask what our company was doing around AI and whether I could be involved in some capacity.
That email, just a few lines long, took me 30 minutes to write.
I was deeply intimidated by the idea of joining the conversation. I didn’t have the technical skills. I didn’t have the standing within the company. I couldn’t even articulate a clear outcome I was seeking.
But my curiosity outweighed my fear.
I hit ‘Send,’ formally marking the beginning of my deep dive into generative AI.
Since then, I’ve learned a tremendous amount. But the intimidation hasn’t gone away. If anything, it grows a little each day I read a new post by a brilliant engineer or explore another innovative tool.
It has grown. But so has my capacity to manage it, even embrace it.
That’s what this post is about: recognizing intimidation at different stages of AI understanding, and learning how to use it to feed curiosity and guide more systematic learning.
Intimidation across key phases of understanding
When I first began to consider exploring AI, everything felt too technical. AI had always seemed like the product of secretive R&D labs, developed by brainiac engineers and scientists at companies I had no proximity to.
I worried about getting it wrong, about not being smart enough to keep up, and about whether I belonged in the space at all.
Eventually, I worked through that initial barrier and came to understand just how extraordinary—and accessible—the technology had become.
The intimidation didn’t vanish. It just changed shape.
When I started using AI for practical tasks, it became something subtler.
Knowing just enough is dangerous. I started feeling a strange embarrassment about using AI so simply while others online were building agents and deploying code. I wasn’t using it anywhere near its full potential, and I knew it. (I’m still not, by the way.)
As I moved into more systematic experimentation, the intimidation shifted again, but this time in a more personal direction.
Once I began testing prompts with focused outcomes, I started to notice where my own thinking was fuzzy or incomplete.
AI didn’t just produce outputs. It reflected back the quality of my inputs. If my logic was unclear, AI wasted no time revealing it. That intimidation no longer felt technical; it felt like personal inadequacy.
Later, when I began working with AI more as a creative partner—co-developing frameworks, reviewing drafts, brainstorming problem-solving approaches—the discomfort sharpened again.
What if it generated something better than I could? What did that mean for me as a contributor? Was this what being replaced actually felt like?
Embracing intimidation
Now, the intimidation is less intrusive, but no less real.
I still get overwhelmed by new tools and by the thinking of others who seem years ahead.
But now, that feeling is more of a signal than a barrier. It tells me I’m entering new territory. It reminds me that I’m being led by questions I don’t yet know how to answer.
Managing intimidation isn’t about building confidence. It’s about relentlessly feeding curiosity.
Experiment → fail → reflect → repeat
Embrace the process. Be fearless in executing it.
This mantra, I believe, is essential on the path toward AI stewardship.
- the AI civilian