Level 1: Experimental
Score range: 25–49
At Level 1, AI is something your team tries — not something it relies on. Individual team members may experiment with AI tools, but there’s no consistent approach, no shared practices, and no way to know whether the experimentation is producing value.
What This Level Looks Like
Teams at Level 1 typically describe AI as “hit or miss.” When it works, it’s exciting. When it doesn’t, it’s frustrating — and the team can’t clearly articulate why either outcome happened. Prompting is improvised. There’s no review process for AI output. AI decisions and human decisions aren’t clearly distinguished.
This isn’t a failure state. Every team starts here. The question is whether you stay.
Your Core Risk
The primary risk at Level 1 is invisible poor quality. Without review practices or measurement, AI errors can make it into deliverables, decisions, and communications without anyone catching them. The team doesn’t know its AI output quality because it hasn’t looked.
Your Core Strength
Level 1 teams are still early. You haven’t accumulated bad habits yet. The practices you build now will compound. Starting intentionally at Level 1 is far easier than unwinding entrenched dysfunction at Level 3.
Your Next Move
Pick one AI use case your team does repeatedly. Design a simple, consistent prompt template for it. Use that template for 30 days. Measure whether the output quality is better, worse, or the same as before. You’ve just started building Dimension 1 and Dimension 5 simultaneously.
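As a concrete sketch, the template-plus-measurement loop above might look like this in Python. Everything here is an illustrative assumption — the example task, the template fields, and the 1–5 rating scale are placeholders, not prescriptions:

```python
from datetime import date

# Hypothetical sketch: one reusable prompt template for a recurring task,
# plus a simple log so the team can compare output quality over 30 days.
# The task, fields, and 1-5 rating scale are illustrative assumptions.

TEMPLATE = """Task: Summarize the attached customer call notes.
Audience: account manager, non-technical.
Length: 5 bullet points maximum.
Must include: action items and an owner for each.
Must exclude: speculation not supported by the notes."""

quality_log = []  # one entry per AI interaction

def log_result(rating: int, note: str = "") -> None:
    """Record a 1-5 quality rating for one templated interaction."""
    if not 1 <= rating <= 5:
        raise ValueError("rating must be 1-5")
    quality_log.append({"date": date.today().isoformat(),
                        "rating": rating, "note": note})

def average_rating() -> float:
    """Average quality so far; the number to compare after 30 days."""
    return sum(e["rating"] for e in quality_log) / len(quality_log)

log_result(4, "good summary, missed one action item")
log_result(3, "too long, needed trimming")
print(round(average_rating(), 2))  # → 3.5
```

The point of the sketch is not the code itself but the habit it encodes: the same template every time, and a rating written down after every use, so "better, worse, or the same" becomes a number rather than a feeling.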
One thing: Make one AI interaction intentional this week. Define your outcome before you open the chat window.