Level 2: Individualized
Score range: 50–74
At Level 2, AI is working for some people on your team — but not consistently, and not as a team capability. Individual contributors have found approaches that work for them. Those approaches haven’t spread.
What This Level Looks Like
Level 2 teams have uneven AI capability. One person produces excellent AI-assisted work; a teammate using the same tools produces poor work. The gap is invisible because there is no shared standard to compare against.
There may be informal conversations about AI (“here’s what I tried”), but no documentation, no shared library, no review process. Knowledge stays with whoever developed it.
Your Core Risk
The primary risk at Level 2 is capability concentration. If your strongest AI practitioner leaves, retires, or changes teams, that knowledge leaves with them. The team’s AI capability is fragile — dependent on individuals rather than embedded in practice.
A secondary risk: outputs from your strongest practitioners get merged alongside low-quality outputs from others, with no way to tell them apart.
Your Core Strength
You have proof of concept. AI works for your team — you can point to specific examples. The gap between your best practitioners and your weakest ones is your development roadmap. You know what good looks like. You just haven’t made it consistent yet.
Your Next Move
Ask your strongest AI practitioner to document three prompts they use regularly. Put them somewhere the team can find and use them. That’s the beginning of a shared practice library — and it moves your Dimension 4 score from near-zero to something real.
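The shared library described above can start as a single generated file. A minimal sketch in Python, where the entry fields (`name`, `use_when`, `prompt`) and the `prompt_library.md` filename are illustrative assumptions, not a prescribed format:

```python
# A starter prompt library: a few documented prompts rendered into one
# markdown file the whole team can find. The schema here is an assumption.

prompts = [
    {
        "name": "Summarize a long thread",
        "use_when": "Catching up on a discussion with many messages",
        "prompt": "Summarize the thread below into decisions, open questions, and action items.",
    },
    {
        "name": "Draft a status update",
        "use_when": "Turning raw notes into a stakeholder-ready update",
        "prompt": "Rewrite these notes as a three-paragraph status update for a non-technical audience.",
    },
]

def render_library(entries):
    """Render entries as a markdown document, one section per prompt."""
    lines = ["# Team Prompt Library", ""]
    for e in entries:
        lines += [
            f"## {e['name']}",
            f"*Use when:* {e['use_when']}",
            "",
            "> " + e["prompt"],  # the prompt itself, quoted for easy copying
            "",
        ]
    return "\n".join(lines)

# Write somewhere the team can reach (the path is a placeholder).
with open("prompt_library.md", "w") as f:
    f.write(render_library(prompts))
```

The point is less the tooling than the habit: once prompts live in a findable file rather than in one person's head, the library can grow with each new technique someone shares.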
One thing: This week, share one AI technique or prompt with someone on your team who doesn’t know it yet.