The AI Skills Gap Is Already Here - And It's Not What You Think
Anthropic just published the most interesting piece of AI research you probably won't see on LinkedIn. Their Economic Index, released this week, analyses one million conversations across Claude's consumer and developer platforms. The title is "Learning Curves" and it tells two stories simultaneously. One is encouraging. The other should make every leader pay attention.

The context you need

The Economic Index is a quarterly study that maps how people actually use AI at work. Not surveys. Not self-reported data. Actual usage patterns, matched against labour market data. This latest release covers a week in February 2026 and compares it to November 2025.

A few numbers to orient you. Nearly half of all occupations - 49% - have now used Claude for at least a quarter of their tasks. The average economic value of work performed on the platform sits at $47.90 per conversation, down slightly from $49.30 in November. And usage is diversifying: the top ten tasks now account for 19% of all traffic, down from 24%.

Those numbers matter. AI adoption isn't just growing - it's broadening. The decline in average task value isn't because people are doing less valuable work. It's because more people in lower-wage roles are finding uses - sports queries, product comparisons, home maintenance questions. The platform is becoming less of a specialist tool and more of a general capability.

The AI Optimist analysis

There are a few things happening here, and they pull in different directions.

First, experienced users are measurably pulling ahead. People who've been using Claude for six months or more show 10% higher success rates in their conversations. They're 7 percentage points more likely to be using it for work rather than personal tasks. They bring harder problems - roughly equivalent to an additional year of education in the complexity of their inputs. And crucially, even after controlling for what tasks they're doing and which model they're using, they maintain a 4-percentage-point success advantage. That's not selection bias. That's genuine skill development.

What that means is something we've been saying for a while: learning to work with AI is itself a skill, and the people who invest time in it get meaningfully better outcomes. The learning curve is real and it compounds.

Second, users are getting smarter about which tools they reach for. Software developers use Anthropic's most capable model 34% of the time, while tutors use it 12% of the time. For every additional $10 in hourly wage, the likelihood of choosing the more powerful model increases by 1.5 to 2.8 percentage points, depending on the platform. People aren't just using AI - they're developing judgement about when to use which capability.

Third - and this is where it gets uncomfortable - the global picture is diverging. Within the US, adoption is converging between states, though more slowly than previously estimated. At current rates, it would take 5-9 years for states to reach equal per-capita usage, revised upward from an earlier estimate of 2-5 years. But globally, the picture is the opposite. The top 20 countries now account for 48% of per-capita usage, up from 45%. The gap is widening, not closing.

What connects these threads is a pattern that labour economists call skill-biased technological change. The people who benefit most from a new technology are the ones who already have the skills and access to use it well. Early adopters enjoy what the report calls "dual advantages" - they're exposed to the disruption AND they have the tools to respond to it. Everyone else watches the gap grow.

What does the Anthropic Economic Index tell us about AI's impact on work?

The most important finding isn't about AI replacing jobs - it's about AI rewarding investment. Users who spend time learning to work with AI become measurably more effective, creating a new kind of skills gap. The question for organisations isn't whether to adopt AI, but whether they're building the conditions for their people to develop genuine AI fluency - or accidentally creating a two-tier workforce.

What this means in practice

For team leads and managers, the implications are fairly direct. Your early adopters are getting better at AI faster than you might realise, and the gap between them and everyone else is growing. This isn't about buying more licences. It's about creating the time and space for people to develop real working relationships with these tools.

For HR and L&D leaders, the data suggests that AI training programmes need to be ongoing, not one-off. The learning curve doesn't flatten after a workshop. The people who improve most are the ones who use AI consistently over months, bringing progressively harder problems to it.

And for anyone thinking about this at an organisational or policy level, the global divergence should be front of mind. Two emerging automation patterns - business sales outreach and automated trading - have more than doubled in frequency on the API. When you combine accelerating automation with unequal access to the tools that help people adapt, you get a problem that compounds quickly.

The bigger picture

This is a complex, evolving picture. But the core dynamic is clear: AI is creating a new kind of skills gap, and it's forming faster than most organisations realise. The leaders who navigate it well won't be the ones who adopted AI first. They'll be the ones who made sure their people had the headspace to learn. That's not a technology problem. It's a leadership one.