
Dylan Etkin
March 9th, 2026
You Bought the AI Licenses. Why Is Only One Developer Getting 10x Results?

Here's something nobody talks about at the AI strategy meetings.
Your organization just spent six figures on Cursor licenses, Claude seats, and Copilot subscriptions. Ninety percent of your engineers have access. By most internal measures, the rollout was a success.
But somewhere on your team, one developer is running circles around everyone else. They've spent weeks tuning their setup: custom rules, multiple MCP servers wired in, carefully crafted agent skills that make the AI actually understand your codebase. They're getting two to three times the output of the person sitting next to them, using the exact same tool. The difference isn't the model. It's the context they've given it.
And that context is trapped in their dotfiles, undocumented, unshared, and invisible to the rest of the organization.
The Skill Gap Nobody Predicted
We spent the last year talking to engineering leaders at companies like Rippling, Confluent, Rapid7, Atlassian, Airtable, and dozens more. We went in expecting to hear about model quality, hallucination rates, and cost concerns. Those topics came up. But they weren't what kept people up at night.
What we heard, over and over, was a different kind of problem: the teams that figured out how to use AI well had no way to spread that knowledge to everyone else.
At Confluent, the engineering team identified a "power law distribution of effectiveness." A small group of power users was getting dramatically better results based on how carefully they'd tuned their agent instructions and MCP configurations. Everyone else had access to the same tools but was working at the level of basic autocomplete. The gap wasn't about effort or talent. It was about configuration and context that lived in individuals' heads.
At Rapid7, the platform team has been proactively tackling this challenge, maintaining structured rule files across repositories and pushing for broader adoption. But even with that investment, they've observed what many organizations are finding: enthusiastic adoption among a core group while the broader org hasn't yet found its footing with AI tooling. The tools are there. The knowledge to use them well hasn't scaled.
Atlassian's engineering leadership has been equally candid about the state of the industry: there's "no effective way to share agent rules" with current approaches. Their team has experimented with automated scripts to push rules across repositories. It's a pragmatic solution, but one that can't tailor context to different teams or codebases.
Even at Google, arguably the most well-resourced engineering organization on the planet, a senior director told us knowledge transfer remains "brutally hard" in teams of a hundred people, with cultural barriers persisting even when the productivity benefits are obvious.
Rippling's team built a custom Go service to sync agent configurations across 800+ repositories. Confluent invested in shared marketplaces for AI skills and learned that, without central governance, skills proliferated into multiple competing versions. At Airtable, MCP server setup "remains manual and tedious" despite 60-70% daily AI tool usage.
These are some of the most sophisticated engineering organizations in the world, and they're all independently building infrastructure to solve the same problem. The best practices exist. They just can't move. And everyone is reinventing this wheel because the wheel doesn't exist yet.
Context Over Compute
After dozens of these conversations, a pattern became clear. The quality of AI output is directly proportional to the quality of the context you give it. The model matters far less than the instructions, rules, and institutional knowledge you feed alongside it.
The AI tools themselves are commoditizing rapidly. Cursor, Claude Code, Copilot, Windsurf, Kiro, with new entrants every quarter. But the knowledge layer on top of those tools is not commoditizing at all. The rules that tell the AI how your team writes code, which patterns to follow, how to handle your specific domain: that's the real competitive advantage. And right now, in most organizations, it's a mess.
The Solutions That Almost Work
If you've been working in this space, you're probably thinking about two obvious approaches. Both are reasonable. Neither is sufficient.
The first instinct is git. Check your .cursorrules and CLAUDE.md files into the repository. It's version-controlled, it lives with the code, and every developer gets it when they clone. For a single repo with a single AI tool, this works well enough.
The problems emerge at organizational scale. Maintaining consistent configurations across hundreds of repositories means hundreds of copies that immediately begin to drift. When someone improves a rule, that improvement stays local. When the organization updates a standard, someone has to push that change to every repo individually. And critically, a .cursorrules file only works in Cursor. If your team uses multiple AI tools, you're maintaining parallel configurations in different formats with no guarantee they stay in sync. Git solves storage. It does not solve distribution, translation, or governance.
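To make the maintenance burden concrete, here is a minimal sketch of the sync chore described above. The file names follow each tool's documented convention; the `push_rules` helper, the repository layout, and the shared-rules source are hypothetical.

```python
# Naive "git-as-distribution" sketch: copy one org-wide rules file into
# every tool-specific config a repository needs.
from pathlib import Path
import shutil

TOOL_CONFIG_FILES = [
    ".cursorrules",                     # Cursor
    "CLAUDE.md",                        # Claude Code
    ".github/copilot-instructions.md",  # GitHub Copilot
]

def push_rules(shared_rules: Path, repo_root: Path) -> list[Path]:
    """Overwrite each tool's config file with the shared rules.

    This has to run over every repository, and each copy starts
    drifting the moment a developer edits it locally.
    """
    written = []
    for rel in TOOL_CONFIG_FILES:
        target = repo_root / rel
        target.parent.mkdir(parents=True, exist_ok=True)
        shutil.copyfile(shared_rules, target)
        written.append(target)
    return written
```

Multiply this loop by hundreds of repositories and several tools, and every improvement a developer makes to a local copy is invisible to the next push.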
The second instinct is vendor marketplaces. Cursor has community-contributed rule directories. GitHub Copilot recently introduced Spaces for sharing project context. Claude has Projects for providing custom instructions. Each vendor is building its own mechanism for sharing AI context, and they're genuinely useful steps forward.
But each marketplace only serves that vendor's tool. Context organized in GitHub Copilot Spaces doesn't reach your Cursor users. Rules shared through Cursor's ecosystem don't help your Claude Code team. You end up with knowledge fragmented across vendor silos, and most organizations we spoke with are running three, four, or more AI tools simultaneously.
There's a deeper issue with both approaches. Neither provides organizational governance. Git repos don't tell you which MCP servers have been security-approved. Vendor marketplaces don't give security teams a central view of what's running across the org. Multiple companies we spoke with described security review as a major bottleneck, not because their security teams were wrong, but because there was no centralized way to manage approvals, track what had been vetted, or enforce compliance across tools and repositories.
The approach organizations actually need sits at a layer above individual tools and individual repositories. That layer doesn't exist yet in git or in any vendor marketplace.
Skills.new: A New Primitive for AI Knowledge
This is why we built Skills.new.
The idea at the core is straightforward: define your AI knowledge once, categorize it properly, and distribute it to every tool, team, and repository that needs it, while being deliberate about what context gets loaded where.
Define Once
Your best developers have built custom .cursorrules, carefully written CLAUDE.md files, curated MCP server configurations, and prompts that actually work for your codebase and domain. Right now, that knowledge lives on their local machine or in a wiki page that was stale the day it was published.
Skills.new captures those patterns as structured, versioned, distributable units. A skill might encode "how to write migrations in our Django codebase" or "our API design patterns" or "the correct way to interact with our internal billing service." These are kept alive and current, not frozen in a document nobody updates.
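As an illustration only, a skill for the Django-migrations example might look something like this. The schema and every field name are hypothetical, not Skills.new's actual format.

```yaml
# Hypothetical skill definition; all identifiers are illustrative.
name: django-migrations
version: 3
owner: platform-team
description: How we write migrations in our Django codebase
instructions: |
  - Every migration must be reversible; provide reverse code for data migrations.
  - Never mix schema changes and data backfills in one migration.
  - Column additions on large tables go through our online-migration process first.
```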
Categorize With Governance Built In
Not every developer needs every skill. Your frontend team doesn't need your infrastructure team's Terraform patterns, and your new hire probably shouldn't have the same autonomous capabilities as your principal engineer.
Every rule file, every MCP server, every piece of context loaded into an AI tool consumes tokens and expands the surface area of what the AI can access and do. Engineering leaders consistently told us that token usage is becoming a major cost concern, and that unmanaged context creates both waste and risk. Token management, access control, and security review are all facets of the same question: who should have access to what context, and who decides?
Skills.new lets organizations build a taxonomy (company-wide standards, team-specific patterns, repository-level knowledge) with clear ownership and approval at each layer. What applies everywhere, what's restricted to certain teams, what requires security review before deployment. The right knowledge gets to the right place, with an audit trail for all of it.
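A taxonomy like that could be expressed roughly as follows. This is a hypothetical sketch; the layer names, owners, and MCP approval fields are illustrative, not a real schema.

```yaml
# Hypothetical governance taxonomy; all identifiers are illustrative.
layers:
  - name: company-wide
    owner: platform-engineering
    applies-to: all-repositories
    approval: security-review-required
  - name: team
    owner: team-leads
    applies-to: owning-team-repositories
    approval: team-lead-signoff
  - name: repository
    owner: repo-maintainers
    applies-to: single-repository
mcp-servers:
  - id: internal-billing
    status: security-approved
    restricted-to: [payments-team]
```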
Distribute Everywhere, Govern Centrally
Unlike git-based approaches that fragment across repos, and unlike vendor marketplaces that fragment across tools, Skills.new defines once and distributes to every client automatically. Teams don't need to maintain separate configurations for each AI tool or reconcile multiple pilot programs with different configuration approaches.
When a skill is updated, it propagates to every developer instantly, regardless of which AI tool they're using. When a skill is revoked or modified for security reasons, that change is immediate and universal. New team members inherit approved AI configurations on day one. No manual setup. No tribal knowledge. No shadow AI.
One registry. One approval workflow. One audit trail. That's how governance becomes an enabler instead of a bottleneck.
The Same Skill, Two Consumers
The same pattern applies to human developers and autonomous AI agents alike: capture expert knowledge, package it with the right context, and distribute it under governance.
When a developer loads a skill into their coding assistant, they're getting context to make that tool effective. When an autonomous agent picks up that same skill, it's getting the instructions and guardrails it needs to operate correctly and safely. The skill is the same artifact. The consumer is different. The governance layer determines how much latitude each consumer gets.
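Concretely, and again as a hypothetical sketch rather than a real schema, the same skill could carry a different policy for each consumer:

```yaml
# Hypothetical per-consumer policy attached to a single skill.
skill: django-migrations
consumers:
  - kind: ide-assistant        # human in the loop
    latitude: suggest-only
  - kind: autonomous-agent
    latitude: execute
    guardrails:
      require-passing-ci: true
      max-files-changed: 20
      forbidden-paths: [infra/, secrets/]
```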
As organizations move toward agentic workflows, well-governed skills become essential. An agent without the right context is an expensive random number generator. An agent with the right skill and guardrails is a productive team member. The question is not whether agents will operate inside your systems. The question is whether the knowledge they operate with will be governed deliberately, or whether it will be the same ad-hoc, scattered configuration that's already causing problems for your human developers.
The Knowledge Layer Is the Moat
The AI tools are commoditizing fast. But organizational knowledge (the patterns, the domain expertise, the carefully tuned configurations that make those tools effective in your specific environment) compounds over time.
Right now, most of that knowledge is evaporating. It walks out the door when your best developer takes a new job. It drifts into irrelevance in a config file nobody maintains. It gets duplicated and forked until nobody knows which version is current or approved.
Skills are the missing infrastructure layer between your organization's knowledge and the AI tools that need it. Define it once. Categorize it with governance built in. Distribute it everywhere. Keep it fresh and auditable.
That's what Skills.new does.
Your 10x developer figured it out. What's your plan to make it the standard?
Dylan Etkin is the CEO and co-founder of Sleuth, building Skills.new, the platform for defining, distributing, and governing AI skills across your engineering organization. Previously, he helped build Jira at Atlassian.