Organizations currently have no reliable mechanism to test for the skill that now matters most.
They never needed one.
For fifty years, they tested for competence the way engineers test for structural integrity — by applying load and observing what held. They assigned work, measured output, and drew conclusions about capability from the results.
The system was imperfect, as all systems are. But it functioned, because Production and Competence were reliably coupled. You could not consistently produce sophisticated, high-quality work without the underlying skills to do so.
That coupling no longer holds.
Artificial intelligence has done something that no previous technology in the history of professional work has done: it has permanently severed the relationship between Production and the competence required to produce it. A junior analyst can now generate a brilliant-sounding strategic brief in fourteen seconds. The output is clean, the syntax is precise, the structure is sound. And the professional who generated it may lack the analytical foundation to evaluate a single sentence of it.
This is not a warning about AI. It is an observation about what AI has revealed — a gap that was always there, obscured by the friction of production, and now fully exposed.
The organizations that understand this will adapt. The ones that do not will promote the mirage.
—
What Fifty Years of Measurement Actually Measured
When I reviewed more than 900 executive searches during the research phase of the Critical Skills project, what struck me was not only what organizations demanded of their candidates — it was the implicit theory of competence embedded in every search brief.
The theory was almost always the same: show me what this person built, led, or produced, and I will tell you what they can do.
That logic was defensible for a long time, because it was largely correct. The executive who delivered a successful market entry, the analyst who produced a rigorous competitive assessment, the manager who turned a struggling division into a profitable one — these outcomes demanded real, underlying skill. Production was the artifact of competence. You measured the artifact because the underlying skill was difficult to measure directly.
This is the critical insight that most commentary on AI and the workforce has missed entirely: organizations were never actually measuring Competence. They were measuring Production and inferring Competence from it. The inference worked — until it didn’t.
Today, the Production Skill has been heavily commoditized. What once took days of rigorous effort — gathering sources, synthesizing data, structuring argument, drafting, revising — now takes minutes. The artifact still looks the same. The competence behind it may be entirely absent. Virtually any junior analyst can do it.
The measurement instrument that organizations relied upon for fifty years is now broken.
Most of them do not know it yet.
—
The Eight Critical Skills in the Age of AI — What Changes, and What Doesn’t
When AI disrupted the workforce, the predictable response from consultants, HR professionals, and thought leaders was to invent new competency frameworks. Prompt engineering. AI literacy. Machine fluency. These additions were bolted onto existing frameworks like afterthoughts — because that is precisely what they were.
The Eight Critical Skills — Communication, Production, Information, Analysis, Interpersonal, Technology, Time Management, and Continuous Education — require no revision. Not because they are immune to change, but because they were never built around the specific tools of any era. They were built around the enduring demands of professional performance. AI changes the weighting of several of these skills. It does not alter the architecture.
Consider how each skill maps to the current moment.
The Technology Skill, as defined in the framework, is the ability to select the appropriate technology most efficiently suited to a specific task. This has always been the governing skill for tool selection — from the typewriter to the spreadsheet to the enterprise database. AI is the newest and most powerful entry in that category. The professional who understands this skill does not ask whether to use AI. They ask when, for what, and with what governance. This is not a new question. It is the oldest question in the Technology Skill.
The Production Skill — the ability to convert ideas and plans into completed work — has not disappeared. It has been dramatically assisted. AI can accelerate the mechanical dimensions of production: drafting, formatting, synthesizing, structuring. What it cannot do is supply the judgment that determines whether what has been produced is worth producing, whether it addresses the right problem, whether it reflects sound professional reasoning. That judgment belongs to the professional. It always has.
The Information Skill — the discipline of seeking out, evaluating, and verifying the information needed to conduct a sound analysis — becomes more critical as AI becomes more prevalent, not less. AI synthesizes at scale. It is trained on the past. It hallucinates with confidence. The professional who lacks the Information Skill will accept AI’s outputs as factual. The professional who possesses it will interrogate them.
And then there is the Analysis Skill.
—
The Analysis Skill: The Ultimate Gatekeeper
The Analysis Skill is defined in the framework as the ability to process verified information into sound findings, conclusions, and recommendations — moving from what the facts are to what the facts mean, and from what they mean to what should be done.
This is the skill AI cannot replicate. Not because AI lacks processing power. It has more than any human analyst. Not because AI cannot generate conclusions — it generates them constantly, fluently, and in forms that are often indistinguishable from human reasoning.
AI cannot exercise independent professional judgment. It can only simulate it.
The distinction matters enormously, and it is the one that most commentary on AI and competence glosses over. Simulation is not judgment. A simulation can be accurate when the situation resembles the training data. It fails without signal when it does not — and in professional practice, the situations that matter most are almost always the ones that do not perfectly resemble anything that came before.
The analyst who understands the Analysis Skill — who has internalized the discipline of moving from verified information to defensible findings to sound conclusions — can do something AI cannot. They can recognize when the output is wrong. They can identify the gap between what was asked and what was actually answered. They can hold the conclusion up against the facts and ask whether the logic holds.
This is not a technical capability. It is not prompt engineering. It is analytical rigor — the same discipline that the logic of P → Q has always demanded. If the premises are not verified, the conclusions are not sound. If the conclusions are not sound, the recommendations are not defensible. This chain of reasoning has not changed because the tool that generates the draft has changed.
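The chain of reasoning can be made explicit in the text's own P → Q notation. As an illustrative sketch, write P for "the premises are verified," Q for "the conclusions are sound," and R for "the recommendations are defensible" (these letter assignments are my own labeling, not the framework's):

```latex
% P = premises verified, Q = conclusions sound, R = recommendations defensible
\neg P \implies \neg Q, \qquad \neg Q \implies \neg R
% and therefore, chaining the two implications:
\neg P \implies \neg R
```

An unverified premise anywhere in the chain invalidates everything downstream of it, which is why the burden falls on the Analysis Skill rather than on the tool that drafts the document.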
The professional without the Analysis Skill cannot evaluate what AI produces. They cannot identify when the brief is strategically irrelevant, when the financial model rests on flawed assumptions, when the market analysis has drawn conclusions from the wrong data set. They can see that it looks right. They cannot determine whether it is.
This is the Competence Mirage.
The output is flawless. The competence is absent. And without the Analysis Skill, no one in the room can tell the difference.
—
How the Mirage Gets Institutionalized
The organizational mechanics of this problem follow a pattern that should be familiar to anyone who read Part One of this series.
A junior professional uses AI to produce work that would previously have required senior-level capability. The manager, evaluating the work by the standards of the old measurement system, sees output that looks sophisticated and concludes that the professional is sophisticated. The professional receives positive feedback, is assigned more responsibility, and continues to produce AI-generated outputs that no one in the chain of command is evaluating with the Analysis Skill.
Over time, the professional advances. Their reputation is built on Production. Their underlying competence — in Information, Analysis, and the other critical skills — has never been seriously developed or tested. They have been operating in the gap between appearance and capability. The organization has been paying them as though the gap does not exist.
Then something goes wrong. The strategic brief that looked compelling turns out to be factually incorrect. The financial model that was built in four hours collapses under scrutiny. The recommendation that seemed sound is revealed as the product of AI-generated assumptions no one verified.
At that point, the organization pays for the mirage. And the professional — who was never told, never tested, never developed — pays for it too.
This is the organizational failure mode that is not being discussed. Everyone is debating whether AI will replace professionals. The more immediate and consequential question is whether AI is creating a generation of professionals whose apparent competence outpaces their actual competence by a margin that no one is measuring.
—
The New Competence Divide
The competence divide described in Part One of this series ran between professionals who were genuinely capable and professionals who were not. That divide has not disappeared. It has sharpened.
The new divide runs between two kinds of professionals who both use AI.

The first uses AI as a substitute for judgment. They prompt, they accept, they produce. The output is clean. The thinking — the verification of information, the interrogation of conclusions, the application of analytical rigor — is absent. They are fast. They are prolific. They are, in the truest sense of the term, incompetent — not because they cannot produce, but because they cannot evaluate what they have produced.

The second uses AI as a force multiplier for judgment they already possess. They prompt, they interrogate, they verify. They apply the Information Skill to assess what AI has gathered. They apply the Analysis Skill to determine whether the conclusions hold. They use the Technology Skill to select the right tool for the right task at the right stage of the work. They are not slower. They are more capable — because their judgment governs the process rather than substituting for it.
The output of these two professionals often looks identical. The difference is not visible in the document. It is visible only when the document is tested against reality — in the boardroom, in the market, in the operating results.
This is precisely why the new divide is more dangerous than the old one. The old divide was detectable, if organizations were willing to look. The new divide requires evaluators who possess the Analysis Skill themselves — evaluators who can see through the mirage because they understand what sound analysis actually looks like.
Many organizations do not have enough of those evaluators. And the ones they do have are increasingly outnumbered by the professionals whose competence is AI-generated rather than developed.
—
What Genuine Competence Requires Now
The Continuous Education Skill has never been more important than it is in this moment, and not for the reason most people cite.
The argument typically made is that professionals need to learn AI tools — the platforms, the interfaces, the prompting techniques. That is true, but it is the least important dimension of the problem. The Technology Skill covers it, and it is learnable in weeks.
What the Continuous Education Skill demands in the AI era is something harder: the sustained, deliberate development of the skills AI cannot replicate. The Analysis Skill. The Information Skill. The Communication Skill — not the mechanical production of words, but the judgment to determine what deserves to be communicated and how. These are skills that require genuine development, honest feedback, and years of practice in conditions that demand them.
They cannot be delegated to a tool. They cannot be acquired through a certification. They cannot be faked for long.
The professionals who will matter most in the organizations of the next decade are not the ones who are fastest with AI.
They are the ones who are rigorous enough to govern it — who possess, at the level of Unconscious Competence, the analytical and informational foundations that allow them to use AI’s outputs rather than simply accept them.
These professionals exist. They are, in every organization, the people that their colleagues bring their hardest problems to — not because they produce the fastest answers, but because their answers can be trusted.
They are not the most visible professionals in the AI era.
They are the most valuable.
A Word to Leaders
If you are responsible for the performance of others, the Competence Mirage is your problem in a way it can never be merely the problem of the professionals you lead.
You are the evaluator. And if your evaluation methodology has not changed — if you are still measuring Production and inferring Competence from it — you are not evaluating competence at all. You are evaluating the quality of the AI prompt.
The organizational imperative is not to ban AI or to restrict its use. It is to rebuild the measurement systems that AI has broken. Define what analytical rigor looks like in your organization, specifically and behaviorally. Create conditions in which the Analysis Skill and the Information Skill are tested, not merely assumed. Ask your professionals not just what they produced, but how they verified it, what assumptions they challenged, what alternative conclusions they considered and rejected.
These are not new questions. They are the questions that distinguish genuine competence from its appearance. The only thing that has changed is the urgency. The Competence Mirage makes those questions easier to skip. It also makes skipping them more expensive than it has ever been.
The Eight Critical Skills are not supplemental in the AI era. They are more foundational than ever — because the skills at the top of the architecture, the ones AI cannot replicate, are now the last line of defense between an organization that thinks clearly and one that only appears to.
Competence has never been about production speed.
It has never been about the elegance of the output.
It has always been about what happens when the output meets reality.
AI makes the output faster, cleaner, and more convincing. It does not change what reality demands of the professional who stands behind it.
Build the skills. Govern the tool. Know the difference.
KEY TAKEAWAYS
- AI has permanently decoupled Production from Competence. Organizations that measure output and infer competence from it are no longer measuring competence at all — they are measuring the quality of the AI prompt.
- The Eight Critical Skills require no revision for the AI era. AI changes the weighting of several skills — particularly Production, Information, and Analysis — but not the architecture of the framework.
- The Analysis Skill is the ultimate gatekeeper. AI can simulate professional judgment. It cannot exercise it. The professional who cannot evaluate AI’s outputs is not more capable because of AI — they are more vulnerable.
- The Competence Mirage is an organizational failure mode, not just an individual one. Leaders who evaluate with outdated measurement systems will promote the mirage and institutionalize the gap it conceals.
- The new Competence Divide runs between professionals who use AI as a substitute for judgment and professionals who use their analytical rigor to govern AI’s output. The outputs often look identical. The difference becomes visible when they are tested by reality.
- The Continuous Education Skill now demands the sustained development of the skills AI cannot replicate — Analysis, Information, and judgment-based Communication. These are not learnable through a platform certification. They are built through deliberate practice and honest feedback, across a career.
—
This article is Part Two in the Competence Series adapted from WANTED: Eight Critical Skills You Need to Succeed. Part Three examines what organizations must do, structurally and culturally, to rebuild the measurement systems AI has broken.
Copyright © 2026 by Charles Cranston Jett. All rights reserved.