2026 and The Rise of the Guardians Part 2: How Experience and AI Create Force Multipliers
The framework to build THE foundational team
"They can't leave," echoes down the corridors and across the Slack channels. "They are the only ones who know how to…"
We've all been there. The "panic" moment your team feels when the one person with all of the tacit knowledge in your organization is moving on. Whether it's the current round of layoffs, a better opportunity or a winning lottery ticket carrying them off to a sunny destination with white beaches and turquoise waters.
It's an age-agnostic and tenure-agnostic moment.
"They" were the person who knew which levers to instinctively pull when the system went down, who remembered why the 2019 initiative failed and how not to repeat that mistake, and who could navigate the invisible politics between departments. They weren't just experienced; they deeply understood the WHY behind every decision, not just the how.
What I will share with you today:
What is a Guardian
Five core competencies of a Guardian
How the Trifecta works together
How to find, develop, or become a Guardian
AI can process data. New people can learn processes. But nobody can replicate the scar tissue of having lived through decisions that went right, decisions that went wrong, and understanding the invisible human dynamics that made the difference.
A "Guardian" is part of a trifecta, including junior professionals and AI. But it's the Guardians' wisdom and tribal knowledge that accelerates growth the fastest.
"Guardians will be the hottest hire of 2026," was my bold prediction in Part 1 of "The Rise of the Guardians." Based on both research data and industry observation of the extremes of company AI adoption in 2025, companies struggled to balance the need for wisdom with the need for AI.
A Guardian is someone who has seen enough cycles to know what AI can't teach, but who is also willing to learn the tools. Guardians exist at every level: people in leadership who know the "secrets," people who understand tactical connections and people who understand immediately what will break the code in your product.
In Part 2 today, I share how to actually build this: how to put your organization ahead of the game, achieve high customer satisfaction scores and increase revenue by truly solidifying the foundation you need to scale.
Questions to consider:
What competencies make someone Guardian-level? (Hint: it's not tenure or age.)
How do you identify them when they're sitting across from you in an interview?
How do you become one if you're experienced but worried you're behind?
What does a team built on the trifecta of Guardians, junior professionals, and AI tools actually look like in practice?
That's what this article gives you: the framework. It includes the questions to ask, the red flags to watch for, and the development path if you're building this capability in yourself or your team. Use this framework to tailor a playbook to your organization today.
The Data and a Sense of Urgency
Before I get to the framework, I want to reiterate why this matters, leveraging both research data (sources below) and personal experience.
Gartner predicted that 30% of GenAI projects would be abandoned after proof of concept by the end of 2025.1 (Too soon to get results, but check for them here.) Harvard Business Review found that 41% of workers have encountered AI-generated "workslop," costing nearly two hours of rework per instance.2 Meanwhile, only 14% of workers use GenAI daily despite widespread availability.3
Fully acknowledging there is a margin of error in this data, depending on the origin of each sample set, I stick by "where there's smoke, there's fire." While data from legitimate sources is critical, over the last several years as AI ramped up to a fever pitch, I personally witnessed, across multiple companies, from startups to nonprofits to Big Tech, and within my tech circle, that those numbers are most likely a lot higher if you look holistically at global trends and consider "boots on the ground" personal experience.
The panicked rush to adopt AI with no plan isn't about technology; it's about having the right people who can understand it, curate it, validate it and know when to use instinct to override it.
(This is where Guardians come in.)
Here's the productivity gap that creates a sense of urgency. Anyone who has worked closely with me knows I'm a huge advocate of John Kotter's 8-Step Change Model.4 And I'd argue the step that most deserves attention is "create a sense of urgency." PwC's research shows productivity growth nearly quadrupled in AI-exposed industries, from 7% to 27%, while it actually declined slightly in the least-exposed industries (from 10% to 9%).5 The most AI-exposed industries are now seeing 3x higher revenue-per-employee growth. The gap is widening.
And what separates the winners from the losers? It's not the AI tools themselves. According to Harvard Business Publishing, AI-fluent workers report being 81% more productive, 54% more creative and 53% better at solving complex challenges.6 But fluency isn't about knowing HOW to write a prompt. It's knowing when NOT to use AI, how to validate outputs, and when to trust your judgement over the algorithm.
Thatâs competency, not tenure.
The Guardian Profile: What Success Looks Like
A Guardian isn't someone with a specific number of years in their role, or even an age. It's someone who demonstrates these five core competencies. Some 20-year veterans can't answer the questions below, while some 5-year professionals can. They know why Customer X left in 2019, why the 2022 strategy didn't succeed and what is going to happen when the code base breaks.
Use competency, not tenure, as your metric.
1. Initiative Ownership (Experience + Execution)
What this means: A Guardian is someone who has taken initiatives from problem definition through adoption and iteration. Not just someone who has "used AI tools," but someone who has owned end-to-end outcomes. They've seen enough cycles to know what actually works, versus what just sounds good in a meeting. This isn't about participation or contribution.
Ownership is being able to "work the problem" and "own the outcome."
I don't know about you, but as a post-COVID-era relic, I have interviewed many people over the past couple of years via video conference instead of in person. The challenge we face is that candidates can't always tell that you can tell they're reading answers from AI as you ask them questions.
Questions to ask (for hiring or self-assessment):
"Tell me about an initiative that you led end-to-end. How did it start, and what was actually working 6-12 months later?"
"What were the success metrics? Did they change over time?"
"Describe a moment when you realized your original approach was wrong. What did you do? Who did you consult?"
"Walk me through your first 90 days in that role. What were your milestones?"
What good looks like (Green flags):
Fluid conversation
Frames the answer in the STAR format (Situation, Task, Action and Result)
Talks in specifics (metrics, timelines, use cases), not generic hype
Owns outcomes and has stories about what worked, what didn't and what they did about it
Can explain the difference between "shipped" and "adopted"
Has clear milestones with tangible deliverables for the first 30-60-90 days
Red flags:
Can't answer a question without first typing it into a chatbot
Uses jargon and buzzwords for an answer
Only talks about success, never failure (failures are the best stories!)
Blames other people, "the business" or "leadership" for every setback
Can't articulate specific metrics or timelines
Describes participating in initiatives, not owning them
2. AI + Domain Judgement (Tools + Wisdom)
What this means: Guardians can connect domain reality, data constraints and AI capabilities to the mission. They know where NOT to use AI, not just where to use it. They curate AI by building workflows, validating outputs, and knowing when to override the tool results with human judgement.
Right before I posted this, I read a great job description for a boutique startup hiring a Chief of Staff: "We are not looking for prompt users. We are looking for operators who build repeatable, reliable, LLM-enabled workflows. You also exercise judgement. Not everything valuable should be automated." That's the difference between AI-fluent and AI-dabbling.
To give you an idea of what this looks like in real life, the role specified building an LLM-enabled workflow to:
Convert meetings and transcripts into structured actions
Generate operational briefs, templates, checklists
Automate CRM updates, event workflows, and reporting
Maintain a curated library to train models (clean data will make or break your success)
It matters because it's critical to know when human judgement is irreplaceable: solving customer pain points, deciding what to build, client relationships, strategic decisions and nuanced communication.
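To make the first workflow in the list above concrete, here is a minimal, hypothetical sketch of converting a transcript into structured actions. Everything here is invented for illustration: `llm_complete` is a stand-in for a real LLM client call (not an actual API), and the canned JSON it returns exists only to keep the example self-contained. The Guardian move is the validation step, never trusting the model's output shape before anything downstream consumes it.

```python
import json
from dataclasses import dataclass


@dataclass
class ActionItem:
    owner: str
    task: str
    due: str  # ISO date, or "" if the model could not find one


def llm_complete(prompt: str) -> str:
    """Placeholder for a real LLM call (e.g., a hosted model client).

    Returns canned JSON here so the sketch is self-contained and runnable."""
    return '[{"owner": "Sam", "task": "Send the Q3 deck to the client", "due": "2026-02-14"}]'


def extract_actions(transcript: str) -> list[ActionItem]:
    prompt = (
        "Extract action items from this meeting transcript as a JSON list of "
        'objects with keys "owner", "task", "due" (ISO date or empty string):\n'
        + transcript
    )
    raw = llm_complete(prompt)
    # Guardian-style validation: parse and check shape instead of
    # passing raw model output straight into the CRM or task tracker.
    items = json.loads(raw)
    actions = []
    for item in items:
        if not item.get("owner") or not item.get("task"):
            continue  # drop incomplete extractions rather than forwarding them
        actions.append(ActionItem(item["owner"], item["task"], item.get("due", "")))
    return actions


actions = extract_actions("Sam agreed to send the Q3 deck by Feb 14.")
print(actions[0].owner)  # Sam
```

The same validate-before-forwarding pattern applies to the other items in the list (briefs, CRM updates, reporting): the LLM drafts, but a schema check and a human stay between the draft and the system of record.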
Questions to ask:
"Describe a use case where you decided AI was not appropriate. Why?"
"Tell me about the data behind one of your AI initiatives. What was hard about it and how did you work around it?"
"Walk me through how you validate AI outputs. What makes you trust a result versus double-checking it?"
"What AI tools do you use daily? What workflows have you built, not just prompts you've written?"
"Give me an example of when you overrode an AI recommendation. What did you see that the AI missed?"
What success looks like (Green flags):
Talks about building workflows, not just writing prompts
Can explain trade-offs (accuracy vs speed, automation vs oversight) in plain language
Shows respect for data quality and lineage, not just algorithms (doesn't take first AI results at face value)
Knows current AI limits and failure modes (doesn't assume the model is magic)
Uses AI as a power tool: "I use it for X, but always validate Y because…"
Can spot AI hallucinations because they know what the real pattern looks like
Has examples of what they chose NOT to automate
Red flags:
Treats AI as a black box that "just works"
Can't name a project they chose NOT to do with AI
Uses AI for everything without discernment
Dismisses AI entirely ("I don't trust those tools")
Talks about AI in general terms, not specific workflows they've built
3. Risk, Governance & Guardrails (Responsibility)
What this means: Guardians take responsibility for how systems behave, not just whether they appear to "work." They design guardrails that include human-in-the-loop workflows, escalation paths and monitoring for drift. Not just "manage problems," but "prevent them." Understand Risk Management 101, be able to "look around corners," and proactively propose solutions.
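A human-in-the-loop guardrail with an escalation path and a drift monitor can be sketched in a few lines. This is a hypothetical illustration only: the class name, the 0.85 confidence floor and the in-memory review queue are all invented for the example; a real system would route escalations to a ticketing or review tool and alert on drift rather than just returning a flag.

```python
from dataclasses import dataclass, field


@dataclass
class Guardrail:
    confidence_floor: float = 0.85          # below this, a human must review
    review_queue: list = field(default_factory=list)
    recent_scores: list = field(default_factory=list)

    def route(self, output: str, confidence: float) -> str:
        """Auto-approve only high-confidence outputs; escalate the rest."""
        self.recent_scores.append(confidence)
        if confidence >= self.confidence_floor:
            return "approved"
        self.review_queue.append(output)    # human-in-the-loop escalation path
        return "escalated"

    def drifting(self, window: int = 20, floor: float = 0.7) -> bool:
        """Crude drift monitor: flag when average recent confidence sags."""
        recent = self.recent_scores[-window:]
        return bool(recent) and sum(recent) / len(recent) < floor


g = Guardrail()
print(g.route("Draft customer email", 0.95))   # approved
print(g.route("Refund policy answer", 0.60))   # escalated
```

The design choice worth noticing: the guardrail surfaces risk early (every low-confidence output lands in a queue a human sees), instead of lighting up a dashboard after the bad output has already shipped.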
Questions to ask:
"Have you ever slowed down or blocked a deployment for a risk reason? Walk me through that decision."
"What guardrails did you put around your last AI-assisted workflow? Who could override it?"
"How do you monitor outputs after they go live? What signals tell you that something's wrong?"
"Describe a time you caught an issue before it became a problem. How did you spot it?"
"What systems have you improved to prevent a recurrence of an issue?"
What success looks like (Green flags):
Can describe specific controls (approval workflows, thresholds and audit logs)
Emphasizes "augment, don't replace" judgement
Has caught and corrected AI errors before they caused problems
Can articulate the difference between "probably" and "correct"
Knows the difference between where accuracy matters (CRM data discipline) and where zero tolerance applies (event logistics)
Builds systems that surface risk early, not dashboards that light up when it's too late
Red flags:
No examples of catching AI mistakes
Treats governance as if it's "someone else's" job
Can't explain risk mitigation strategies
Reactive firefighter, not proactive risk manager
"Move fast and break things" mentality without guardrails
4. Mentorship & Knowledge Transfer (Force Multiplication)
What this means: Guardians don't just do the work. They multiply their impact by training others. They mentor junior professionals in BOTH institutional knowledge AND AI fluency.
Questions to ask:
"Tell me about someone junior you've helped grow. What did you teach them that they couldn't learn from AI?"
"How do you explain the difference between 'what AI says' and what will actually work here?"
"What do you delegate to junior people + AI, and what do you keep doing yourself?"
"What systems have you built that enable others to work more effectively?"
"If you left your current role tomorrow, what would break? What wouldn't?"
What success looks like (Green flags):
Can articulate what AI can teach vs what requires human instinct and reflection
Uses the model: Junior professional + AI Tools + Guardian mentorship = accelerated development
Actively documents tribal knowledge (playbooks, decision frameworks, process maps)
Creates learning moments, not just task delegation
Measures success by what others can do independently
Builds systems that scale without them (CRM discipline, event choreography, publishing machines)
Red flags:
"I just do it myself because it's faster."
No examples of developing others
Hoards knowledge as job security
Can't articulate what they delegate vs what they own
Everything breaks when they're out
5. Change Leadership (Bringing People Along)
What this means: Guardians get buy-in from skeptical stakeholders, set realistic expectations about what AI will and won't do, and adjust communication for different audiences. They navigate ambiguity with composure.
It's not just about "getting along with people" or "being well-liked." The key differentiator is that you can work with strong personalities, maintain boundaries, build trust through reliability and bring people along even when the path isn't clear.
Questions to ask:
"Who was most skeptical about your last AI initiative, and what did you do about it?"
"How did you explain the change to people whose workflow was affected?"
"Tell me about a time you had to reset expectations about what AI could deliver."
"Describe a situation where you had to navigate competing priorities from senior stakeholders. How did you handle it?"
"Walk me through a time when the path forward wasn't clear. How did you maintain momentum?"
What success looks like (Green flags):
Talks about listening, co-design, and training
Can describe communication adjustments for different audiences (technical vs non-technical, skeptics vs early adopters)
Has navigated resistance with empathy, not by dismissing concerns
Knows how to translate technical concepts for non-technical stakeholders
Maintains composure and role clarity even in ambiguity
Shows examples of bringing skeptics to adoption, not leaving them behind
Red flags:
"People just need to adapt"
Blames resistance on "technophobia" or "not getting it"
Can't describe how they brought skeptics along
Treats change management as "send an email and move on"
Gets defensive when questioned or challenged
The Trifecta: How the Three Groups Work Together
Once you understand the competencies that make up a Guardian, it's easier to see how Guardians, junior professionals and AI tools actually work together in practice.
Let's look at a real-life example of what this looks like by using a marketing team at a mid-size company.
The Guardian (Senior Marketing Ops Manager):
Owns campaign strategy and knows which messaging resonates with different customer segments (from years of testing)
Builds AI workflows for content generation and A/B testing
Validates all AI-generated customer-facing content before it goes live
Mentors junior marketers on brand voice and how to spot when AI outputs miss the mark
Knows when to override AI recommendations based on customer relationships and past campaign history
The Junior Professionals (Marketing Coordinators):
Execute campaigns using AI tools to draft content, analyze performance and generate reports
Learn to recognize when AI outputs sound "off" through Guardian coaching
Develop judgment by seeing what the Guardian catches and why
Move faster than they could alone, but with quality control from experience
The AI Tools:
Generate draft content variations for testing
Analyze campaign performance and suggest optimizations
Automate reporting and data visualization
Handle repetitive tasks like social media scheduling and email formatting
The Force Multiplier:
Campaign velocity increases 3x because AI handles the repetitive work and junior professionals can execute quickly. But quality and brand coherence improve because the Guardian curates outputs, catches when AI misses nuances, and mentors others to develop the same judgement.
It's not three groups working separately. That's 1 Guardian + 3 junior professionals + AI tools creating output that's faster AND better than either pure experience or pure AI ever could deliver.
The Emotional Intelligence (EQ) Differentiator
Harkening back to my Navy days: during a particularly critical training evolution, the instructor would stamp his foot and say, "stomp, stomp." Everyone would look up from their notes or any daydreaming and immediately tune in.
Stomp. Stomp.
Here's what makes the "Force Multiplier" actually work: emotional intelligence.
I've seen brilliant people with 10+ years of institutional knowledge get let go anyway. They had the experience. They had all the connections. They understood the technology. But they were so difficult to work with that the organization would rather lose the tribal knowledge and wisdom than keep the toxicity.
Guardians don't just HAVE knowledge. They share it. They don't just BUILD workflows; they teach others how to use them. They mentor. They don't throw their team "to the wolves" and just "let them figure it out." They don't just identify risks; they communicate them in ways that get buy-in rather than defensiveness.
The Guardian with the high EQ:
Navigates difficult stakeholders (both the dismissive ones and the competitive ones)
Can adeptly engage the quiet ones in the room when the loud and possessive ones are consuming all the oxygen
Brings skeptics along instead of steamrolling them
Mentors without condescension
Builds trust through reliability, not heroics
The brilliant, but toxic, person without EQ:
Has all the answers, but no one wants to ask
Gossips and is known for disrespecting others behind closed doors
Creates friction instead of force multiplication
Burns bridges faster than they build systems
Gets fired despite being "indispensable"
The trifecta only works when the Guardian has the emotional intelligence to multiply impact through relationships, not just technical competence.
The Supply Problem (And How to Solve it):
Here's the challenge: Guardians are rarer than companies assumed in 2025.
Gartner found that 80% of the engineering workforce needs to upskill through 2027.7 Only 14% of workers use GenAI daily, and 37% don't use AI even when it's available because their colleagues aren't using it.8 The supply isn't keeping pace with demand.
The Internal vs External Guardian Distinction
Internal Guardians (the ones you're trying to retain): These are the people who already have YOUR company's tribal knowledge. They know which customer relationships are fragile and which leaders to trust. You develop them by adding AI fluency to their existing wisdom. They're your highest-ROI investment because they already know the invisible context.
External Guardians (when you need to bring in talent): You're not hiring someone with YOUR tribal knowledge; you're hiring someone with:
Domain/industry expertise (they've seen the patterns in cloud, healthcare, finance, nonprofit, tech…)
The five Guardian competencies (they know how to BUILD a team and tribal knowledge quickly)
Transferable wisdom (they recognize which dynamics are universal vs company-specific)
A Guardian from another nonprofit can learn YOUR donors in 6 months. What they bring immediately is knowing how donor relationships work, how to read stakeholder dynamics, what risks to watch for, and how to build systems that capture institutional knowledge before it walks out the door.
The key: You're hiring the competencies and domain expertise. They'll build your company's tribal knowledge faster than someone without Guardian-level pattern recognition ever could.
Where to find Guardians:
Internal development (your best source): Look for people already building AI workflows without being asked, who mentor others informally, and who've owned initiatives end-to-end
Adjacent roles: Chief of Staff, Director of Operations, Revenue Operations, Strategy & Operations, Founders' offices…
The "returning" talent pool: Experienced professionals laid off in 2025's AI panic who are ready to work for organizations that value wisdom + tools
How to become a Guardian (if you're building this capability):
Pick 2-3 AI tools relevant to your work. Become a power user, not a casual dabbler. And NOT an "I am a prompt-writing expert." Build workflows, not just prompts. Track your time savings and quality improvements.
Document one lesson learned per month. Make your tribal knowledge transferable. Write the playbook that only exists in your head.
Mentor one junior person. Teach them both institutional knowledge AND how to validate AI outputs. Practice explaining "what the AI says vs what will actually work for you."
Own one AI initiative end-to-end. Not "participated in a working group." Own the outcome. Fail a couple of times. Build the scar tissue of what works and what doesn't.
Track your impact. Show where you caught AI errors before they caused problems, where you saved time while maintaining quality, and where you developed others who are now more effective.
Reality check. This is not an overnight-success solution. However, done right, it will FEEL like an overnight success. It takes 6-12 months of deliberate practice. You're not going to become a Guardian by taking a weekend course and cutting and pasting from AI. You're failing fast, building judgement + AI fluency + the ability to multiply impact through others.
Here's the good news. EY's research shows that 96% of organizations investing in AI are seeing productivity gains, but only 17% used those gains to reduce headcount.9 The majority (38%) are reinvesting in upskilling employees. The demand is there. Build the capability and you become the asset.
The 2026 Reality
The barbell curve I referenced in Part 1 will rebalance. Not because everyone suddenly agrees, but because the teams combining Guardians, junior professionals, and AI tools are going to outperform the extremes by margins too large to ignore.
Gartner predicts that by 2030, 75% of IT work will be done by humans augmented with AI, with 25% by AI alone and zero percent by humans without AI.10 The question isn't whether you will adopt AI. It's whether you have the Guardians to make it scale.
If you're a hiring manager: Use the five competencies as your interview framework. Ask the questions. Watch for the red flags. Hire competency, not tenure.
If you're building this capability yourself: Work the development path. Build workflows, mentor others, own initiatives end-to-end, and document your tribal knowledge.
If you're a leader building teams: Recognize your Guardians. They're often one level below where you think they are. Invest in making them AI-fluent. Let them mentor others. Give them time to build systems that scale. Recognize them for their drive and their value.
The companies that figure this out will pull so far ahead in 2026 that they will set the tone for all to follow.
Fail fast, my friends. This is how we grow.
Editorial Note: There are many people in my Substack community, see partial list in no particular order below, that I immediately think of when I hear the word "Guardian." Check them out if you want to see what domain experts sharing their learnings with others looks like. I'm grateful for them every day.
1. https://www.gartner.com/en/newsroom/press-releases/2024-07-29-gartner-predicts-30-percent-of-generative-ai-projects-will-be-abandoned-after-proof-of-concept-by-end-of-2025
2. https://hbr.org/2025/09/ai-generated-workslop-is-destroying-productivity
3. https://www.pwc.com/gx/en/news-room/press-releases/2025/pwc-2025-global-workforce-survey.html
4. Kotter, John P. "Leading Change." Harvard Business Review Press, 1996.
5. https://www.pwc.com/gx/en/news-room/press-releases/2025/ai-linked-to-a-fourfold-increase-in-productivity-growth.html
6. https://www.harvardbusiness.org/insight/learning-through-experimentation-why-hands-on-learning-is-key-to-building-an-ai-fluent-workforce/
7. https://www.gartner.com/en/newsroom/press-releases/2024-10-03-gartner-says-generative-ai-will-require-80-percent-of-engineering-workforce-to-upskill-through-2027
8. https://www.gartner.com/en/newsroom/press-releases/2025-12-16-gartner-hr-survey-finds-65-percent-of-employees-are-excited-to-use-ai-at-work
9. https://www.ey.com/en_us/newsroom/2025/12/ai-driven-productivity-is-fueling-reinvestment-over-workforce-reductions
10. https://www.gartner.com/en/newsroom/press-releases/2025-11-10-gartner-survey-finds-artificial-intelligence-will-touch-all-information-technology-work-by-2030


