1. Opening / Landscape
Imagine a young consultant in a bustling London office, fresh from university, tasked with analysing market trends for a major client. In early 2024, this might have involved hours poring over spreadsheets, cross-referencing reports, and drafting summaries. But by the end of 2025, tools like AI agents—simple systems that take a natural-language request, break it down, and loop through research and refinements—began handling the bulk of that work. IBM, for example, announced plans to phase out thousands of back-office roles, replacing them with AI that could process data and generate insights faster than any entry-level team. The consultant, instead of grinding through the details, found themselves reviewing outputs and tweaking directions. What once took days now happened in minutes. This, as you are probably aware, isn’t a distant future vision but a snapshot from real-world reporting by global research institutions such as McKinsey, which describe technology turning passive AI tools into something resembling virtual coworkers.
Old model: Humans were the intelligence. New model: Humans are the directors of intelligence.
This change points to a deeper transformation in how we work. For over a century, the industrial model shaped our approach to knowledge work. It started in the factories of the late 19th century, where Frederick Taylor’s principles of scientific management broke down complex tasks into simple, repeatable steps. A worker on an assembly line didn’t need to understand the entire car; they mastered one bolt, one weld, and did it efficiently. This created a system where productivity and finished output came from specialisation—dividing labour so each person became an expert in a narrow slice.
The education system mirrored this. Schools in the early phases of industrialisation, often called “factory schools,” were designed to produce reliable workers. Rows of desks, bells signalling shifts, standardised tests measuring output—these prepared people for a world where labour, and later knowledge, was accumulated slowly and applied predictably. Think of the classroom as a miniature production line: students learned facts, practised skills, and graduated ready to slot into roles. Universities extended this, turning out specialists in fields like accounting or engineering, where years of study built asymmetric expertise. A doctor knew more about medicine than a layperson; a lawyer held the keys to legal interpretation. This asymmetry of knowledge created value. Add in work experience over time, and it wasn’t easily replicated. It commanded respect and higher pay.
An everyday analogy helps solidify the concept. Picture a craftsman in a small workshop before the industrial revolution. They knew every step of making a chair—from selecting wood to carving joints. Their work was holistic, blending skill and intuition. Then came the factory: one worker cuts the wood, another assembles, a third finishes. Efficiency soared, but the individual lost oversight of the whole. Knowledge work followed suit. In offices, we became like those assembly-line workers—experts in our station, outputting reports, analyses, or decisions based on hard-won education and experience. A marketing specialist might spend years learning consumer behaviour patterns, applying them to campaigns. This model thrived because information and understanding were hard to access and synthesise. Libraries, tutors, mentors, and trial-and-error were the paths to mastery, and not everyone could afford the time required.
Yet this asymmetry, or scarcity, underpinned everything. In a small business, the owner might handle strategy while employees executed tasks. In larger firms, hierarchies emerged: executives directed from afar, relying on layers of specialised outputters to feed them data and information to steer the business to continued profitability. This held from the caretaker in a building, who knew the quirks of the heating system, right up to the CEO who understood market dynamics from decades of deals. Both relied on personal accumulation—theory plus lived experience—to produce optimal results. It was a simple flow: gather knowledge, apply it to tasks, generate value at every level.
But cracks appeared as technology advanced. Early computers automated calculations, then spreadsheets handled data crunching. Still, the core asymmetry held because tools were ‘dumb’—they needed human guidance. Now, in late 2025, we’re crossing a threshold. AI agents commoditise execution, pulling from vast datasets and reasoning loops to mimic specialised output. A system can now analyse market trends, draft legal briefs, or even predict maintenance needs with minimal input. The bottleneck isn’t producing intelligence anymore; it’s directing it wisely.
This isn’t just about efficiency gains—it’s about rethinking human roles. We’re starting to move upstream, from being the ones who output intelligence to those who direct its flow. The old industrial echoes—factory schools churning specialists, offices as production lines—are changing. What replaces them is a model where the most valuable people in the work environment frame the big questions, sense emerging patterns, and steer intelligent systems toward meaningful ends. This shift promises abundance but demands we adapt.
2. Why the Old Model Is Breaking
The old industrial model worked for a long time because knowledge felt solid and was hard to gain and share. A person built expertise through years of practice, and that edge drove their value. But now, with AI agents entering everyday work, this foundation is cracking. These emerging agents, however, are not mysterious robots with their own minds (I will caveat that with a ‘yet’). Instead, they are large language models wrapped in a loop. No independent will, just a smart executor reacting to human input. I will expand on this notion later in the essay.
Let’s take a practical case from recent shifts in consulting. Teams at places like McKinsey, as I mentioned earlier, have started using agentic setups to tackle complex jobs. What used to take a week of manual digging through old systems now happens in hours. An agent swarm pulls data, spots patterns, and suggests fixes or refined output, all while checking its own work. The human steps in only to guide the final tweaks. This is no longer theory; it shows up in reports from firms adapting to stay competitive. The result is faster, more cost-effective output, but also a quiet question: if agents handle the heavy lifting, where does that leave people?
The core issue, therefore, is that human value is moving upstream. In the old setup, humans were the outputters—gathering facts, applying experience, and producing results. An accountant balanced books based on rules learned over years; a teacher drew on classroom experience to shape lessons. This flow made sense when information moved slowly. But agents now commoditise this execution. They draw from vast pools of data, reason step by step, and deliver polished work without fatigue. Execution is no longer scarce, and no longer human-centric.
This, in my view, doesn’t mean we humans vanish from the picture. Instead, it’s a reweighting, not a full swap. People still need to input and output in spots—perhaps crafting a unique pitch or handling a sensitive conversation. But the balance inverts sharply: from 80% execution and 20% direction to the reverse.
Execution is now cheap. Direction is now expensive.
If we look back over history, we can see how we’ve been taught to see work as endurance—a steady grind, where putting in the hours builds character, security, and asymmetry. But this leaves us, in today’s world, as exhausted outputters, chasing promotions through sheer persistence rather than fresh insight. In offices worldwide, this shows in climbing burnout rates, with surveys from places like Gallup highlighting widespread disengagement. The emotional cost is unfortunately real: alienation from the bigger picture, a sense of being cogs in a machine, and quiet frustration when expertise feels undervalued. But as agents take on the drudgery, space opens to rethink work—not as endless toil, but as purposeful, reinvigorated direction.
Time spent grinding details shrinks, while framing the right path grows. Imagine a chef in a busy, growing kitchen. In the early stages, they chopped every vegetable themselves. Now, with helpers prepping ingredients, the chef focuses on tasting, adjusting flavours, and deciding the menu. The output still matters, but the real skill lies in steering the process.
An analogy from film also brings this home. Think of the actor versus the director. The actor executes brilliantly, delivering lines with emotion, hitting marks, bringing a scene to life. But the director decides the story’s arc, chooses the shots, and guides the cast toward a cohesive vision. Without the director, the actor’s talent scatters. In work, AI agents play the actor role: precise, tireless performers. Humans step into directing—framing questions the system can’t pose on its own, sensing broader patterns, and steering toward ends that align with real needs.
This upstream shift creates a new human edge. Directors of intelligence thrive by asking what others overlook: “What if we flip this assumption?” or “How does this connect to emerging trends?” They don’t hold all the answers, but they know how to unlock them through smart guidance of AI. In a small business, this might mean the owner spots a market shift and directs agents to test new ideas fast. In larger organisations, executives orchestrate swarms to explore new strategies that once took months or years.
The old asymmetry built on hoarded knowledge starts to fade, and what rises is the ability to direct flows of intelligence wisely. This shift opens glimpses of a different future. Work could become less about rote output and more about meaningful oversight. A teacher might direct agents to personalise one-to-one lessons, freeing up more time for mentoring. The promise is abundance and higher-quality output: more done with less drudgery. But it demands we build new skills. Which skills? The next step, then, is understanding them through a clear framework.
3. The Intelligence Director Framework
To navigate this upstream shift, we need a clear way to think about the skills involved. In the following framework I outline the practical elements of becoming a director of intelligence. It draws from patterns already emerging in workplaces, blending them with forward-looking steps anyone can use. But I want to start by clarifying one key term to avoid confusion.
3.1 What We Mean by “Agent”
In everyday talk about AI, the word “agent” often conjures images of independent helpers, like little digital assistants with their own plans. We anthropomorphise them into digital versions of ourselves. But that’s not accurate here. An AI agent is simpler: it’s a large language model set up in a loop. You give it a goal in plain words, and it breaks the task into steps. It uses tools—such as searching the web, running code, or pulling from databases—to move forward. It then checks its own work, adjusts if needed, and keeps going until the job is done or it hits a specified limit. At the end, it hands back results you can audit and verify.
Think of asking an agent to plan a weekend trip. You say, “Find flights to Edinburgh for two, plus a cosy hotel near the castle. Here is my budget, and here are some other specifications.” It searches options, compares prices, checks reviews, and suggests a few packages. If something doesn’t fit your specifications—say, a hotel over budget—it loops back to refine. No hidden motives: it’s just reacting to your input. This simple setup powers the changes we’re seeing, from handling emails to drafting reports. By keeping this definition straightforward, we see agents as tools for execution, not replacements for human oversight.
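To make that loop concrete, here is a minimal sketch in Python. It is an illustration, not a production agent: `llm()` is a hypothetical stand-in for any model API, and the two tools are stubs you would swap for real search or code execution.

```python
# A minimal agent loop: the model plans a step, acts through a tool,
# records the result, and repeats until it reports done or hits a limit.

def search_web(query: str) -> str:
    return f"(stub) top results for: {query}"

def run_code(snippet: str) -> str:
    return "(stub) output of running the snippet"

TOOLS = {"search_web": search_web, "run_code": run_code}

def llm(prompt: str) -> str:
    # Stand-in for a real model call; replace with an actual API client.
    return "DONE (stub answer) replace llm() with a real model call"

def run_agent(goal: str, max_steps: int = 10) -> str:
    history = [f"GOAL: {goal}"]
    for _ in range(max_steps):  # the 'specified limit' from the text
        reply = llm(
            "You are an agent working toward the goal below.\n"
            + "\n".join(history)
            + "\nReply 'TOOL <name> <input>' to act, or 'DONE <answer>' to finish."
        )
        if reply.startswith("DONE"):
            return reply[5:]  # hand back a result the human can audit
        _, name, arg = reply.split(" ", 2)   # parse the tool request
        result = TOOLS[name](arg)            # act: run the chosen tool
        history.append(f"{name}({arg}) -> {result}")  # reflect and reassess
    return "Step limit hit; partial work:\n" + "\n".join(history)

print(run_agent("Plan a weekend trip to Edinburgh for two, on a budget."))
```

Stripped down this far, the point is easier to see: there is no hidden will in the loop, only a model repeatedly asked “what next?” until it reports done or exhausts its step budget.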
Agents are not coworkers; they are tireless interns who never sleep, never complain, and never have original ideas.
With this base, we can explore the skills that let people direct agents effectively, creating a new asymmetry of skill relative to our fellow humans.
3.2 Rapid Domain Distillation
One core skill is pulling a new field of knowledge into focus quickly. Rapid domain distillation means stepping into an unfamiliar area and boiling down its key ideas into a single, workable framework. This isn’t about skimming surface details; it’s about spotting the foundational ideas—the first principles, landmark studies, or core models—that explain most of what matters. You aim for an 80–90% grasp in weeks, not years, so you can act effectively without the years of effort that full expertise demands.
How does this work in practice? It’s about seeking out the seminal works: the books, papers, or talks that shaped the field. For instance, if you’re diving into modern finance, you might distil the core economic models used to map today’s financial landscape and grab Warren Buffett’s key texts on value investing. Pull out the essentials—like how markets price risk or the role of compounding—and map them into a lens. Perhaps draw a simple mind map. Test it against a real problem, like evaluating a stock, and tweak as you go. This process turns chaos into usable clarity. It isn’t about summarising gold-standard thinking and applying it blindly; it’s about operating at fifty thousand feet and seeing the bigger patterns at play straight away, without years of effort digging around in the weeds.
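As a hedged sketch of putting an agent to work on this process, the snippet below reuses the hypothetical `llm()` helper from the earlier loop. The `distil_domain` function and its prompts are illustrative assumptions, but they follow the sequence just described: seminal sources, first principles, a compressed lens, then a test against a real problem.

```python
# Agent-assisted distillation: seminal sources -> first principles ->
# a compressed decision "lens" -> a test that exposes where the lens breaks.
# llm() is the same hypothetical model helper as in the agent-loop sketch.

def distil_domain(domain: str, test_problem: str) -> str:
    sources = llm(
        f"List the five most seminal books or papers in {domain}, "
        "one line each, with a note on why each shaped the field."
    )
    principles = llm(
        f"From these sources:\n{sources}\n"
        f"Extract the first principles of {domain} as short bullets."
    )
    lens = llm(
        f"Compress these principles into a one-page decision framework "
        f"(a 'lens') for {domain}:\n{principles}"
    )
    critique = llm(
        f"Apply this lens to: {test_problem}\n{lens}\n"
        "Where does the lens break down?"
    )
    # Keep the gaps visible; the human refines the lens, not the agent.
    return lens + "\n\nKNOWN GAPS:\n" + critique

# e.g. distil_domain("value investing", "Is this retailer's stock cheap?")
```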
Another everyday analogy: think of arriving in a new city without a map. You could wander aimlessly, but instead, you chat with a local who knows the city inside out. You note the main landmarks and sketch the rough layout they give you: the river here, the market there. Soon, you can navigate without getting lost.
Depth is overrated when velocity is everything. 80% grasp in weeks beats 100% in years.
In the new workflow, this skill lets you pivot fast and act at speed. A customer manager in retail adopting this approach might distil supply-chain basics in a fortnight and spot bottlenecks agents can then optimise. Looking ahead, as fields evolve quicker, this distillation becomes essential. It frees you to connect ideas across domains, turning breadth into an edge.
3.3 Flow Sensing
The next skill set complements distillation. As frictions to production and trade dissolve via AI and technology, the velocity of the economy speeds up, and it becomes increasingly important to sense changing flows and trends early enough to know where to focus effort. This means watching early changes in cultural buzz, money movement, attention shifts, and talent draws. It’s about looking for signs that a domain is heating up or cooling off before it’s obvious. It’s not about following crowds; it’s about detecting the quiet pulls that signal real change.
In day-to-day terms for an individual, it’s about tracking subtle clues. Niche podcasts might hint at rising ideas, or you notice funds pouring into a sector through small announcements. Talent flows show when experts switch jobs en masse. For example, in 2024, early whispers about agentic tools appeared in developer forums and in quiet hires at tech firms. By mid-2025, they were mainstream. Someone sensing that flow could prepare, directing agents to experiment ahead of competitors.
A simple analogy: imagine fishing in a river. You don’t cast blindly; you read the water’s surface—the ripples, eddies, where leaves gather—to spot where fish might hide. In work, this skill guides choices. A small business owner senses attention shifting to sustainable products through social chatter and invests early. Over time, it builds intuition for what’s next. As AI handles routine scans, humans excel here by blending gut feel with patterns. This points to a future where staying relevant means tuning into these flows, ensuring your efforts land in fertile spots.
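For readers who want to make this habit systematic, here is a toy sketch of the idea: a few hand-scored signal channels rolled into one crude “heat” score per domain. The channels, weights, and example scores are illustrative assumptions, not a validated model; the value is in tracking the same quiet clues consistently over time.

```python
# Toy "flow sensing" tracker: hand-scored signal channels per domain,
# combined into a crude heat score. Weights and scores are illustrative
# assumptions; consistency over time matters more than exact numbers.

from dataclasses import dataclass

@dataclass
class Signals:
    cultural_buzz: float  # 0-1: niche podcasts, forums, conference chatter
    money_flow: float     # 0-1: funding announcements, budget shifts
    talent_flow: float    # 0-1: experts switching jobs into the field
    attention: float      # 0-1: search interest, mainstream coverage

def heat(s: Signals) -> float:
    # Weight the quieter leading indicators (talent, money) above loud ones.
    return (0.20 * s.cultural_buzz + 0.30 * s.money_flow
            + 0.35 * s.talent_flow + 0.15 * s.attention)

domains = {
    "agentic tools, mid-2024": Signals(0.6, 0.5, 0.7, 0.2),
    "generic gadget trend":    Signals(0.8, 0.2, 0.1, 0.9),
}
for name, s in domains.items():
    print(f"{name}: heat={heat(s):.2f}")
```

Note how the weighting deliberately favours the quieter channels: in this toy scoring, the 2024 agentic-tools pattern outranks a loud but shallow gadget trend.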
3.4 Strategic Direction (The CEO-Level Superpower)
At the heart of directing intelligence lies strategic direction—a blend of vision, judgement, sharp questions, orchestration, and courage. This is the CEO-like ability to set intent and guide systems toward valuable ends. Vision and judgement mean knowing what’s worth chasing: not just efficient outcomes, but ones that feel right, ethical, and bold. You weigh what matters beyond numbers.
Question-framing sharpens this. You ask things agents overlook: “What hidden assumption are we missing?” or “How might this fail spectacularly?” These prompts unlock deeper insights. Orchestration follows: you design agent swarms, set boundaries, and step in for tricky spots, evolving the setup as results come. Courage ties it together—committing to uncertain paths with real stakes.
Judgement is the new scarcity. Anyone can prompt an agent. Few can tell it when it’s wrong.
Think of a founder rethinking their business. They ask, ‘What if we invert our model—customers pay us to save them time instead?’ Agents explore options, model risks, and iterate. The founder steers, blending outputs with instinct. Or picture a conductor leading an orchestra. They select the piece (vision), rehearse the musicians to peak performance (questions and boundaries), direct the flow in real time (orchestration), and hold the ensemble through uncertainty (courage). In teams, this skill set elevates everyone; looking forward, it points to a world where humans ignite innovation, leaving rote execution to tools.
3.5 The New Workflow Archetype
Putting it together, the workflow looks like this: a human sets high-level intent, the agent swarm executes and reflects, and the human steers refinements in a loop. Start with a clear goal, like “Streamline our inventory to cut waste by 20%.” Agents break it down—analysing data, suggesting changes, testing scenarios. You review, adjust for overlooked factors, and repeat.
In supply-chain work, agents forecast demand, flag issues, and propose fixes; the director intervenes with informed, refined choices. This creates efficient cycles, blending human oversight with machine speed. It hints at futures where work feels more creative, less mechanical.
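A minimal sketch of that loop, again assuming the hypothetical `llm()` helper from earlier (a real setup would fan the execution step out to an agent swarm):

```python
# The workflow archetype as a loop: human sets intent, the agent side
# executes and self-critiques, and the human steers between cycles.

def direct(intent: str, max_cycles: int = 5) -> str:
    work, feedback = "", ""
    for cycle in range(max_cycles):
        # Execute: draft against the intent, folding in prior steering.
        work = llm(
            f"INTENT: {intent}\nDIRECTOR FEEDBACK: {feedback}\n"
            "Produce the next draft."
        )
        # Reflect: the agent critiques its own draft before the human sees it.
        reflection = llm(f"Critique this draft against the intent:\n{work}")
        print(f"--- cycle {cycle} ---\n{work}\n[self-critique]\n{reflection}")
        # Steer: the director, not the loop, decides when the work is done.
        feedback = input("Adjustments (press Enter to accept): ")
        if not feedback:
            return work
    return work

# e.g. direct("Streamline our inventory to cut waste by 20%.")
```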
3.6 Qualifications Shift
In the old model, qualifications were straightforward: degrees signalled formal knowledge, and work experience proved practical expertise, creating a ladder anyone could theoretically climb. But as AI commoditises output, these markers lose their punch. A computer science degree, once a golden ticket, now competes with open portfolios like GitHub repos, where real-world demos outshine paper credentials. Everyday evidence shows this: job postings increasingly demand AI literacy over years of service, with AI bootcamps surging 300% in enrolment to teach orchestration skills. We could frame it this way: education as credential factories (fading) versus agency accelerators (rising), where what you build trumps what you studied.
Yet as the weighting shifts toward directors of intelligence, new qualifications could emerge around meta-skills—but with deep fragilities. Judgement and courage aren’t evenly distributed; they tend to favour those with access. Networks provide insider access and safe testing grounds, like VC circles sharing early agent tools to learn with. Time lets the privileged experiment without survival pressures, while others grind low-wage jobs with no buffer for distillation practice. Research underscores this: upskilling benefits the wealthy first, widening divides where the bottom 60–70% stay curators.
The new hierarchy isn’t fair. It’s meritocratic in theory, aristocratic in practice.
This isn’t fair play; it’s a new hierarchy, and potentially a more entrenched one. We can, however, still navigate it, and we can act now, while most of the population is not yet thinking this way. We can focus on using AI to build mental frameworks across multiple domains, build agentic demos, and create public portfolios of directed projects. But recognising that the race isn’t level or fair is not only useful but essential.
With these pieces, we can see a path forward. But no model is flawless, and next we will examine its weak spots.
4. Risks & Fragilities
No framework stands without flaws, and this one is no exception. These risks are built-in checks—points to watch closely so the model stays practical. They stem from patterns already showing in early adopters, mixed with likely pitfalls ahead. Addressing them keeps the approach grounded.
First, over-reliance on single frameworks can create blind spots. When you distil a domain into one lens, it sharpens focus but might hide other angles, and over time this can lead to rigid thinking—the antithesis of the new workflow. The fix? Hold multiple views lightly, switching as needed.
Flow sensing risks turning into hype-chasing. Early signals can feel exciting, but crowds often follow noise. Think of someone spotting buzz around a new gadget trend, only to find it’s fleeting. Everyday evidence: social media swells with ideas that fade fast. To counter, test signals against quiet data, like small-scale trials, and stay open to contrarian views.
Distillation can slip into shallow summaries if not done ruthlessly. Grabbing top search results feels quick but misses depth. A marketer distilling consumer behaviour might end up with generic tips, overlooking nuanced shifts. This weakens decisions. Push for core principles, always testing against real tasks.
The emerging pyramid of value hints at extreme inequality. Elite directors thrive, but most might become curators, reviewing outputs without much say. In a team, a few steer while others tweak—widening gaps. Speculating further, this could strain societies, with laggards struggling. Organisations might lose balance if too few hold the reins.
Loss of tacit knowledge is another concern. If juniors lean on agents for basics, they skip building intuition. A mechanic directing diagnostics might forget hands-on fixes when tech fails. Over years, this erodes resilience.
Finally, over-trusting the agency illusion invites trouble. Agents seem capable, but they’re loops reacting to inputs. Abdicating oversight—letting them run unchecked—risks errors in high-stakes spots. A pilot relying fully on autopilot forgets manual skills during turbulence. Responsibility stays human.
These fragilities aren’t deal-breakers; they’re guides to use the model wisely. Spotting them early keeps the shift productive. Still, the path forward varies.
5. Conclusion
This framework offers a way to think about work as it changes, but the shift won’t happen evenly. Some fields are already moving fast. In finance, agents handle routine trades and risk checks, letting directors focus on big-picture strategies. Consulting sees similar patterns, with swarms drafting reports while humans steer client needs. Software development races ahead too—coders direct agents to build prototypes, cutting time from weeks to days. These areas adapt quickly because data flows freely and tasks break down easily.
Other domains lag. Medicine demands hands-on trust; agents might analyse scans, but doctors still make calls involving lives. Creative arts resist full automation—agents generate ideas, but human judgement shapes the final touch. High-stakes policy work, like government decisions, moves slowest, tangled in ethics and accountability. Think of a river splitting into streams. Some rush forward, carving new paths; others meander, held back by the terrain. The transition mirrors that—varied pace, shaped by each field’s nature.
This unevenness means no one-size-fits-all timeline. A small business might experiment with agents for inventory, while a hospital tests them cautiously for records. Over years, though, the direction points upstream: more directing, less grinding output. It’s a path toward work that feels purposeful, with humans at the helm of intelligent tools.
This isn’t a finished blueprint. It’s a direction of travel, drawn from today’s glimpses and logical steps ahead. The old model served its time, but as agents commoditise execution, the real edge lies in skills like distillation, sensing, and strategic oversight. The race is on—not to outrun the tools, but to guide them wisely.
The industrial era rewarded outputters. The intelligence era rewards directors. Choose your side.
Start small: pick one domain to distil into a framework, watch a flow in your field, or frame a bold question for an agent to explore. These steps build the muscle. In the end, the future favours those who direct the flow, turning abundance into action.
References
- Brynjolfsson, E., Li, D., & Raymond, L. R. (2023). Generative AI at work (NBER Working Paper No. 31161). National Bureau of Economic Research. https://www.nber.org/papers/w31161. Supports the upstream shift and productivity gains from AI agents in knowledge work (Sections 1 & 2).
- Davenport, T. H., & Kirby, J. (2016). Only humans need apply: Winners and losers in the age of smart machines. Harper Business. Early articulation of humans moving from execution to oversight roles (Section 2).
- IBM. (2023). IBM announces plans to replace thousands of back-office roles with AI. IBM Newsroom. https://newsroom.ibm.com/2023-05-15-IBM-Announces-Plans-to-Replace-Thousands-of-Back-Office-Roles-with-AI. Direct evidence for the IBM back-office AI replacement example in the opening (Section 1).
- McKinsey & Company. (2024). The economic potential of generative AI: The next productivity frontier. https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/the-economic-potential-of-generative-ai-the-next-productivity-frontier. Key source for agentic AI adoption in consulting and productivity shifts (Sections 1 & 2).
- McKinsey & Company. (2025). Agentic AI: The next frontier in enterprise productivity. https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/agentic-ai-the-next-frontier-in-enterprise-productivity. Supports the real-world adoption of agent swarms in consulting and other fields (Section 2).
- Taylor, F. W. (1911). The principles of scientific management. Harper & Brothers. https://archive.org/details/principlesofscie00tayl (public domain scan). Seminal work on scientific management and task specialisation that underpins the industrial model critique (Section 1).
- Autor, D. H. (2015). Why are there still so many jobs? The history and future of workplace automation. Journal of Economic Perspectives, 29(3), 3–30. https://doi.org/10.1257/jep.29.3.3. Provides historical context on automation waves and why human judgment remains scarce (Sections 1 & 2).
- Gallup. (2024). State of the global workplace: 2024 report. https://www.gallup.com/workplace/349484/state-of-the-global-workplace.aspx. Data on rising burnout and disengagement in knowledge work (Section 2).
- World Economic Forum. (2025). Future of jobs report 2025. https://www.weforum.org/publications/the-future-of-jobs-report-2025/. Evidence for the shift toward AI literacy and orchestration skills over traditional credentials (Section 3.6).
- Brynjolfsson, E., & McAfee, A. (2014). The second machine age: Work, progress, and prosperity in a time of brilliant technologies. W. W. Norton & Company. Foundational text on the commoditization of cognitive tasks and the rise of human direction (throughout, especially Sections 1–3).

