For Impact

Blog

The Leap to Agentic AI

WOW Email | Nick Fellers


In December, AI crossed a threshold into the fully agentic era, and I've been spending a lot of time unpacking what that means with social impact leaders. This threshold exceeds all the other 'oh wow' moments of the past few years combined.

Until you work with a fully agentic system, it's hard to understand what it all means in practice. But if you don't understand the era into which we've stepped, you're going to make outdated financial decisions and outdated staffing decisions, and you'll experience jarring lurches as the workforce and systems shift around you.

To put it metaphorically: before December, most AI was an adolescent genie that could grant a limited set of wishes. It usually granted them well, but sometimes it did not. In December, the genie grew up AND it figured out how to grant the wish of unlimited wishes.

As of today, most of this capability is only usable by tech-savvy individuals and teams (using tools like Claude Code, Codex, Cursor, etc.). Claude Cowork and Manus.im are leading the push to make this power accessible to everyone — and yesterday, Microsoft announced Cowork is coming to Copilot.

When you use fully agentic AI, you are limited only by your creativity, clarity, and access to data. I use this slide to illustrate what it looks like when you connect agentic AI to your data.


The genie just decided to go places I didn’t ask it to go and it paved a path to get there.

A world where AI can build entire systems is here today. Whether or not you 'go agentic' this month, the proximity of these capabilities should factor into your planning, your systems, and your questions.

Three shifts in thinking I keep coming back to:

  1. It reshapes how you define work and systems.

    In the hands of someone with a little experience, it’s going to be normal to ask: What if we had the AI read our SOPs and automate work using tools we already have? What if we recorded every training and built an AI coach from it? What if we asked it to build a CRM around our actual needs?

    Our AI services team is building systems like this every day. They’re not turnkey yet, but they’re here — and once you understand that, it unlocks real possibilities.

    I’m not advocating dramatic process changes today. But this gives every leader a lens for looking forward.

  2. It reshapes how you build your team.

    The implications start to reorganize how your team works — fast.

Look at any job description and some portion of it can be handled by AI. You don't need to rethink your org chart on a Friday, but I'm raising this constantly with leaders who are figuring out how to build a team, hire for a new position, or write a job description.

    “Here’s how to think about that role with the expectation of agentic AI systems over the next 6/12/18 months…”

    Here’s what makes me optimistic: After the initial investment in building systems, leaders and managers have more time for the human part of team building. Data and metrics are aggregated in real time, which accelerates integration, alignment, and action. And when you streamline the systems work, teams have more conversations with better prospects.

  3. It demands constraints.

    Agentic AI is creative. It finds paths and builds solutions you and I wouldn’t conceive. That’s largely great — except that if we can’t anticipate the path, who’s to say the AI won’t build one we wouldn’t want?

    While my slide example is benign, what’s to stop an agent from cross-referencing private data?

    This isn’t a reason to be afraid. It’s a reason to be proactive. On our team we handle it with training, human-in-the-loop checkpoints (meaning a human reviews before the AI acts), and writing guardrails directly into the process. The good news: AI is actually very good at following rules, so long as you take the time to put them in place.

    Agentic AI is an instrument. In the hands of someone who understands where it needs boundaries, it becomes an instrument of exponential good. Without constraints, it becomes an instrument of unintended consequences. Guardrails aren’t a limitation on the genie. They’re what make the genie safe to use.

This isn’t someday. The genie is here now. If you’re not already, start looking at your systems, your roles, and your decisions through a post-November 2025 lens. Mastery isn’t the goal — I’m not sure it’s even possible. The lens shift is the point.

N.B. Last week the Pentagon and Anthropic had a very public showdown. The Department of Defense asked Anthropic to strip the guardrails and human-in-the-loop requirements from AI involving lethal force. Anthropic refused. I wrote about that separately on LinkedIn. But the connection to everything above is simple: I don’t think someone with awareness of how agentic AI actually works would have made that request in the first place.