Will Digital Analysts Become AI Agent Managers?
On Saturday 15th November 2025, I had the pleasure of leading a session at MeasureCamp Brussels on a topic that’s becoming central in the Digital Analytics community.
As AI agents begin to take over an increasing share of analytical tasks, the role of digital analysts is evolving fast. Instead of focusing solely on data collection and analysis, we may soon find ourselves managing the agents that do this work for us.
This raises an exciting question: What new skills will we need to guide, evaluate, and even “coach” our AI-powered digital coworkers?
This session builds on the conversations I launched at MeasureCamp Paris (June 2025) with “Is Generative AI the End of the Digital Analyst?” (https://lnkd.in/eTWmC6cf) and continued at MeasureCamp London (September 2025) with “Superpowers for Digital Analysts: Can Generative AI Really Deliver?” (https://lnkd.in/eKfASgeJ).
I opened the MeasureCamp Brussels session with two quick surveys to gauge where the community stands today:
Are you currently using AI Agents to automate tasks and workflows for Digital Analytics & Optimization?
- Yes: 25%
- No: 75%
Will Digital Analysts become AI Agent Managers?
- Yes: 74%
- No: 26%
This immediately revealed a tension: most analysts believe their role is about to transform, yet very few have taken the first steps into automation. As someone in the room said right away, “We all know where this is going — but most of us haven’t started the journey.”
Which Tools and Workflows Are We Actually Using Today?
When I asked participants what they currently automate or plan to automate with AI agents, the list was impressively diverse.
Tools Already in Use
- MCPs (Model Context Protocol servers)
- Gemini + BigQuery
- ChatGPT integrated into analytics workflows
- Amplitude AI features
- Cloud-native AI assistants
- n8n for workflow automation
- Copilot
Tasks & Workflows Being Automated
- Documentation & lineage
- Analysis summaries & key insights extraction
- Data quality monitoring
- Code generation & automated reporting
- UTM governance & campaign tracking
- Funnel diagnosis
- Statistical analysis
- GTM container duplication
- Code documentation & review
- Next Best Action recommendations (e.g., Piano Analytics)
Yet even with all these tools, one participant put it bluntly: “We automate bits and pieces, but nobody is running a fully automated analytics pipeline yet.”
What’s Blocking Full AI-Agent Automation?
This was the most passionate part of the session, with dozens of contributions. Below is the consolidated view — enriched with several quotes captured in the room.
1. Technical Blockers
Many shared that “AI is powerful, but unreliable — you need a second agent to check the first.” Common challenges included:
- Unstable uptime
- Model updates that unexpectedly change behavior
- Hallucinations and inconsistent outputs
- Poor integration between tools
- Lack of contextual understanding (weather, PR events, strikes, influencers)
- Uncertainty around tool selection
- Complex data ownership and lineage gaps
Another participant added: “Tools still live in silos — they don’t talk to each other well enough to automate anything end-to-end.”
2. Data, Security & Privacy Blockers
The strongest blockers came from security and compliance concerns.
- Data privacy uncertainty
- IT security restrictions
- Fear of data exposure
- Compliance constraints (GDPR, enterprise governance)
- Potential data poisoning
- Black-box uncertainty
One attendee explained the discomfort clearly: “Even when vendors swear the data is shielded, none of us are completely sure.”
3. Organizational & Cultural Blockers
This is where many nodded in agreement.
- Lack of trust
- Fear of losing control
- Limited resources (time, money, knowledge)
- Lack of internal readiness
- Slow or unclear processes
- No time to experiment
- Low maturity in data and workflow practices
Someone summarized it powerfully: “Everyone wants to use AI, but nobody has the time to learn how.”
4. Strategic Blockers
- Hard to know where to start
- Uncertainty about which tools to invest in
- Organizational implications (budgets, roles, headcount)
- Hype vs. real value
- Fear of fully automated decision-making
One comment captured the root issue: “If you automate everything, your entire operating model changes — and that scares organizations.”
5. Human Oversight Blockers
Despite increasing automation, humans remain critical.
- Need to validate accuracy
- Analysts must still understand fundamentals
- Weak agent-to-agent verification
- Unclear accountability
- Lack of oversight frameworks
A participant expressed the paradox perfectly: “AI saves time only after you spend a huge amount of time learning to use it.”
What Skills Will We Need to Guide, Evaluate, and “Coach” Our Digital Coworkers?
This question sparked one of the richest conversations of the day.
Participants agreed that analysts will evolve from operators to orchestrators, requiring a new blend of technical, analytical, and human skills.
1. Core Cognitive Skills
These are the skills no AI can replace:
- Critical thinking
- Creativity & out-of-the-box problem solving
- Troubleshooting
- Delegation (assigning tasks to agents)
- Defining clear business questions
- Understanding causation vs. correlation
Someone summarized this perfectly: “Critical thinking is the number one skill — everything else builds on that.”
2. AI Awareness & Functional Understanding
To supervise AI, analysts must understand how it behaves.
- Knowing how models operate & fail
- Understanding when outputs change and why
- Teaching context to AI agents
- Evaluating hallucinations
- Navigating security & compliance
- Knowing risks such as data poisoning
As one participant put it: “To evaluate AI, you must understand how it thinks — and why it sometimes thinks wrong.”
3. Prompting as a Professional Skill
Despite rumors that prompting is becoming obsolete, the room disagreed.
- Prompt engineering
- Writing modular, structured prompts
- Creating reusable prompt libraries
- Embedding SQL and analytical instructions
- Designing prompts that teach workflows
- User-centered prompting techniques
A participant captured this sentiment: “Prompt engineering isn’t dead — it’s becoming the new analytics literacy.”
4. Deep Domain Expertise
AI cannot replace domain judgment.
- Digital analytics fundamentals
- Marketing, product & business understanding
- Data sourcing & limitations
- Contextual interpretation
As someone said: “You still need to know how things should work. AI won’t tell you when it’s wrong.”
5. Workflow & System Thinking
Managing agents means designing systems.
- Workflow engineering
- Translating human processes into AI workflows
- Documenting logic and decisions
- Understanding data lineage
- Designing orchestrations, not isolated prompts
6. Communication & Storytelling
Even with automation, humans still make decisions.
- Data storytelling
- Simplifying complex AI outputs
- Documenting reasoning
- Acting as the “management consultant” of AI insights
What Should We Do on Monday Morning?
I closed the session with a practical question: “What should we actually start doing next week?”
Here’s the consolidated roadmap — enriched with participants’ own words.
1. Make Space for Learning
The strongest message: “Block time. Schedule learning. If we don’t make time, we’ll never catch up.”
- Dedicate weekly time for AI experimentation
- Encourage internal knowledge sharing
- Normalize trial, error, and iteration
- Promote awareness of AI risks & opportunities
2. Strengthen Governance & Compliance
AI adoption must be responsible.
- Explore ISO 42001
- Clarify internal risk & compliance expectations
- Understand data privacy boundaries
- Document governance rules
As someone said: “Compliance isn’t a blocker — it’s the guardrail we need.”
3. Build Prompting Skills
Prompts are now the interface between humans and agents.
- Train teams in prompt engineering
- Build prompt libraries
- Practice structured prompts
- Use prompts with SQL, validation, and reasoning
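To make "reusable prompt libraries" concrete, here is a minimal sketch of what one could look like in practice. The template names, placeholders, and rules are hypothetical illustrations, not a standard — the point is simply that prompts become named, versionable assets with explicit parameters instead of ad-hoc copy-paste.

```python
from string import Template

# A tiny, hypothetical prompt "library": named, reusable templates
# with explicit placeholders instead of one-off copy-pasted prompts.
PROMPT_LIBRARY = {
    "funnel_diagnosis": Template(
        "You are a digital analytics assistant.\n"
        "Task: diagnose the $funnel_name funnel.\n"
        "Data (JSON): $data\n"
        "Rules: cite only the data provided; flag anything you cannot verify."
    ),
    "sql_review": Template(
        "Review this SQL for correctness and cost:\n$sql\n"
        "Return issues as a numbered list."
    ),
}

def build_prompt(name: str, **params: str) -> str:
    """Fill a named template; substitute() fails loudly if a placeholder is missing."""
    return PROMPT_LIBRARY[name].substitute(**params)

prompt = build_prompt(
    "funnel_diagnosis",
    funnel_name="checkout",
    data='{"visits": 1000, "purchases": 70}',
)
print(prompt)
```

Because `substitute()` raises a `KeyError` on any missing parameter, a broken prompt fails at build time rather than silently producing an incomplete instruction for the agent.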
4. Develop Workflow Thinking & Automation Skills
Move from prompts to full workflows.
- Learn workflow optimization
- Design agent-based automations
- Start automating repetitive tasks using MCPs, n8n, and cloud functions
- Think like an orchestrator, not an operator
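One way to picture "thinking like an orchestrator" is translating a human reporting routine into an ordered, documented pipeline of steps. The sketch below uses hypothetical stand-in functions (a real version would call BigQuery, an LLM, etc.); what matters is that each step is named, the order is explicit, and the run leaves a lineage log behind.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Step:
    name: str                      # named so the workflow is documented and auditable
    run: Callable[[dict], dict]

def pull_data(ctx: dict) -> dict:
    # Stand-in for a real data pull (e.g. a warehouse query).
    ctx["rows"] = [{"campaign": "summer", "visits": 120}]
    return ctx

def check_quality(ctx: dict) -> dict:
    ctx["valid"] = all(r["visits"] >= 0 for r in ctx["rows"])
    return ctx

def summarise(ctx: dict) -> dict:
    ctx["summary"] = f"{len(ctx['rows'])} campaign rows, quality ok: {ctx['valid']}"
    return ctx

PIPELINE = [
    Step("pull_data", pull_data),
    Step("check_quality", check_quality),
    Step("summarise", summarise),
]

def orchestrate(pipeline: list[Step]) -> dict:
    ctx: dict = {"log": []}
    for step in pipeline:
        ctx = step.run(ctx)
        ctx["log"].append(step.name)   # lineage: record what ran, in order
    return ctx

result = orchestrate(PIPELINE)
print(result["summary"])
```

The operator mindset is writing `summarise()` by hand every Monday; the orchestrator mindset is owning `PIPELINE` and deciding which steps an agent may run unattended.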
5. Understand How AI Really Works
Not just the ChatGPT UI — the underlying mechanics.
- Learn how LLMs generate outputs
- Understand hallucinations & context windows
- Study agent-to-agent coordination
- Identify predictable failure points
One participant noted: “If you don’t know how AI works, you can’t manage it — only consume it.”
6. Start Using MCPs and Agent Infrastructure
Hands-on experience matters.
- Learn how to install an MCP server
- Connect internal tools (Slack, Confluence, APIs)
- Experiment with multi-agent workflows
- Test validation loops between agents
7. Choose Tools Intentionally
Don’t try everything — start with the right things.
- Map available tools
- Identify gaps
- Pick tools for documentation, lineage, quality, reporting, and workflows
- Avoid tool bloat
8. Decide What to Automate First
Start with high-friction tasks:
- Repetitive reporting
- Tag & data quality checks
- UTM governance
- Funnel diagnostics
- Documentation
- Experiment summaries
As one participant said: “If the task bores you, an agent should probably do it.”
Conclusion: The Analyst Role Isn’t Dying — It’s Evolving
Our surveys say it all:
- Only 25% use AI agents today.
- But 74% believe analysts will become AI Agent Managers.
AI agents won’t replace us — but they will reshape our profession.
We will move from doing the work to managing the agents that do the work.
MeasureCamp Brussels showed what makes this community so special: we learn, experiment, and evolve — together.
A big thank you to everyone who joined and openly shared their knowledge and experiences!
