SAMPLE REPORT — This is a fictional team report for demonstration purposes.
Executive Summary
Meridian Financial Group's leadership team scores in the Responsive band (8.5/16), placing the organization in the lower range of structured readiness for exponential change. With 15 of 15 team members responding, this is a comprehensive snapshot of your leadership alignment on AI and technology readiness.

The team's strongest pillar is Watch (2.0), with Verify and Adapt (both 1.9) and Empower (1.8) close behind. However, the real story is not the averages — it's the perception gap on Watch, where scores range from 1 to 4. Your Technology & IT team sees a fundamentally different signal landscape than Finance and Operations. This misalignment means strategic decisions about technology investment are being made without shared situational awareness.

For a financial services organization operating under APRA's CPS 230 operational resilience requirements and facing the rise of AI-driven neobanks, this gap is not academic. Competitors with aligned leadership teams are making faster, more coherent bets on AI integration. Every month this perception gap persists, Meridian falls further behind organizations that have achieved internal consensus on what's coming and how to respond.
Priority Gaps
1. Perception Gap on Adapt: senior leaders rate agility 40% higher than mid-level managers; strategy isn't reaching execution.
2. Empower Variance: innovation capacity is concentrated in 3 of 5 departments, and risk and compliance teams score 1.4 on Empower.
3. AGI Workforce Preparedness: zero team members scored above 2 on workforce displacement readiness.
Score Breakdown
Watch 2.0 · Adapt 1.9 · Verify 1.9 · Empower 1.8

Watch leads at 2.0, with all other pillars below 2.0. The perception gap on Watch between Technology (3.3) and Operations (1.5) is the critical finding.
Band Distribution

How your team members are distributed across the four maturity bands based on their overall WAVE score. A healthy team clusters in the upper bands; wide spread indicates misalignment.

Reactive: 2 · Responsive: 4 · Proactive: 3 · Architect: 1
Maturity Scale

The WAVE framework scores each pillar from 1 to 4. These bands define what each level means in practice — from ad hoc responses to technology as a strategic advantage.

1 — Reactive
Ad hoc, no system
2 — Responsive
Awareness emerging
3 — Proactive
Systematic processes
4 — Architect
Strategic advantage
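For illustration, the band lookup above can be sketched in a few lines of Python. This is a minimal sketch under the assumption that a score maps to the band whose number it has reached (the report does not document its exact cutoff rules):

```python
# Maturity bands for the 1-4 WAVE scale, as defined above.
BANDS = {1: "Reactive", 2: "Responsive", 3: "Proactive", 4: "Architect"}

def band(score: float) -> str:
    """Map a WAVE pillar score (1.0-4.0) to its maturity band.
    Assumption: a band runs from its number up to the next one,
    so 1.9 is still Reactive and 2.0 is Responsive."""
    if not 1.0 <= score <= 4.0:
        raise ValueError("WAVE scores run from 1 to 4")
    return BANDS[min(4, int(score))]
```

Under this rule, band(2.0) gives "Responsive" and band(1.9) gives "Reactive", which matches the labels used for Watch and Adapt later in this report.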
Team WAVE Scores

How the team scores across the four WAVE pillars. Each axis shows the team average score and maturity band.

Watch 2.0 (Responsive) · Adapt 1.9 (Reactive) · Verify 1.9 (Reactive) · Empower 1.8 (Reactive)

The team's diamond is compact and low: Watch (2.0) leads marginally, while Adapt (1.9), Verify (1.9), and Empower (1.8) cluster just below the Responsive threshold. The organization is aware but not yet structurally prepared — and the tight grouping masks wide internal perception gaps.

Next Steps

Strategic Interpretation Session

60-minute facilitated session with Dr. Mark van Rijmenam to surface assumptions behind your team's perception gaps and align on priorities.

Book Mark →

Now What? Masterclass

8-episode series based on the WAVE framework. Equip your entire leadership team with the mental models to navigate the Intelligence Age.

Start Free →

Futurwise Platform

Subscribe your team to Futurwise for curated AI intelligence — noise-cancelling for your brain, tailored to financial services.

Explore Futurwise →
Perception Gap Analysis

Perception gaps reveal where your leaders disagree about organizational readiness — the single most valuable insight a team assessment provides that individual reports cannot.

  • Watch has the lowest alignment (43%) — Technology sees a 3.3 landscape while Operations sees a 1.5 landscape
  • The C-suite scores 0.5–1.5 points above middle management across every pillar — a consistent optimism bias
  • The team agrees Adapt is weak (no one above 3) — dangerous consensus because no one champions improvement

The perception gaps in this team are not random disagreements — they follow structural lines. Technology sees more because they're exposed to more. The C-suite scores higher because they set the direction (and may overestimate how well it's being followed). Closing these gaps requires structured cross-functional conversations, not more email.

Team Alignment Score — 100% = full consensus, 0% = complete disagreement
Watch: 43% · Adapt: 53% · Verify: 53% · Empower: 51%
Biggest Gap: Watch

The 3-point spread on Watch (scores ranging from 1 to 4) is the most significant perception gap. Your CTO operates at Architect level — systematic scanning, AI-powered signal detection, 5+ year horizon. Your Operations managers rely on conferences and gut feel with a 12-month window. They are making decisions about the same organization based on fundamentally different views of the external world.
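An alignment percentage like the ones above can be derived from score dispersion. The sketch below uses the min-max spread relative to the maximum possible 3-point spread; this is an illustrative formula, not the report's documented method (the 43% quoted for Watch suggests a variance-based calculation, since a full 1-to-4 spread would score 0% here):

```python
def alignment_pct(scores: list[float]) -> int:
    """Rough alignment: 100 when all scores agree, 0 when they
    span the full 1-4 range. Illustrative formula only."""
    spread = max(scores) - min(scores)
    return round(100 * (1 - spread / 3.0))
```

For example, alignment_pct([2, 2, 3]) comes out at 67, while a team split across the whole scale scores 0.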

Dangerous Agreement

The team broadly agrees that Adapt is weak (no one scores above 3, 67% score 1–2). This consensus on weakness is concerning because it means no internal champion exists for adaptation — the pillar most critical to surviving the pace of change in financial services.


Department Heatmap

The department heatmap reveals structural readiness patterns — showing which parts of your organization are future-ready and which are structurally behind.

  • Technology & IT is the only Proactive department (11.0/16) — an island of excellence that hasn't transferred capabilities
  • Operations is the weakest department (5.8/16 Reactive) — a 5.2-point gap from Technology that must be bridged
  • HR understands the Empower gap (2.5) but scores 1.5 on Watch — building training programs without visibility into what skills will matter

Meridian's departmental readiness is deeply uneven. Technology leads but in isolation; Finance and Operations lag in ways that will block any organization-wide AI initiative. The bridge between these departments is not more presentations from Technology — it's joint working teams where operational constraints are heard alongside technology vision.

Department            Watch  Adapt  Verify  Empower
Technology & IT        3.3    2.7    2.7     2.3
Finance                1.7    1.3    1.7     2.0
Operations             1.5    1.8    1.3     1.3
HR & People            1.5    2.0    1.5     2.5
Legal & Compliance     1.5    1.0    2.5     1.0
Marketing & Sales      3.0    3.0    2.0     2.0

The most important cross-departmental pattern: Technology & IT is 5.2 points above Operations on total score. When the technology team proposes AI integration into operational workflows, they are speaking to a department that scores 1.3 on Verify — a department that doesn't yet have the governance instincts to evaluate what they're being asked to implement.
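The departmental totals quoted here are sums of the four pillar scores. A quick sketch of that aggregation; note that because the heatmap values are rounded to one decimal, recomputed totals can drift slightly from the report's quoted figures (Operations sums to 5.9 from the table versus 5.8 in the text):

```python
# Per-pillar scores as published in the heatmap (rounded to one decimal).
heatmap = {
    "Technology & IT": {"Watch": 3.3, "Adapt": 2.7, "Verify": 2.7, "Empower": 2.3},
    "Operations":      {"Watch": 1.5, "Adapt": 1.8, "Verify": 1.3, "Empower": 1.3},
}

# Total readiness per department (out of 16) and the cross-department gap.
totals = {dept: round(sum(p.values()), 1) for dept, p in heatmap.items()}
gap = round(totals["Technology & IT"] - totals["Operations"], 1)
```

From the rounded table values this yields 11.0 for Technology & IT and a gap of 5.1, close to the 5.2 quoted in the text.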


Seniority Gradient

The seniority gradient reveals whether strategic intent translates into operational reality — or whether a communication gap exists between levels.

  • 6.5-point gap between C-suite (11.0 Proactive) and Managers (4.5 Reactive) — they live in different realities
  • The sharpest cliff is between VP-Directors (8.7) and Senior Managers (6.0) — where strategy stops translating
  • Managers score 1.0 on Empower — they neither see signals, validate outputs, nor feel authorized to act

The seniority gradient is where Meridian's 'willing but unable' pattern is starkest. The C-suite sets direction; the Managers can't follow. The VP-Directors are the bridge — and at Responsive maturity, the bridge is shaky. The 90-day plan targets this gap directly through reverse mentoring and translation meetings.

C-suite (2): 2.8 — Proactive
VP-Director (6): 2.2 — Responsive
Senior Manager (5): 1.5 — Reactive
Manager (2): 1.1 — Reactive
Confidence Cliff

There's a confidence cliff between VP-Director level (total: 8.7) and Senior Managers (total: 6.0). This 2.7-point drop marks the point where strategic awareness stops translating into operational capability.

Permission Gap

Managers score 3.0 on Learning Velocity (self-assessment) but only 1.0 on Empower (WAVE score). They believe they can learn fast — but the organization hasn't given them permission, tools, or authority to apply that learning.


Self-Assessment Insights

Self-assessment compares how individuals rate their personal readiness to how the organization actually performs — revealing where personal capability is being blocked by organizational structures.

  • Learning Velocity is highest (2.9/5) but Adapt is weakest pillar (1.9/4) — people learn fast, the org can't channel it
  • Risk Tolerance is lowest (2.1/5) — reinforced by organizational rigidity that punishes experimentation
  • Cross-Functional Awareness (2.5/5) is moderate, partly explaining the departmental perception gaps

The self-assessment data confirms the central thesis of this report: Meridian's people are more ready than Meridian's systems. Individual capability exists; organizational infrastructure does not. The 90-day plan must close this gap by redesigning permission structures, not by training individuals who are already willing.

Technology Confidence: 2.7 — Responsive
Learning Velocity: 2.9 — Proactive
Risk Tolerance: 2.1 — Responsive
Cross-Functional Awareness: 2.5 — Responsive
Change Readiness: 2.5 — Responsive
Key Contradiction

The team rates Learning Velocity at 2.9/5 (highest self-assessment) but scores 1.9/4 on Adapt (weakest pillar). They believe they learn fast but the organizational evidence says they can't translate learning into action. The bottleneck isn't capability — it's structure.
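Comparing a 2.9 on a 1–5 self-assessment scale with a 1.9 on a 1–4 WAVE scale is only meaningful after putting both on a common footing. A sketch using min-max normalisation (the report does not state how it compares the two scales):

```python
def normalise(score: float, lo: float, hi: float) -> float:
    """Rescale a score from the [lo, hi] range to [0, 1]."""
    return (score - lo) / (hi - lo)

learning_velocity = normalise(2.9, 1, 5)  # self-assessment, 1-5 scale
adapt_pillar      = normalise(1.9, 1, 4)  # WAVE pillar, 1-4 scale
```

Learning Velocity normalises to about 0.475 while Adapt lands at 0.30, so the contradiction survives the change of scale.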


Individual vs. Organizational Gap

This section compares how individuals rate their personal readiness versus how they rate their organization — revealing where personal capability exists but organizational structures prevent it from being used.

  • Learning Velocity (2.9/5) vs Adapt (1.9/4) — people can learn, the org can't channel it
  • Technology scores Proactive (11.0) while Operations scores Reactive (5.8) — islands of excellence, not a rising tide
  • Change Readiness self-assessment (2.5/5) exceeds organizational evidence (8.5/16 Responsive) — optimism outpacing reality

Meridian's individual-organizational gap is the central finding of this report. The people are ready; the systems are not. The 90-day plan addresses this by redesigning organizational structures (experimentation charters, cross-functional teams, distributed authority) rather than training individuals who are already willing.

Willing Individuals, Rigid Systems (11 members)

Team members rate their personal learning velocity at 2.9/5 but the organization scores 1.9/4 on Adapt. People can learn; the organization can't channel that learning into action. Budget rigidity, approval bottlenecks, and siloed authority prevent individual readiness from becoming organizational readiness.

Action: Create an experimentation charter: who can approve what, at what budget threshold, with what reporting requirements. Distribute decision-making authority to the level where signals are detected.

Islands of Excellence (7 members)

Technology & IT operates at Proactive maturity (11.0/16) while Operations is deep Reactive (5.8/16). These aren't two departments on the same readiness journey — they're on different journeys entirely. Technology's capabilities are not being transferred.

Action: Establish a formal Technology-Operations bridge team — 3 people from each department meeting weekly for 8 weeks with one deliverable: a joint AI readiness roadmap.


Themes Across the Team

Themes are synthesized from qualitative responses across all team members. Individual responses are never quoted — only aggregated patterns.

  • Cross-departmental silos (60% of team) — the most widely shared concern, directly explaining perception gaps
  • Speed of decision-making (47%) — budget cycles and approval chains are the structural barrier to adaptation
  • No one mentioned AI governance despite Verify being the second-weakest pillar — the team doesn't know it has a governance gap

The qualitative data confirms the quantitative findings: silos drive perception gaps, slow processes drive low Adapt scores, and the absence of governance awareness explains the low Verify scores. The 90-day plan addresses all three themes directly.

Cross-Departmental Silos
Empower
9 team members (60%)

A majority described barriers between departments as the primary obstacle. Technology feels unheard by Finance; Operations feels excluded from strategy; HR sees skills gaps but can't get buy-in.

AI initiatives will be duplicated, contradictory, or abandoned when they require cross-functional cooperation.

Speed of Decision-Making
Adapt
7 team members (47%)

Nearly half described decision-making as too slow. Multiple members referenced delayed approvals, rigid budget cycles, and initiatives that lost momentum during governance reviews.

Without structural change — faster approvals, distributed authority, experimentation budgets — individual willingness to adapt will continue to be bottlenecked.

Regulatory Uncertainty
Watch
5 team members (33%)

Several members from Legal, Compliance, and Finance identified regulatory uncertainty as a barrier. A tension between wanting to move forward and uncertainty about upcoming APRA, ASIC, and Privacy Act requirements.

Regulatory scanning must be integrated with technology scanning. Currently, Legal monitors regulation independently while Technology monitors capabilities independently.

Silent Theme

No team member identified AI governance or verification as a concern, yet Verify is the team's second-weakest pillar (1.9/4). Several members described scenarios that are fundamentally governance problems — trusting chatbot outputs, data quality concerns — without recognizing them as governance gaps. The team doesn't know it has a Verify problem, which is the most dangerous form of the problem.


AGI Readiness — Team View
2.1 — Aware

Based on 12 of 15 team members with AGI assessment data.

Exposed: 4 · Aware: 5 · Preparing: 2 · Positioned: 1
Team Reality Check

Meridian's leadership team scores 2.1 on AGI Readiness, placing it in the Aware band. While the team recognizes that AI capabilities are advancing rapidly, there is no structured preparation for the scenario where AI systems can perform most knowledge work autonomously. The 12 leaders who completed the AGI assessment reveal a team that is thinking about the future but has not yet translated that thinking into concrete plans, governance frameworks, or workforce transition strategies.

Dimension Breakdown
Workforce Displacement
Range: 1 – 3
1.8/4
This is Meridian's most exposed dimension. The advisory business model depends on billing for human expertise in regulatory compliance and risk management. Four leaders scored 1, indicating no serious consideration of workforce displacement. Technology & IT sees the risk more clearly than Operations or Legal, creating a dangerous blind spot where the teams most affected are least prepared.
Decision Authority
Range: 1 – 4
2.3/4
The team shows moderate awareness but wide disagreement (range 1-4). C-suite members tend to assume humans will always make the final call, while Technology leaders are already mapping which decisions could be delegated. This 3-point spread is the largest across all dimensions and signals an urgent alignment conversation.
Economic Resilience
Range: 2 – 3
2.4/4
Meridian's leaders recognize the business model is at risk but disagree on severity. No one scored 1, suggesting baseline awareness. However, no one scored 4 either, meaning nobody has identified revenue streams that compound with AI capability rather than competing against it.
Institutional Speed
Range: 1 – 3
2.0/4
At 2.0, Meridian's adoption speed is typical of regulated financial services firms. The 6-12 month cycle from technology awareness to production deployment creates a vulnerability window during which faster-moving competitors or new entrants could capture market position. Three leaders in Operations scored 1, pointing to procurement and compliance bottlenecks.
Governance Beyond Human
Range: 1 – 3
1.9/4
Nearly as exposed as Workforce Displacement. Most leaders have not considered governance frameworks for AI systems that operate above human-level performance in financial analysis. This is not surprising for a financial services firm where governance frameworks are built around human accountability, but it means Meridian has no playbook for the transition period.
Consensus vs. Divergence

The team agrees most on Economic Resilience (range 2-3, variance 0.3) and diverges most on Decision Authority (range 1-4, variance 0.8). The Decision Authority split falls along functional lines: Technology and Strategy leaders are actively mapping delegation frameworks, while Operations and Legal assume the current human-in-the-loop model will persist. This is not a knowledge gap but a fundamental disagreement about how quickly AI capabilities will reach decision-grade quality in financial services. Three members independently named client trust erosion as their primary AGI risk, suggesting this is an organizational blind spot that multiple people see but nobody is actively addressing.
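Range and variance figures like these are straightforward to compute per dimension. A sketch using Python's statistics module with hypothetical score lists (the individual AGI responses behind the report's 0.3 and 0.8 variance figures are not published):

```python
import statistics

def divergence(scores: list[int]) -> tuple[int, float]:
    """Disagreement summary for one dimension:
    (min-max range, population variance)."""
    return max(scores) - min(scores), round(statistics.pvariance(scores), 2)

# Hypothetical examples: tight consensus vs. a wide functional split.
consensus = divergence([2, 3, 2, 3, 2, 2])   # Economic Resilience style
split     = divergence([1, 2, 4, 3, 1, 3])   # Decision Authority style
```

The consensus-style list yields a 1-point range with low variance; the split-style list yields the full 3-point range with variance several times higher.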

WAVE vs AGI Gap
WAVE avg 2.6 · AGI avg 2.1 · Gap −0.5
Operationally strong but strategically blind

Meridian scores above average on operational AI readiness (WAVE 2.6, Proactive band) but below average on AGI preparedness (2.1, Aware band). This is the most common pattern for established financial services firms: strong execution capabilities today, weak preparation for what is coming. The 0.5-point gap means Meridian can adopt current AI tools effectively but has not begun preparing for the structural changes that AGI-level systems will bring to advisory services, compliance automation, and client relationship models.

What Your Team Sees Coming
Client trust erosion
4 members
Four team members independently identified the risk that clients will lose trust in human advisors when AI can produce equivalent or superior analysis faster and cheaper. This is Meridian's most widely recognized AGI risk and directly threatens the advisory fee model.
Middle management hollowing
3 members
Three members flagged that AGI could eliminate the coordination and information-synthesis roles that middle managers perform. In financial services, this layer handles regulatory interpretation, risk aggregation, and cross-functional communication. Removing it without a transition plan creates both operational and cultural risk.
Regulatory lag
2 members
Two members noted that governance frameworks will not keep pace with AI capability advancement, creating liability exposure. APRA and ASIC frameworks assume human decision-makers. If AI systems begin making substantive decisions before regulators adapt, Meridian faces compliance uncertainty.
Team AGI Recommendations
Week 1-2

Your Chief Risk Officer should convene a 90-minute AGI Scenario Planning session with all 15 leaders. Present the Workforce Displacement scores (avg 1.8, four leaders at 1) and ask: What happens to our advisory model if AI can perform 80% of the analysis our consultants bill for? Document assumptions and assign owners for the three highest-risk scenarios.

Week 3-4

Your CTO and Head of Operations should co-lead a Decision Authority Mapping exercise. For the top 20 decisions made weekly across the firm, classify each as: must remain human, could be AI-assisted, could be AI-led with human oversight. This directly addresses the team's widest disagreement (range 1-4 on Decision Authority).
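The exercise's three-way classification maps naturally onto a small data structure. A hypothetical sketch (the decision names here are invented for illustration; the actual top-20 inventory would come from the mapping exercise itself):

```python
from enum import Enum

class Authority(Enum):
    HUMAN_ONLY  = "must remain human"
    AI_ASSISTED = "could be AI-assisted"
    AI_LED      = "could be AI-led with human oversight"

# Hypothetical entries from a decision inventory.
decisions = {
    "Final credit approval":         Authority.HUMAN_ONLY,
    "Draft client risk summary":     Authority.AI_ASSISTED,
    "Routine transaction screening": Authority.AI_LED,
    "Regulatory breach escalation":  Authority.HUMAN_ONLY,
}

# Candidates for delegation are everything not locked to humans.
ai_candidates = sorted(d for d, a in decisions.items() if a is not Authority.HUMAN_ONLY)
```

Sorting the non-human-only entries gives the shortlist the CTO and Head of Operations would work through first.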

Month 2-3

Your CEO should commission a Client Trust Impact Assessment: survey 10 key clients on their expectations for AI in advisory services. The four leaders who named client trust erosion as their top AGI risk need data, not assumptions, to plan the transition.

90-Day Team Action Plan
Days 1–30
  • Close the Adapt perception gap: present heatmap data to full leadership team
  • Pair CTO + Head of Compliance for a 2-week AI governance sprint
  • Commission workforce displacement impact assessment
  • Establish cross-functional AI working group (not IT-only)
Days 31–60
  • Redesign compliance review process: from checkpoint to enabler
  • Launch Empower pilot in Risk & Compliance (dedicated experimentation budget)
  • Build pre-approved response playbooks for top 3 regulatory scenarios
Days 61–90
  • Measure: re-assess Adapt scores at senior manager level
  • Establish AGI Preparedness Committee with board sponsorship
  • Publish internal Decision Authority Framework for AI systems
  • Report to board: perception gap closure progress + AGI readiness baseline
Quick Wins — This Week
  • Subscribe all leaders to Futurwise — takes 10 minutes, immediately expands signal intake.
  • Share the departmental heatmap at the next leadership meeting. Let the data start the conversation.
  • Ask 3 senior managers: "What AI tool would save you 2 hours per week?" Act on the best answer within 5 business days.
  • Create a #ai-signals channel where anyone can post developments. Leadership posts first.
Department-Specific Actions
Technology & IT

Document your scanning sources and methods in a 1-page playbook. Share with Operations and Finance. Your Proactive maturity is an asset — but only if transferable.

Finance

Join Technology's next sprint review. The goal: understand what AI capabilities exist today so investment evaluations are grounded in current reality.

Operations

Identify the single highest-volume manual process. Invite Technology to observe for one day. Joint assessment: where would AI reduce error rate by 30%+?

HR & People

Co-design an AI literacy program with Technology. HR knows learning design; Technology knows capabilities. First cohort: Senior Managers (the cliff level).

Legal & Compliance

Map APRA CPS 230 and CPS 234 to current AI use cases. Identify governance gaps. Share with the AI Governance Working Group.

Marketing & Sales

Document your AI personalization experiments. Present at the next leadership meeting — you may be the bridge between Technology's vision and the business.

Strategic Interpretation Session

Your team's perception gap on Adapt — where senior leaders score 2.7 and mid-level managers score 1.6 — requires a facilitated conversation to surface assumptions. The confidence cliff between C-Suite and Senior Managers means strategy isn't translating into execution.

Strategic Interpretation Package: AI-generated pre-analysis + 60-minute session with Dr. Mark van Rijmenam + written strategic recommendations + priority access to boardroom workshop scheduling.

Book Your Session →
Watch: 2.0 — Responsive
Team spread: 1–4
Signal Detection Without Consensus
Reactive: 4 · Responsive: 6 · Proactive: 3 · Architect: 2

The team's Watch scores reveal a deeply fragmented view of the external landscape. Four members score at the lowest level — relying on conferences and industry publications — while two members (both in Technology & IT) operate at Architect level with systematic scanning. The bimodal distribution is the clearest sign that Meridian's signal detection capability exists in pockets rather than as an organizational competency. In financial services, where algorithmic trading regulation, open banking mandates, and embedded finance models are reshaping the competitive landscape quarterly, this fragmentation means critical signals reach some decision-makers months before others.

Adapt: 1.9 — Reactive
Team spread: 1–3
Structural Rigidity Masking as Caution
Reactive: 4 · Responsive: 6 · Proactive: 5

No team member scores above 3 on Adapt, and the majority cluster at 1-2. This is an organization that cannot reallocate resources in response to new signals. The absence of any Architect-level scores means no one on the leadership team has experienced what rapid, disciplined adaptation looks like. In a sector where neobanks iterate product quarterly while traditional institutions run 18-month development cycles, Meridian's adaptation ceiling is a competitive liability.

Verify: 1.9 — Reactive
Team spread: 1–3
Governance by Exception, Not Design
Reactive: 5 · Responsive: 6 · Proactive: 4

Verify scores cluster in the Reactive-to-Responsive range with Legal & Compliance scoring highest (2.5 mean) and Operations scoring lowest (1.3). This pattern reveals that governance exists where regulation forces it but has not been extended to AI outputs, data provenance, or algorithmic decision-making. For a financial services organization, this is particularly concerning: APRA's CPS 234 mandates information security capabilities, and CPS 230 requires operational resilience including AI system governance.

Empower: 1.8 — Reactive
Team spread: 1–3
Capability Concentrated, Not Distributed
Reactive: 5 · Responsive: 5 · Proactive: 5

Empower shows the most even distribution across scores 1-3, but the even spread masks a structural problem: C-suite members score 3.0 while Managers score 1.0. The people with decision-making authority feel empowered; the people executing feel constrained. HR & People scores 2.5 on Empower — the highest departmental score — suggesting awareness of the capability gap but not yet effective programs to close it.