
The team's strongest pillars are Watch (2.0) and Verify (1.9), while Empower (1.8) and Adapt (1.9) trail behind. However, the real story is not the averages — it's the perception gap on Watch, where scores range from 1 to 4. Your Technology & IT team sees a fundamentally different signal landscape than Finance and Operations. This misalignment means strategic decisions about technology investment are being made without shared situational awareness.
For a financial services organization operating under APRA's CPS 230 operational resilience requirements and facing the rise of AI-driven neobanks, this gap is not academic. Competitors with aligned leadership teams are making faster, more coherent bets on AI integration. Every month this perception gap persists, Meridian falls further behind organizations that have achieved internal consensus on what's coming and how to respond.
How your team members are distributed across the four maturity bands based on their overall WAVE score. A healthy team clusters in the upper bands; wide spread indicates misalignment.
The WAVE framework scores each pillar from 1 to 4. These bands define what each level means in practice — from ad hoc responses to technology as a strategic advantage.
How the team scores across the four WAVE pillars. Each axis shows the team average score and maturity band.
The team's diamond is compact and low: Watch (2.0) leads marginally, while Adapt (1.9), Verify (1.9), and Empower (1.8) cluster just below the Responsive threshold. The organization is aware but not yet structurally prepared — and the tight grouping masks wide internal perception gaps.
Strategic Interpretation Session
60-minute facilitated session with Dr. Mark van Rijmenam to surface assumptions behind your team's perception gaps and align on priorities.
Book Mark →

Now What? Masterclass
8-episode series based on the WAVE framework. Equip your entire leadership team with the mental models to navigate the Intelligence Age.
Start Free →

Futurwise Platform
Subscribe your team to Futurwise for curated AI intelligence — noise-cancelling for your brain, tailored to financial services.
Explore Futurwise →

Perception gaps reveal where your leaders disagree about organizational readiness — the single most valuable insight a team assessment provides that individual reports cannot.
- Watch has the lowest alignment (43%) — Technology sees a 3.3 landscape while Operations sees a 1.5 landscape
- The C-suite scores 0.5–1.5 points above middle management across every pillar — a consistent optimism bias
- The team agrees Adapt is weak (no one above 3) — dangerous consensus because no one champions improvement
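The report does not publish individual scores or its alignment formula, so the numbers in this sketch are hypothetical and will not reproduce the 43% figure; it only illustrates how a per-pillar mean and a disagreement measure (spread, standard deviation) are typically computed from individual responses:

```python
from statistics import mean, pstdev

# Hypothetical individual scores on the 1-4 WAVE scale (the report
# publishes only aggregates); illustrative, not the actual data.
pillar_scores = {
    "Watch": [1, 1, 2, 2, 2, 3, 3, 4],  # wide spread, as described above
    "Adapt": [1, 2, 2, 2, 2, 2, 2, 2],  # tight cluster at a low level
}

for pillar, scores in pillar_scores.items():
    spread = max(scores) - min(scores)  # simple range: max minus min
    print(f"{pillar}: mean={mean(scores):.2f}, "
          f"spread={spread}, stdev={pstdev(scores):.2f}")
```

A wide spread with a mid-level mean (the hypothetical Watch row) is the pattern this report flags: the average looks unremarkable while individuals disagree sharply.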
The perception gaps in this team are not random disagreements — they follow structural lines. Technology sees more because they're exposed to more. The C-suite scores higher because they set the direction (and may overestimate how well it's being followed). Closing these gaps requires structured cross-functional conversations, not more email.
The 3-point spread on Watch (scores ranging from 1 to 4) is the most significant perception gap. Your CTO operates at Architect level — systematic scanning, AI-powered signal detection, 5+ year horizon. Your Operations managers rely on conferences and gut feel with a 12-month window. They are making decisions about the same organization based on fundamentally different views of the external world.
The team broadly agrees that Adapt is weak (no one scores above 3, 67% score 1–2). This consensus on weakness is concerning because it means no internal champion exists for adaptation — the pillar most critical to surviving the pace of change in financial services.
The department heatmap reveals structural readiness patterns — showing which parts of your organization are future-ready and which are structurally behind.
- Technology & IT is the only Proactive department (11.0/16) — an island of excellence that hasn't transferred capabilities
- Operations is the weakest department (5.8/16 Reactive) — a 5.2-point gap from Technology that must be bridged
- HR understands the Empower gap (2.5) but scores 1.5 on Watch — building training programs without visibility into what skills will matter
Meridian's departmental readiness is deeply uneven. Technology leads but in isolation; Finance and Operations lag in ways that will block any organization-wide AI initiative. The bridge between these departments is not more presentations from Technology — it's joint working teams where operational constraints are heard alongside technology vision.
| Department | Watch | Adapt | Verify | Empower |
|---|---|---|---|---|
| Technology & IT | 3.3 | 2.7 | 2.7 | 2.3 |
| Finance | 1.7 | 1.3 | 1.7 | 2.0 |
| Operations | 1.5 | 1.8 | 1.3 | 1.3 |
| HR & People | 1.5 | 2.0 | 1.5 | 2.5 |
| Legal & Compliance | 1.5 | 1.0 | 2.5 | 1.0 |
| Marketing & Sales | 3.0 | 3.0 | 2.0 | 2.0 |
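The department totals cited in this report can be recomputed by summing the four pillar means above. Note that the table's values are rounded to one decimal, so sums may drift by about a tenth of a point from totals computed on unrounded data (a minimal sketch):

```python
# Recompute department totals (out of 16) from the rounded pillar means
# in the table above. Because inputs are rounded, a total may differ by
# ~0.1 from the report's figure computed on raw responses.
departments = {
    "Technology & IT":    [3.3, 2.7, 2.7, 2.3],
    "Finance":            [1.7, 1.3, 1.7, 2.0],
    "Operations":         [1.5, 1.8, 1.3, 1.3],
    "HR & People":        [1.5, 2.0, 1.5, 2.5],
    "Legal & Compliance": [1.5, 1.0, 2.5, 1.0],
    "Marketing & Sales":  [3.0, 3.0, 2.0, 2.0],
}

totals = {d: round(sum(scores), 1) for d, scores in departments.items()}
for dept, total in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{dept:<20} {total:>5}/16")

gap = round(totals["Technology & IT"] - totals["Operations"], 1)
print(f"Technology-Operations gap: {gap} points")
```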
The most important cross-departmental pattern: Technology & IT is 5.2 points above Operations on total score. When the technology team proposes AI integration into operational workflows, they are speaking to a department that scores 1.3 on Verify — a department that doesn't yet have the governance instincts to evaluate what they're being asked to implement.
The seniority gradient reveals whether strategic intent translates into operational reality — or whether a communication gap exists between levels.
- 6.5-point gap between C-suite (11.0 Proactive) and Managers (4.5 Reactive) — they live in different realities
- The sharpest cliff is between VP-Directors (8.7) and Senior Managers (6.0) — where strategy stops translating
- Managers score 1.0 on Empower — they neither see signals, validate outputs, nor feel authorized to act
The seniority gradient is where Meridian's 'willing but unable' pattern is starkest. The C-suite sets direction; the Managers can't follow. The VP-Directors are the bridge — and at Responsive maturity, the bridge is shaky. The 90-day plan targets this gap directly through reverse mentoring and translation meetings.
There's a confidence cliff between VP-Director level (total: 8.7) and Senior Managers (total: 6.0). This 2.7-point drop marks the point where strategic awareness stops translating into operational capability.
Managers score 3.0 on Learning Velocity (self-assessment) but only 1.0 on Empower (WAVE score). They believe they can learn fast — but the organization hasn't given them permission, tools, or authority to apply that learning.
Self-assessment compares how individuals rate their personal readiness to how the organization actually performs — revealing where personal capability is being blocked by organizational structures.
- Learning Velocity is highest (2.9/5) but Adapt is the weakest pillar (1.9/4) — people learn fast, the org can't channel it
- Risk Tolerance is lowest (2.1/5) — reinforced by organizational rigidity that punishes experimentation
- Cross-Functional Awareness (2.5/5) is moderate, partly explaining the departmental perception gaps
The self-assessment data confirms the central thesis of this report: Meridian's people are more ready than Meridian's systems. Individual capability exists; organizational infrastructure does not. The 90-day plan must close this gap by redesigning permission structures, not by training individuals who are already willing.
The team rates Learning Velocity at 2.9/5 (highest self-assessment) but scores 1.9/4 on Adapt (weakest pillar). They believe they learn fast but the organizational evidence says they can't translate learning into action. The bottleneck isn't capability — it's structure.
This section compares how individuals rate their personal readiness versus how they rate their organization — revealing where personal capability exists but organizational structures prevent it from being used.
- Learning Velocity (2.9/5) vs Adapt (1.9/4) — people can learn, the org can't channel it
- Technology scores Proactive (11.0) while Operations scores Reactive (5.8) — islands of excellence, not a rising tide
- Change Readiness self-assessment (2.5/5) exceeds organizational evidence (8.5/16 Responsive) — optimism outpacing reality
Meridian's individual-organizational gap is the central finding of this report. The people are ready; the systems are not. The 90-day plan addresses this by redesigning organizational structures (experimentation charters, cross-functional teams, distributed authority) rather than training individuals who are already willing.
Team members rate their personal learning velocity at 2.9/5 but the organization scores 1.9/4 on Adapt. People can learn; the organization can't channel that learning into action. Budget rigidity, approval bottlenecks, and siloed authority prevent individual readiness from becoming organizational readiness.
Action: Create an experimentation charter that defines who can approve what, at what budget threshold, and with what reporting requirements. Distribute decision-making authority to the level where signals are detected.
Technology & IT operates at Proactive maturity (11.0/16) while Operations is deep Reactive (5.8/16). These aren't two departments on the same readiness journey — they're on different journeys entirely. Technology's capabilities are not being transferred.
Action: Establish a formal Technology-Operations bridge team — 3 people from each department meeting weekly for 8 weeks with one deliverable: a joint AI readiness roadmap.
Themes are synthesized from qualitative responses across all team members. Individual responses are never quoted — only aggregated patterns.
- Cross-departmental silos (60% of team) — the most widely shared concern, directly explaining perception gaps
- Speed of decision-making (47%) — budget cycles and approval chains are the structural barrier to adaptation
- No one mentioned AI governance despite Verify being the second-weakest pillar — the team doesn't know it has a governance gap
The qualitative data confirms the quantitative findings: silos drive perception gaps, slow processes drive low Adapt scores, and the absence of governance awareness explains the low Verify scores. The 90-day plan addresses all three themes directly.
A majority described barriers between departments as the primary obstacle. Technology feels unheard by Finance; Operations feels excluded from strategy; HR sees skills gaps but can't get buy-in.
AI initiatives will be duplicated, contradictory, or abandoned when they require cross-functional cooperation.
Nearly half described decision-making as too slow. Multiple members referenced delayed approvals, rigid budget cycles, and initiatives that lost momentum during governance reviews.
Without structural change — faster approvals, distributed authority, experimentation budgets — individual willingness to adapt will continue to be bottlenecked.
Several members from Legal, Compliance, and Finance identified regulatory uncertainty as a barrier: a tension between wanting to move forward and uncertainty about upcoming APRA, ASIC, and Privacy Act requirements.
Regulatory scanning must be integrated with technology scanning. Currently, Legal monitors regulation independently while Technology monitors capabilities independently.
No team member identified AI governance or verification as a concern, yet Verify is the team's second-weakest pillar (1.9/4). Several members described scenarios that are fundamentally governance problems — trusting chatbot outputs, data quality concerns — without recognizing them as governance gaps. The team doesn't know it has a Verify problem, which is the most dangerous form of the problem.
Meridian's leadership team scores 2.1 on AGI Readiness, placing it in the Aware band. While the team recognizes that AI capabilities are advancing rapidly, there is no structured preparation for the scenario where AI systems can perform most knowledge work autonomously. The 12 leaders who completed the AGI assessment reveal a team that is thinking about the future but has not yet translated that thinking into concrete plans, governance frameworks, or workforce transition strategies.
The team agrees most on Economic Resilience (range 2-3, variance 0.3) and diverges most on Decision Authority (range 1-4, variance 0.8). The Decision Authority split falls along functional lines: Technology and Strategy leaders are actively mapping delegation frameworks, while Operations and Legal assume the current human-in-the-loop model will persist. This is not a knowledge gap but a fundamental disagreement about how quickly AI capabilities will reach decision-grade quality in financial services. Three members independently named client trust erosion as their primary AGI risk, suggesting this is an organizational blind spot that multiple people see but nobody is actively addressing.
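The agreement statistics above (range and variance) are straightforward to compute once individual responses are in hand. The score vectors in this sketch are hypothetical, since individual AGI responses are not published, and will not reproduce the report's exact figures:

```python
from statistics import pvariance

def agreement(scores: list[int]) -> tuple[int, float]:
    """Return (range, population variance); low values mean high agreement."""
    return max(scores) - min(scores), round(pvariance(scores), 2)

# Hypothetical 12-person score vectors on a 1-4 scale, echoing the
# tight-vs-split pattern described above (not the actual responses).
economic_resilience = [2, 2, 2, 2, 3, 3, 3, 3, 2, 2, 3, 3]  # tight cluster
decision_authority  = [1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4, 3]  # functional split

print(agreement(economic_resilience))
print(agreement(decision_authority))
```

A bimodal or wide vector like the hypothetical Decision Authority row drives variance up even when the mean looks moderate — which is why the report treats variance, not the average, as the signal of disagreement.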
Meridian scores above average on operational AI readiness (WAVE 2.6, Proactive band) but below average on AGI preparedness (2.1, Aware band). This is the most common pattern for established financial services firms: strong execution capabilities today, weak preparation for what is coming. The 0.5-point gap means Meridian can adopt current AI tools effectively but has not begun preparing for the structural changes that AGI-level systems will bring to advisory services, compliance automation, and client relationship models.
Your Chief Risk Officer should convene a 90-minute AGI Scenario Planning session with all 15 leaders. Present the Workforce Displacement scores (avg 1.8, four leaders at 1) and ask: What happens to our advisory model if AI can perform 80% of the analysis our consultants bill for? Document assumptions and assign owners for the three highest-risk scenarios.
Your CTO and Head of Operations should co-lead a Decision Authority Mapping exercise. For the top 20 decisions made weekly across the firm, classify each as: must remain human, could be AI-assisted, could be AI-led with human oversight. This directly addresses the team's widest disagreement (range 1-4 on Decision Authority).
Your CEO should commission a Client Trust Impact Assessment: survey 10 key clients on their expectations for AI in advisory services. The leaders who named client trust erosion as their top AGI risk need data, not assumptions, to plan the transition.
- Close the Adapt perception gap: present heatmap data to full leadership team
- Pair CTO + Head of Compliance for a 2-week AI governance sprint
- Commission workforce displacement impact assessment
- Establish cross-functional AI working group (not IT-only)
- Redesign compliance review process: from checkpoint to enabler
- Launch Empower pilot in Risk & Compliance (dedicated experimentation budget)
- Build pre-approved response playbooks for top 3 regulatory scenarios
- Measure: re-assess Adapt scores at senior manager level
- Establish AGI Preparedness Committee with board sponsorship
- Publish internal Decision Authority Framework for AI systems
- Report to board: perception gap closure progress + AGI readiness baseline
- Subscribe all leaders to Futurwise — takes 10 minutes, immediately expands signal intake.
- Share the departmental heatmap at the next leadership meeting. Let the data start the conversation.
- Ask 3 senior managers: "What AI tool would save you 2 hours per week?" Act on the best answer within 5 business days.
- Create a #ai-signals channel where anyone can post developments. Leadership posts first.
Document your scanning sources and methods in a 1-page playbook. Share with Operations and Finance. Your Proactive maturity is an asset — but only if transferable.
Join Technology's next sprint review. The goal: understand what AI capabilities exist today so investment evaluations are grounded in current reality.
Identify the single highest-volume manual process. Invite Technology to observe for one day. Joint assessment: where would AI reduce error rate by 30%+?
Co-design an AI literacy program with Technology. HR knows learning design; Technology knows capabilities. First cohort: Senior Managers (the cliff level).
Map APRA CPS 230 and CPS 234 to current AI use cases. Identify governance gaps. Share with the AI Governance Working Group.
Document your AI personalization experiments. Present at the next leadership meeting — you may be the bridge between Technology's vision and the business.
Your team's perception gap on Adapt — where senior leaders score 2.7 and mid-level managers score 1.6 — requires a facilitated conversation to surface assumptions. The confidence cliff between C-Suite and Senior Managers means strategy isn't translating into execution.
Strategic Interpretation Package: AI-generated pre-analysis + 60-minute session with Dr. Mark van Rijmenam + written strategic recommendations + priority access to boardroom workshop scheduling.
Book Your Session →

The team's Watch scores reveal a deeply fragmented view of the external landscape. Four members score at the lowest level — relying on conferences and industry publications — while two members (both in Technology & IT) operate at Architect level with systematic scanning. The bimodal distribution is the clearest sign that Meridian's signal detection capability exists in pockets rather than as an organizational competency. In financial services, where algorithmic trading regulation, open banking mandates, and embedded finance models are reshaping the competitive landscape quarterly, this fragmentation means critical signals reach some decision-makers months before others.
No team member scores above 3 on Adapt, and the majority cluster at 1-2. This is an organization that cannot reallocate resources in response to new signals. The absence of any Architect-level scores means no one on the leadership team has experienced what rapid, disciplined adaptation looks like. In a sector where neobanks iterate product quarterly while traditional institutions run 18-month development cycles, Meridian's adaptation ceiling is a competitive liability.
Verify scores cluster in the Reactive-to-Responsive range with Legal & Compliance scoring highest (2.5 mean) and Operations scoring lowest (1.3). This pattern reveals that governance exists where regulation forces it but has not been extended to AI outputs, data provenance, or algorithmic decision-making. For a financial services organization, this is particularly concerning: APRA's CPS 234 mandates information security capabilities, and CPS 230 requires operational resilience including AI system governance.
Empower shows the most even distribution across scores 1-3, but the even spread masks a structural problem: C-suite members score 3.0 while Managers score 1.0. The people with decision-making authority feel empowered; the people executing feel constrained. HR & People scores 2.5 on Empower — the highest departmental score — suggesting awareness of the capability gap but not yet effective programs to close it.