
Davos 2026: The new risk expectation is proof, not promises.

Davos has never lacked ambition. What stood out in 2026 was something quieter but more consequential: a shift in what counts as credibility.

Across discussions on AI, cyber resilience and sovereignty, the message was consistent. Trust is fragile. Interdependence is rising. And stakeholders are losing patience with “we’re working on it”. The expectation is moving from intent to evidence, from promises to proof.

That shift matters because risk management is no longer judged internally. It is assessed in front of boards, regulators, customers, partners and, increasingly, a public that assumes systems will fail unless shown otherwise.

In that environment, resilience stops being a comforting word. It becomes a measurable claim.

When trust demands evidence, AI governance matters. Download our Managing AI risks: Turning uncertainty into advantage eBook to find out more:

Download now

When trust declines, proof replaces persuasion  

Davos 2026 did not merely reinforce that risk is growing. It reinforced that the burden of proof is rising:

  • Technological complexity is moving faster than institutional controls
  • Geopolitical alliances are changing faster than supply chains can adapt
  • Risk landscapes are multiplying faster than traditional governance cycles.

When everything accelerates, the gap between “we intend to manage this” and “we can demonstrate control” becomes visible very quickly.

Nowhere was that clearer than in three overlapping domains:

  • AI adoption
  • Cyber resilience
  • Dependency risk.

AI without guardrails is a governance failure   

One of the most explicit calls for proof came from IMF Managing Director Kristalina Georgieva. Describing AI as “like a tsunami hitting the labor market”, she asked a simple question:

“Where are the guardrails?”1

Her warning was grounded in evidence. IMF research suggests AI could affect around 60% of jobs in advanced economies and 40% globally, with particular pressure on entry-level roles and the middle class. The concern was not hypothetical change. It was pace and the absence of mechanisms to manage it.

At Davos, this was framed less as a workforce issue than as a governance one, because “we’re adopting AI” is no longer a neutral business decision. It immediately raises risk questions boards expect to be answered: which decisions AI influences, what failures are plausible, who is accountable for outcomes, and what evidence exists that controls are in place and working.

‘Guardrails’, in this context, is shorthand for ownership, controls, testing and oversight: the basic machinery of risk management.

From regulation to demonstrable trust  

That logic was reinforced in a separate Davos discussion among legal industry leaders, which framed the “real AI challenge” as trust and alignment rather than ever-more regulation2. The key is accountability and transparency inside systems.

For risk leaders, that distinction matters. It shifts attention away from policy theatre and towards operational proof.

Trust becomes tangible when organizations can show:

  • Which AI use cases are approved
  • How risk is assessed before deployment
  • What data, models and third parties are involved
  • Whether exceptions, issues and remediations are tracked to closure.

The implication is blunt: an AI policy is not enough.

An AI risk operating model that can withstand scrutiny is.
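To make “withstand scrutiny” concrete: the questions above can only be answered quickly if AI use cases live in a structured register rather than a policy document. The sketch below is a minimal, hypothetical illustration in Python (all names and fields are assumptions for this example, not a reference to any specific product or framework):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class RiskException:
    """An exception raised against a use case, tracked to closure."""
    description: str
    closed: bool = False

@dataclass
class AIUseCase:
    name: str
    owner: str                                   # named accountability, not a team alias
    approved: bool = False                       # risk-assessed before deployment?
    third_parties: List[str] = field(default_factory=list)
    exceptions: List[RiskException] = field(default_factory=list)

def cannot_demonstrate_control(register: List[AIUseCase]) -> List[str]:
    """Return use cases that would not survive scrutiny today:
    unapproved deployments, or exceptions not yet tracked to closure."""
    return [uc.name for uc in register
            if not uc.approved
            or any(not e.closed for e in uc.exceptions)]

register = [
    AIUseCase("invoice triage", owner="A. Chen", approved=True,
              exceptions=[RiskException("drift review overdue", closed=True)]),
    AIUseCase("chatbot summarisation", owner="B. Osei", approved=False),
]
print(cannot_demonstrate_control(register))  # flags only the unapproved use case
```

The point of the sketch is not the code itself but the property it enables: the question “which AI use cases are approved, and are exceptions closed?” becomes a query with a definite answer, not a meeting.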

Cyber resilience meets sovereignty and dependency

Cyber resilience, meanwhile, was discussed at Davos in geopolitical terms. Switzerland’s National Cyber Security Centre framed its engagement around digital sovereignty, technological leadership, and the risks of fragmentation in a sanction-heavy, regionally regulated world3.

Their assessment was explicit:

“Fragmentation and constrained technology choices raise costs and weaken resilience, particularly for small, open economies.”

That is classic dependency and concentration risk, not a narrow security concern.

This reframing explains why cyber resilience is now inseparable from questions about reliance on cloud providers, platforms and AI tooling that incorporates LLMs, and from the ability to demonstrate how services would fail under stress. As the Davos session on European tech sovereignty put it, sovereignty is about having choice in partnerships, not being forced into dependencies on “one country or one company”4.

That is a board-level lens, not an operational one.

What Davos 2026 adds up to   

Taken together, the signals are clear. AI is advancing faster than governance maturity. Trust is becoming the governing currency, and trust demands accountability that can be demonstrated. Cyber resilience is now interwoven with sovereignty and dependency risk, extending the proof requirement across entire ecosystems.

Overlay this with the World Economic Forum’s warning that declining trust undermines our ability to respond to shared challenges5, and the theme of Davos 2026 comes into focus:

Stakeholders increasingly expect organizations to prove resilience, not simply promise it.

What ‘proof’ actually looks like  

Proof does not mean a larger risk register, a new policy or another dashboard. It means governance that can answer hard questions quickly and consistently, especially under pressure.

That starts with named accountability rather than diffuse responsibility. It requires controls anchored to real services and dependencies, not abstract frameworks. It depends on evidence of testing, not confidence in design, particularly for AI systems evolving in live environments. And it demands reporting that enables boards to decide, not just observe.
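Those four requirements can be expressed as checks rather than aspirations. The hypothetical Python sketch below (field names and thresholds are assumptions for illustration) shows controls anchored to real services, with named owners and test evidence that either exists or does not:

```python
from dataclasses import dataclass
from datetime import date, timedelta
from typing import List, Optional

@dataclass
class Control:
    name: str
    owner: str                        # a named individual, not "the risk team"
    service: str                      # the real service or dependency it protects
    last_tested: Optional[date] = None
    last_test_passed: bool = False

def lacking_evidence(controls: List[Control], today: date,
                     max_age_days: int = 90) -> List[str]:
    """Flag controls whose effectiveness cannot currently be demonstrated:
    never tested, tested too long ago, or failed on their last test."""
    stale_before = today - timedelta(days=max_age_days)
    return [c.name for c in controls
            if c.last_tested is None
            or c.last_tested < stale_before
            or not c.last_test_passed]

controls = [
    Control("LLM output review", "A. Chen", "claims triage",
            last_tested=date(2026, 1, 10), last_test_passed=True),
    Control("vendor failover", "B. Osei", "cloud hosting"),   # never tested
]
print(lacking_evidence(controls, today=date(2026, 1, 20)))
```

Reporting built on records like these lets a board decide, not just observe: the answer to “can you demonstrate this control works?” is a dated test result, not confidence in the design.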

The message for risk leaders  

Davos 2026 did not introduce a single new existential risk. It surfaced a more uncomfortable shift: the world is less willing to take organizations at their word.

AI adoption, cyber resilience, and dependency risk are converging into one question that boards, regulators and customers will ask in different ways: can you demonstrate that you are in control?

In 2026, risk leadership is less about saying the right things. It is about building the operating discipline to prove them.

Proof is now the standard for AI risk. Explore how organizations are building governance-grade AI risk operating models that stand up to board, regulator and public scrutiny. Download Managing AI risks: Turning uncertainty into advantage now:

Download now

References

1 https://www.youtube.com/watch?v=qrKMrWMNVYo 

2 https://economictimes.indiatimes.com/industry/services/consultancy-/-audit/wef-davos-real-ai-challenge-is-trust-and-alignment-not-tighter-control/articleshow/127497675.cms 

3 https://www.news.admin.ch/en/newnsb/ohUuhoYxBNyyO62WDAnkP  

4 https://dig.watch/event/world-economic-forum-2026-at-davos/is-europes-tech-sovereignty-feasible 

5 https://www.weforum.org/press/2026/01/annual-meeting-2026-a-spirit-of-dialogue-ceb3ae9c08/ 

About the author

For over 20 years, Protecht has redefined the way people think about risk management with the most complete, cutting-edge and cost-effective solutions. We help companies increase performance and achieve strategic objectives through better understanding, monitoring and management of risk.