
AI Risk Assessments: Same Game, New Buzzword

  • Thomas Jreige
  • 10 hours ago
  • 4 min read
Strategy AI Risk

Every few months, a new wave of “AI Risk Frameworks” sweeps through the industry.


Whitepapers appear overnight. Panels are formed. Consultants polish slide decks filled with bold fonts and even bolder claims. You could almost believe risk management has been reborn, rebranded for the age of algorithms.


But it hasn’t.


AI hasn’t changed the definition of risk. It hasn’t rewritten probability, consequence, or impact. It has only added a new context. And that context, not the technology, is what determines whether a risk assessment succeeds or fails.


The rush to create “AI risk” frameworks has become the norm. Each new model promises a future-proof way to manage risk in the era of intelligent machines, yet many miss the simplest truth: risk management doesn’t start with technology.

 

“It starts with understanding what you are assessing, why, and in what context.”

-- Dr Thomas Jreige, Managing Partner, Shimazaki Sentinel

 

Without that foundation, the rest is guesswork.


The Cult of Overcomplication


The industry loves complexity. It makes things sound sophisticated, urgent, and expensive. The same simple principle - identify, analyse, evaluate, treat, monitor - has now been rebranded a hundred times over, each version with its own “AI twist.”


In reality, if you replaced the word “AI” with “database,” most of these frameworks would still hold up. The structure of risk management hasn’t evolved; the vocabulary has.
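To make the point concrete, here is a minimal sketch of a generic risk register entry. The class, field names, and scoring scale are hypothetical illustrations, not any particular standard's schema; the point is that the structure carries no AI-specific machinery, and the asset label is incidental.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    """One entry in a generic risk register; the asset label is incidental."""
    asset: str          # "AI model" or "database" - the structure is identical
    threat: str
    likelihood: int     # 1 (rare) .. 5 (almost certain)
    consequence: int    # 1 (negligible) .. 5 (severe)

    def rating(self) -> int:
        # Classic likelihood x consequence scoring; nothing AI-specific.
        return self.likelihood * self.consequence

# The same assessment works whatever the technology is called.
model_drift = Risk("AI model", "untracked model drift", 4, 3)
stale_index = Risk("database", "stale index corrupts reports", 4, 3)

assert model_drift.rating() == stale_index.rating()
```

Swap the `asset` string and nothing else changes: the identify-analyse-evaluate-treat-monitor cycle operates on context and consequence, not on the technology's name.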


There’s nothing inherently wrong with acknowledging that AI creates new challenges. Bias, hallucinations, data poisoning, model drift, and lack of transparency are real issues. But they aren’t foreign to the discipline. They’re variants of the same old themes: accuracy, control, accountability, and misuse.


The problem is that too many organisations are trying to manage AI as if it were a sentient being rather than a tool. They’ve begun treating the system as the risk itself, instead of analysing how it’s designed, deployed, and governed within their ecosystem.


Context Is Still King in Risk Management


If you fail to define your context, the entire assessment collapses.


Context tells you what’s important, who’s involved, what’s at stake, and how much risk you can tolerate. It’s the difference between an assessment that drives decisions and one that fills a binder no one reads.


Every credible risk framework begins with context. AI doesn’t change that. In fact, it makes context even more critical. What’s the model’s purpose? Who maintains it? Where does the data originate? Who validates its outputs? And our favourite: what is the organisation’s information strategy for the next five years? That last question alone drives further conversations about context.


These are not AI-specific questions; they’re fundamental governance questions. The same logic that applies to a financial model, a data warehouse, or a third-party supplier applies here. The medium has changed. The responsibility hasn’t.


When organisations skip context, they start chasing symptoms instead of structure. They focus on the “AI problem” rather than the organisational one. That’s when controls become arbitrary and risk assessments turn into box-ticking exercises dressed in technical language.


AI Is a Tool, Not a Stakeholder


Somewhere along the way, we started anthropomorphising AI. It became an actor in the story. Something to manage, engage, even fear. But it’s still a tool. A very powerful one, yes, but a tool nonetheless.


AI doesn’t make ethical decisions. It doesn’t understand consequence. It doesn’t negotiate context. Those remain human functions. Treating AI as a participant in the governance process instead of an object of it is a category error.


When you strip away the marketing, AI systems process data, apply logic, and produce outputs. They reflect the design choices of their creators and the governance culture of their operators. Risk doesn’t emerge from the machine but from how people design, train, and trust it.

The smartest model in the room is still only as good as the person who configured it.


The Real Weak Link


Every risk professional knows where the real vulnerabilities lie: in human behaviour, first and foremost. Behaviour shapes nearly every decision that gets made, and how it gets made.


The biggest threat isn’t a rogue algorithm; it’s blind trust. It’s the team that accepts an AI output without validation because it looks impressive. It’s the Board that assumes “AI governance” is a checkbox someone else is handling.


Technology magnifies human strengths and weaknesses. A well-designed control environment makes AI safer. A complacent one makes it catastrophic. The core problem isn’t AI; it’s how organisations fail to align people, process, and purpose around it.


The Mirage of the New Framework


There’s an irony in all this. The more frameworks we invent, the less clarity we seem to have.

We now have dozens of “AI governance models,” “ethical AI standards,” and “trustworthy AI” initiatives, yet few address the root cause of poor risk outcomes: weak understanding of the system and lack of discipline in applying fundamentals.


Adding another framework doesn’t make you safer. It merely gives you another binder, another headache, and more work to keep you up at night. Real governance is consistency: applying the same principles of accountability, validation, and assurance across all technologies, whether they run on silicon or spreadsheets.


Back to the Fundamentals


Here’s what matters:


  • Define your context. Know exactly what you’re assessing and why.

  • Map your assets and dependencies. Understand what the AI system connects to and how it influences outcomes.

  • Validate your data and outputs. Don’t trust black boxes; verify them.

  • Embed governance. Apply your existing frameworks, controls, and escalation paths.

  • Maintain consistency. Treat AI the same way you treat every other critical system - with structured oversight, not hype.
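The five fundamentals above can be sketched as a simple gap check. This is a hypothetical illustration, not a real assurance tool: the step names follow the list, while the `assess` helper and the example system name are invented for the sketch.

```python
# The five fundamentals from the list above, as a checklist.
FUNDAMENTALS = [
    "define context",
    "map assets and dependencies",
    "validate data and outputs",
    "embed governance",
    "maintain consistency",
]

def assess(system: str, completed: set[str]) -> list[str]:
    """Return the fundamental steps still outstanding for a given system."""
    return [step for step in FUNDAMENTALS if step not in completed]

# Hypothetical example: a model where only two steps have been evidenced.
gaps = assess("credit-scoring model", {"define context", "embed governance"})
print(gaps)  # the remaining steps still to be evidenced
```

Nothing in the check cares whether the system is a model, a warehouse, or a supplier; the discipline is in working the list, not in renaming it.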


That’s it. The fundamentals haven’t changed. The most advanced organisations aren’t those building new frameworks. They’re the ones applying existing principles consistently across new frontiers.


Closing Thoughts


AI has changed the scale and speed of risk, but not its nature. It has introduced new complexity, but not new laws. Risk management remains what it has always been: the discipline of context, clarity, and consequence.


Before we rush to rewrite the rulebook, maybe we should just read the one we already have.

At Shimazaki Sentinel, we help organisations focus on what matters: understanding your environment, defining context, and applying real governance where it counts. Everything we do is real, practical, and dependable.


Because in risk management, the future belongs to those who understand it.

 
 

