
Just Because You Call It Risk-Based, Doesn’t Mean It Is

  • Thomas Jreige
  • Nov 7
  • 3 min read

The term “risk-based” has become fashionable. It appears in every new policy paper, cybersecurity framework, and AI regulation as if its mere presence guarantees wisdom and foresight. The European Union’s Artificial Intelligence Act, the world’s first comprehensive AI law, is a perfect example.


It is being praised as a “risk-based framework”. But if you read closely, it is not. Not really.


The AI Act is, at its core, a classification framework, not a risk framework. It divides AI systems into four boxes: unacceptable, high, limited, and minimal risk. On paper, that sounds methodical. In practice, it is a set of predefined categories that do not evolve with context, intent, or behaviour.
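
To see the shape of the problem, consider what that structure reduces to in code. This is a minimal sketch in Python: the four category names are the Act's own, but the example uses and the lookup mapping are illustrative assumptions, not quotations from the legislation.

    from enum import Enum

    class RiskCategory(Enum):
        UNACCEPTABLE = "unacceptable"
        HIGH = "high"
        LIMITED = "limited"
        MINIMAL = "minimal"

    # Hypothetical mapping from intended use to category. The point is the
    # shape: the label is fixed by function at approval time, and nothing
    # about deployment, scale, or misuse ever feeds back into it.
    CATEGORY_BY_USE = {
        "social_scoring": RiskCategory.UNACCEPTABLE,
        "medical_triage": RiskCategory.HIGH,
        "customer_chatbot": RiskCategory.LIMITED,
        "spam_filter": RiskCategory.MINIMAL,
    }

    def classify(intended_use: str) -> RiskCategory:
        # A static lookup: no likelihood, no impact, no context.
        return CATEGORY_BY_USE[intended_use]

Note what the function signature omits: there is no argument for who deploys the system, at what scale, or against whom.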


And that is a problem.


True risk is not static. It is situational, fluid, and often unpredictable. It is shaped by who uses the system, how it is deployed, what data feeds it, and how it can be weaponised. A “low-risk” chatbot can become a vector for psychological manipulation. A “minimal-risk” recommender algorithm can influence elections. Context changes everything.


The Act does not measure or model risk. It classifies based on function, not on exposure. It dictates controls according to category, not likelihood or impact. It is risk by name and compliance by design.


This distinction might seem small or even semantic, but it is not. Terminology is direction. When an entire global industry is being built around language, words matter more than ever.


Why This Matters


Regulators, engineers, and policymakers all speak different dialects of “risk.” In engineering, it is a measurable function: probability multiplied by consequence. In governance, it is often qualitative, built around trust and accountability. In law, words define reality.
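
Here is the engineering dialect made concrete, again as a sketch. Every number below is invented for illustration; the point is that two systems sharing a "limited" or "minimal" label can carry very different quantified exposure.

    def expected_loss(probability: float, consequence: float) -> float:
        # The classic engineering definition: likelihood multiplied by impact.
        return probability * consequence

    # Two hypothetical systems the Act's categories would treat as comparable.
    chatbot = expected_loss(probability=0.30, consequence=2_000_000)
    recommender = expected_loss(probability=0.05, consequence=50_000_000)

    print(f"chatbot: {chatbot:,.0f}")          # 600,000
    print(f"recommender: {recommender:,.0f}")  # 2,500,000

A category label carries none of this information; the numbers do.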


When you label something “risk-based,” you imply a process of assessment, mitigation, and continuous evaluation. You suggest adaptability, the ability to respond to shifting threats, emerging technologies, and adversarial behaviours.


The EU AI Act does not do this. It assumes risk can be predicted and categorised in advance.


That might work for safety compliance in static systems, but not for dynamic, self-learning ones.


This is not just a European issue. Legislators worldwide, including in Australia, are already adopting the same terminology without questioning whether the term means what they think it means. That is how frameworks drift. That is how oversight turns performative.


Classification Is Not Risk


Classification is useful. It brings structure and helps policymakers set priorities. But classification without reasoning is like triaging patients without examining their symptoms.


The AI Act’s categories are based on intended use, not real-world harm. It does not ask:


  • How might this system evolve over time?

  • Who could misuse it?

  • What are the second- and third-order consequences?

  • How does this risk interact with human psychology, economics, or geopolitics?


It assumes that systems used in law enforcement or healthcare are “high risk,” while tools like content recommendation or translation are “limited” or “minimal.” That might be true in a narrow sense, but history shows that the most destabilising systems rarely appear dangerous at first.


A Better Way Forward


A real risk-based approach to AI governance would require:


  • Dynamic risk modelling that updates as AI learns, scales, or shifts context (a minimal sketch follows this list).

  • Human–adversary simulation that evaluates not just what the system does, but how it could be abused.

  • Quantitative and qualitative metrics that combine technical analysis with human understanding.

  • Cross-sector adaptability that allows frameworks to evolve across industries and jurisdictions.
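
What the first of these could look like in practice, as a Python sketch. The context fields, weights, and figures are hypothetical placeholders, not a proposed standard; the point is that the score is recomputed as circumstances shift rather than frozen at approval.

    from dataclasses import dataclass

    @dataclass
    class Context:
        # Deployment context that can shift long after a system ships.
        user_base: int               # scale of exposure
        adversarial_pressure: float  # 0..1, estimated attacker interest
        data_sensitivity: float      # 0..1, sensitivity of the data feeding it

    def risk_score(likelihood: float, impact: float, ctx: Context) -> float:
        # Dynamic risk: the same base estimate is re-weighted as context
        # changes, instead of being frozen into a one-off category.
        scale = min(ctx.user_base / 1_000_000, 10.0)
        context_factor = 1.0 + ctx.adversarial_pressure + ctx.data_sensitivity
        return likelihood * impact * scale * context_factor

    # The same system, scored at launch and again a year later, after it has
    # grown and started attracting abuse:
    at_launch = risk_score(0.02, 1.0, Context(50_000, 0.1, 0.2))
    a_year_on = risk_score(0.02, 1.0, Context(8_000_000, 0.7, 0.6))
    print(f"launch: {at_launch:.3f}, one year on: {a_year_on:.3f}")
    # A static classification would report the same label at both points.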



This is not easy to legislate, but it is the only sustainable path forward.


Otherwise, AI regulation will end up full of paperwork, certification checklists, and conformity audits, but lacking the one thing that actually prevents harm: understanding.


The Bigger Picture


We are at a point in history where language defines law and law defines technology. If we misuse the language of risk, we dilute the integrity of what risk management truly means. We create a global echo chamber where policymakers believe they have built a safeguard when, in truth, they have built a filing cabinet.


This might sound like semantics to some, but in an age where AI systems will make decisions about finance, security, and even human freedom, semantics are everything.


Once you legislate a word, you legislate its meaning and its consequences.


The challenge ahead is not only about regulating machines. It is about reclaiming the precision of language before it regulates us.


Talk to the People Who Actually Understand Risk


At Shimazaki Sentinel, we do not confuse compliance with protection or classification with control. We live and breathe risk: the real kind that shifts with adversaries, markets, and human intent.


Our team blends intelligence, law, psychology, technology, and global security insight to help organisations move beyond paperwork and truly understand where their exposure lies.


If you want clarity, confidence, and conviction in how you govern AI and digital systems, talk to us.


Because in a world of frameworks and definitions, we are the only people who can show you what risk really looks like.

 
 
