
Responsible AI starts with data – not principles

Celise Skaar leads Cegal's global sustainability efforts. She focuses on what responsible practice truly requires in day-to-day operations, from data strategy to AI, and regularly writes about the intersection of technology, risk, and sustainability.
03/10/2026

Most organizations that say they work with responsible AI are actually working on formulating principles. It's not the same thing. Responsible AI easily becomes a document. A declaration. Something you stand behind in strategy presentations and annual reports. But a principle that is not rooted in how systems are actually built and operated is not governance - it's intention. And intention is not enough when it comes to accountability.

The principles are known. What they actually require is less discussed.

Most responsible AI frameworks are built on the same basic principles: fairness, reliability, security, and accountability. There is broad agreement that these are important. Far less attention is paid to what they mean in the day-to-day work of building and operating systems.

Because every principle, when you follow it far enough, ends up in the same place:

| Principle | What it actually requires |
| --- | --- |
| Fairness | Representative training data |
| Reliability | Validated and up-to-date data |
| Security and safety | Secure data processing |
| Accountability | Traceable data documentation |

The principles overlap and reinforce each other, but what they all have in common is that none of them can be met without deliberate handling of the underlying data.

Fairness requires that the training data is representative. An AI system learns from history, and data naturally reflects the context in which it was created - what events were recorded, what facilities were measured, what deviations were documented. That's not a problem in itself, but it's something that needs to be understood and actively managed.
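One way to make that management concrete is to measure how a training set's composition compares with the population it is meant to represent. The sketch below is illustrative only: the grouping key (`region`) and the reference shares are assumptions, not anything prescribed by a specific framework.

```python
from collections import Counter

def representation_gap(records, reference_shares, key="region"):
    """Compare each group's share of the training set against a
    reference distribution; negative gaps mean under-representation."""
    counts = Counter(r[key] for r in records)
    total = sum(counts.values())
    gaps = {}
    for group, expected in reference_shares.items():
        observed = counts.get(group, 0) / total if total else 0.0
        gaps[group] = observed - expected
    return gaps

# Hypothetical training set: 80% of records come from one context.
records = [{"region": "north"}] * 80 + [{"region": "south"}] * 20
gaps = representation_gap(records, {"north": 0.5, "south": 0.5})
# "south" is under-represented relative to the reference distribution
```

A check like this does not fix a skewed dataset, but it turns "the data reflects its context" from an abstract caveat into a number someone can act on.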

Reliability is maintained through data quality over time. A predictive system is only as good as the data that feeds it. Where AI recommendations are included in maintenance plans or operational decisions, this is a direct prerequisite for the system to do the job it is intended to do.
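In practice, "data quality over time" usually means a validation gate that records must pass before they feed a model. The field names, value range, and 30-day freshness threshold below are illustrative assumptions, not a standard.

```python
from datetime import datetime, timedelta, timezone

MAX_AGE = timedelta(days=30)  # assumed freshness requirement

def validate_record(record, now=None):
    """Basic quality gate: required fields present, values in a
    plausible range, and the reading recent enough to trust."""
    now = now or datetime.now(timezone.utc)
    errors = []
    if record.get("sensor_id") is None:
        errors.append("missing sensor_id")
    value = record.get("value")
    if value is None or not (-50.0 <= value <= 150.0):
        errors.append("value missing or out of range")
    ts = record.get("timestamp")
    if ts is None or now - ts > MAX_AGE:
        errors.append("reading missing or stale")
    return errors

stale = {"sensor_id": "p-101", "value": 42.0,
         "timestamp": datetime.now(timezone.utc) - timedelta(days=90)}
errors = validate_record(stale)  # flags the stale timestamp
```

The design choice that matters is returning a list of findings rather than a boolean: a maintenance planner can then distinguish "one stale sensor" from "the whole feed is broken."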

Security today is about more than managing access to systems. It includes how training data is protected, who has had access to it, and whether data supply chains are assessed with the same rigor as software supply chains.
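Treating a data supply chain like a software supply chain can start with something as simple as checksum verification: refusing to train on a dataset file unless it matches a published digest. A minimal sketch, assuming the expected SHA-256 value is distributed alongside the dataset:

```python
import hashlib

def verify_dataset(path, expected_sha256):
    """Hash a dataset file in chunks and compare against the
    published checksum before allowing it into a training run."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest() == expected_sha256
```

This is the same discipline already applied to software artifacts; the point is simply that training data deserves the same gate.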

Accountability requires traceability. Where AI systems are included in decision-making processes, it must be possible to understand the basis for the decisions afterwards. This requires that data is documented, that ownership is clarified, and that the models used are transparent enough to be managed.
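One lightweight form of that traceability is an audit record written alongside every model-driven decision, tying the output to the data and model versions that produced it. The field names and version labels below are hypothetical; the pattern is what matters.

```python
import hashlib
import json
from datetime import datetime, timezone

def decision_record(inputs, output, dataset_version, model_version, owner):
    """Audit entry for one decision: hashes the inputs so the basis
    can be verified later without duplicating raw data."""
    payload = json.dumps(inputs, sort_keys=True).encode()
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "input_hash": hashlib.sha256(payload).hexdigest(),
        "output": output,
        "dataset_version": dataset_version,
        "model_version": model_version,
        "data_owner": owner,
    }

rec = decision_record({"pump": "p-101", "vibration": 4.2},
                      "schedule_maintenance",
                      "sensors-2025-09", "rf-v3", "ops-team")
```

Because the inputs are serialized with sorted keys before hashing, the same inputs always produce the same digest, which is what makes after-the-fact verification possible.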

A question of foundation

An AI strategy without an associated data strategy rests on a weak foundation. Not because the technology is bad, but because the technology is only as good as the data it is built on.

The practical question for any organization that takes responsible AI seriously is therefore not primarily "what principles do we stand behind" - but "do we actually have control over the data that drives our systems, and do we know what happens when something goes wrong?"

That question deserves an equally concrete answer.
