The ETSI EN 304 223 standard introduces baseline security requirements for AI that enterprises must integrate into governance frameworks.

As organisations embed machine learning into their core operations, this European Standard (EN) establishes concrete provisions for securing AI models and systems. It stands as the first globally applicable European Standard for AI cybersecurity, having secured formal approval from National Standards Organisations to strengthen its authority across international markets.

The standard serves as a necessary benchmark. It addresses the reality that AI systems carry specific risks – such as susceptibility to data poisoning, model obfuscation, and indirect prompt injection – that traditional software security measures often miss. Its scope runs from deep neural networks and generative AI through to basic predictive systems, explicitly excluding only those used strictly for academic research.

ETSI standard clarifies the chain of responsibility for AI security

A persistent hurdle in enterprise AI adoption is determining who owns the risk. The ETSI standard resolves this by defining three primary technical roles: Developers, System Operators, and Data Custodians.

For many enterprises, these lines blur. A firm that fine-tunes an open-source model for fraud detection counts as both a Developer and a System Operator. This dual status triggers strict obligations, requiring the firm to secure the deployment infrastructure while documenting the provenance of training data and supporting audits of the model’s design.

The inclusion of ‘Data Custodians’ as a distinct stakeholder group directly impacts Chief Data and Analytics Officers (CDAOs). These entities control data permissions and integrity, a role that now carries explicit security responsibilities. Custodians must ensure that the intended usage of a system aligns with the sensitivity of the training data, effectively placing a security gatekeeper within the data management workflow.

ETSI’s AI standard makes clear that security cannot be an afterthought appended at the deployment stage. During the design phase, organisations must conduct threat modelling that addresses AI-native attacks, such as membership inference and model obfuscation.

One provision requires organisations to restrict functionality to reduce the attack surface. For instance, if a system uses a multi-modal model but only requires text processing, the unused modalities (such as image or audio processing) represent a risk that must be managed. This requirement forces technical leaders to reconsider the common practice of deploying massive, general-purpose foundation models where a smaller, more specialised model would suffice.
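
A minimal sketch of how that provision might be enforced in a serving wrapper, assuming an internal gate in front of the model endpoint; the class and field names are illustrative rather than taken from the standard:

```python
from dataclasses import dataclass, field


@dataclass
class ModelDeploymentConfig:
    model_id: str
    # Only the modalities the use case actually needs are switched on.
    enabled_modalities: set = field(default_factory=lambda: {"text"})

    def check_input(self, modality: str) -> None:
        # Reject inputs for any modality this deployment has not enabled,
        # keeping the unused capabilities off the attack surface.
        if modality not in self.enabled_modalities:
            raise PermissionError(
                f"Modality '{modality}' is disabled for {self.model_id}")


config = ModelDeploymentConfig(model_id="fraud-detector")
config.check_input("text")     # accepted
# config.check_input("image")  # raises PermissionError
```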

The document also enforces strict asset management. Developers and System Operators must maintain a comprehensive inventory of assets, including interdependencies and connectivity. This supports shadow AI discovery; IT leaders cannot secure models they do not know exist. The standard also requires the creation of specific disaster recovery plans tailored to AI attacks, ensuring that a “known good state” can be restored if a model is compromised.
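
As a sketch, an inventory entry of the kind the standard describes could capture ownership, interdependencies, connectivity, and a known good restore point; the schema below is hypothetical, not prescribed by the standard:

```python
from dataclasses import dataclass, field


@dataclass
class AIAsset:
    asset_id: str
    asset_type: str                # e.g. "model", "dataset", "pipeline"
    owner_role: str                # Developer, System Operator, or Data Custodian
    depends_on: list = field(default_factory=list)   # interdependencies
    endpoints: list = field(default_factory=list)    # connectivity
    known_good_snapshot: str = ""  # restore point for AI-specific recovery plans


inventory = [
    AIAsset("model-001", "model", "Developer",
            depends_on=["dataset-007"],
            endpoints=["https://api.example.com/v1/score"],
            known_good_snapshot="s3://backups/model-001/2025-01-15"),
    AIAsset("dataset-007", "dataset", "Data Custodian"),
]
```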

Supply chain security presents an immediate friction point for enterprises relying on third-party vendors or open-source repositories. The ETSI standard requires that a System Operator who chooses to use AI models or components that are not well documented must justify that decision and document the associated security risks.

Practically, procurement teams can no longer accept “black box” solutions. Developers are required to provide cryptographic hashes for model components to verify authenticity. Where training data is sourced publicly (a common practice for Large Language Models), Developers must document the source URL and acquisition timestamp. This audit trail is necessary for post-incident investigations, particularly when attempting to determine whether a model was subjected to data poisoning during its training phase.
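
A minimal sketch of both checks, assuming the vendor publishes a SHA-256 digest alongside each model artefact; the file paths and manifest format are illustrative:

```python
import hashlib
import json
from datetime import datetime, timezone


def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    # Stream the file so large model weights do not need to fit in memory.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()


def verify_artifact(path: str, published_hash: str) -> None:
    # Compare the local artefact against the vendor-published digest.
    actual = sha256_of(path)
    if actual != published_hash:
        raise ValueError(f"Hash mismatch for {path}: got {actual}")


def record_data_source(url: str, manifest_path: str = "provenance.json") -> None:
    # Append the source URL and acquisition timestamp to the audit trail.
    entry = {"source_url": url,
             "acquired_at": datetime.now(timezone.utc).isoformat()}
    try:
        with open(manifest_path) as f:
            manifest = json.load(f)
    except FileNotFoundError:
        manifest = []
    manifest.append(entry)
    with open(manifest_path, "w") as f:
        json.dump(manifest, f, indent=2)
```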

If an enterprise offers an API to external customers, it must apply controls designed to mitigate AI-focused attacks, such as rate limiting to prevent adversaries from reverse-engineering the model or overwhelming defences to inject poisoned data.
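
Rate limiting at the API layer is one such control; the token-bucket sketch below is generic, with per-client thresholds chosen arbitrarily rather than specified by ETSI:

```python
import time


class TokenBucket:
    def __init__(self, rate_per_sec: float, capacity: int):
        self.rate = rate_per_sec
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill in proportion to elapsed time, capped at bucket capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False


buckets: dict = {}


def check_request(client_id: str) -> bool:
    # A per-client budget makes bulk querying for model extraction or
    # poisoning attempts slow and expensive.
    bucket = buckets.setdefault(client_id,
                                TokenBucket(rate_per_sec=5, capacity=20))
    return bucket.allow()
```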

The lifecycle approach extends into the maintenance phase, where the standard treats major updates – such as retraining on new data – as the deployment of a new version. Under the ETSI AI standard, this triggers a requirement for renewed security testing and evaluation.
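
One way to operationalise this is a release gate in the deployment pipeline; the sketch below assumes an internal list of required evaluations, and the check names are hypothetical:

```python
# Evaluations every new model version must pass before release; the names
# here are placeholders, not a list taken from the standard.
REQUIRED_CHECKS = {"adversarial_robustness", "data_poisoning_scan",
                   "red_team_review"}


def can_deploy(version: str, passed_checks: set) -> bool:
    # Retraining yields a new version, so the full battery runs again.
    missing = REQUIRED_CHECKS - passed_checks
    if missing:
        print(f"Blocking {version}: missing {sorted(missing)}")
        return False
    return True


can_deploy("fraud-detector-v3", {"adversarial_robustness"})  # blocked
```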

Continuous monitoring is also formalised. System Operators must analyse logs not just for uptime, but to detect “data drift” or gradual changes in behaviour that could indicate a security breach. This moves AI monitoring from a performance metric to a security discipline.
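
A common way to make drift detection concrete is a statistic compared against an alert threshold. The sketch below uses the Population Stability Index; the 0.2 threshold is a widely used rule of thumb, not an ETSI requirement:

```python
import numpy as np


def psi(reference: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
    # Bin both samples on the reference distribution's edges.
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_counts, _ = np.histogram(reference, bins=edges)
    cur_counts, _ = np.histogram(current, bins=edges)
    # Smooth to avoid division by zero, then normalise to proportions.
    ref_p = (ref_counts + 1e-6) / (ref_counts.sum() + bins * 1e-6)
    cur_p = (cur_counts + 1e-6) / (cur_counts.sum() + bins * 1e-6)
    return float(np.sum((cur_p - ref_p) * np.log(cur_p / ref_p)))


baseline = np.random.normal(0.0, 1.0, 10_000)  # scores logged at deployment
recent = np.random.normal(0.8, 1.0, 10_000)    # scores from live traffic
if psi(baseline, recent) > 0.2:
    print("Drift alert: investigate possible poisoning or behaviour change")
```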

The standard also addresses the “End of Life” phase. When a model is decommissioned or transferred, organisations must involve Data Custodians to ensure the secure disposal of data and configuration details. This provision prevents the leakage of sensitive intellectual property or training data through discarded hardware or forgotten cloud instances.

Executive oversight and governance

Compliance with ETSI EN 304 223 requires a review of existing cybersecurity training programmes. The standard mandates that training be tailored to specific roles, ensuring that developers understand secure coding for AI while general staff remain aware of threats like social engineering via AI outputs.

“ETSI EN 304 223 represents an important step forward in establishing a common, rigorous foundation for securing AI systems”, said Scott Cadzow, Chair of ETSI’s Technical Committee for Securing Artificial Intelligence.

“At a time when AI is being increasingly integrated into critical services and infrastructure, the availability of clear, practical guidance that reflects both the complexity of these technologies and the realities of deployment cannot be underestimated. The work that went into delivering this framework is the result of extensive collaboration and it means that organisations can have full confidence in AI systems that are resilient, trustworthy, and secure by design.”

Implementing these baselines in ETSI’s AI security standard provides a structure for safer innovation. By enforcing documented audit trails, clear role definitions, and supply chain transparency, enterprises can mitigate the risks associated with AI adoption while establishing a defensible position for future regulatory audits.

An upcoming Technical Report (ETSI TR 104 159) will apply these principles specifically to generative AI, targeting issues like deepfakes and disinformation.
