
A Post-Normal Science Framework for Rethinking EU AI Governance

Amodio, Claudia
2025

Abstract

Taking its vantage point from the Post-Normal Science (PNS) framework, this paper examines how the governance of artificial intelligence (AI) has become a test case for democratic societies confronting conditions where ‘facts are uncertain, values in dispute, stakes high, and decisions urgent.’ It traces how PNS, originally developed for other science-policy domains marked by inadequate expert-driven policymaking and contested categories of risk and safety, offers analytical and normative resources for reimagining AI governance. The framework’s core principles - embracing irreducible uncertainty and constituting extended peer communities - find contemporary expression in the United Nations Educational, Scientific and Cultural Organization’s (UNESCO) participatory assessment methodologies, which treat AI systems as sociotechnical assemblages requiring ethical scrutiny rather than merely technical certification. By contrast, the European Union (EU) AI Act, despite its ambitious scope, embeds a troubling contradiction: it recognizes AI’s dynamic and unpredictable character while operationalizing oversight through risk categories, conformity assessments, and industry-led standardization processes that assume knowability and control. Now that the EU AI Act has entered into force, the decisive arena for responsible and democratic AI governance has shifted from legislative debate to the seemingly quiet, procedural machinery of implementation through standards. Standardization emerges not as a neutral technical exercise but as a political process determining whose expertise matters, which harms register as regulable, and what remains invisible to oversight. Here, corporate influence threatens to calcify into epistemic capture, encoding industry priorities as objective technical requirements.
The paper argues that critical scholarship must engage these seemingly procedural spaces as sites where the material and epistemic foundations of rights and freedoms are actively being constructed - and where knowledge itself becomes both a source and outcome of law-making. Only through co-regulatory processes that embrace the ‘uncomfortable knowledge’ and epistemic humility demanded by PNS can Europe realize its stated ambition to steer AI’s trajectory through a distinctive and responsible model of governance.

Use this identifier to cite or link to this document: https://hdl.handle.net/11392/2613272