The implementation of the AI Act requires practical mechanisms to verify compliance with legal obligations, yet concrete and operational mappings from high-level requirements to verifiable assessment activities remain limited, contributing to uneven readiness across Member States. This paper presents a structured mapping that translates high-level AI Act requirements into concrete, implementable verification activities applicable across the AI lifecycle. The mapping is derived through a systematic process in which legal requirements are decomposed into operational sub-requirements and grounded in authoritative standards and recognised practices. From this basis, verification activities are identified and characterised along two dimensions: the type of verification performed and the lifecycle target to which it applies. By making explicit the link between regulatory intent and technical and organisational assurance practices, the proposed mapping reduces interpretive uncertainty and provides a reusable reference for consistent, technology-agnostic compliance verification under the AI Act.