The Konfer EU AI Act Control Questions Catalog, an industry-first, is specifically tailored to provide Compliance and Risk officers with crucial insights for navigating the new EU AI Act.
Our AI Generator analyzed the entire EU AI Act and presents its guidance and mandates as control questions that enterprise leaders can share with their teams. The Control Questions can also consume the team's responses, giving organizations a comprehensive picture of their AI risks, risk posture, and mitigation decisions.
© 2024 Konfer, Inc. All rights reserved.
The EU AI Act classifies AI systems into three categories: Prohibited AI, High-Risk AI, and Moderate/Low-Risk AI.
Title II of the EU AI Act establishes the list of prohibited AI practices. The list includes AI systems that, for example, could violate fundamental rights, manipulate persons through subliminal techniques, or assist in general-purpose social scoring of natural persons.
The control questions, generated from the text of Articles 4, 4a, 4b, and 5, are meant to assist an organization in properly classifying AI systems that are under development, under consideration for deployment, or being placed on the market.
Title III pertains to AI systems that could create a high risk to the health, safety, or fundamental rights of natural persons. These AI systems are permitted on the EU market, subject to compliance with certain mandatory requirements and a conformity assessment before the systems are placed on the market.
Chapter 1, consisting of Articles 6 and 7, sets out the classification rules and identifies two main categories of high-risk AI systems: AI systems intended to be used as safety components of products, and standalone AI systems that could affect the fundamental rights of natural persons. Systems in the former category are subject to third-party conformity assessment before being placed on the market. The control questions, generated from the text of Articles 6 and 7, assist the organization's stakeholders in classifying their high-risk systems appropriately.
Chapter 2 sets out the legal requirements for high-risk AI systems in relation to data and data governance, documentation and record keeping, transparency and provision of information to users, human oversight, robustness, accuracy, and security. The control questions, generated from the text of Articles 8–15, detail the conformance requirements for each of these aspects and guide the organization in developing and deploying its AI systems correctly.
Chapter 3, consisting of Articles 16–29a, delineates the obligations of providers, users, importers, and distributors of high-risk AI systems. The control questions, generated from the text of Articles 16–29, pertain only to the high-risk AI systems under consideration. Controls pertaining to an organization's infrastructure and similar topics are out of scope for this download.
Title III, Chapter 5, consisting of Articles 40–51, details the conformity assessment procedures to be followed for each type of high-risk AI system. Konfer's control questions enable the organization's development and compliance teams to remain compliant with the conformity assessment requirements throughout the life cycle of their AI systems.
Title IV of the EU AI Act concerns the transparency obligations for certain AI systems that interact with humans, detect emotions, determine association with (social) categories based on biometric data, or generate or manipulate content ("deep fakes"). These systems must inform users that they are interacting with AI. The control questions, generated from the text of Article 52, provide a checklist for the organization's stakeholders to ensure they discharge their transparency obligations in the development and deployment of AI systems.
Title VIII pertains to the post-market monitoring and reporting obligations for providers and deployers of AI systems. The control questions, derived from the text of Articles 61–68e, are scoped to enabling the necessary capabilities in AI systems so that providers can achieve compliance with the requirements of this section of the EU AI Act.
The Konfer NIST AI RMF Control Questions Catalog is a pioneering resource developed to equip AI Governance, Compliance, and Risk officers with vital insights for mastering the NIST AI Risk Management Framework.
Konfer’s patent-pending AI deciphers the entirety of the NIST AI RMF, translating its guidelines and principles into actionable control questions. This enables leaders to disseminate critical information among their teams effectively, fostering a thorough understanding of AI risks, evaluating risk posture, and guiding strategic mitigation decisions.
In collaboration with the private and public sectors, NIST has developed a framework to better manage risks to individuals, organizations, and society associated with artificial intelligence (AI). The NIST AI Risk Management Framework (AI RMF) is intended for voluntary use and to improve the ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems.
Released on January 26, 2023, the Framework was developed through a consensus-driven, open, transparent, and collaborative process that included a Request for Information, several draft versions for public comments, multiple workshops, and other opportunities to provide input. It is intended to build on, align with, and support AI risk management efforts by others.