The EU AI Act has arrived – What do GRC professionals need to know?

Businesses – and governance, risk, and compliance (GRC) professionals in particular – should prepare now for the implementation of the EU AI Act (AIA). The law has been approved by the Council of EU Ministers, paving the way for its formal adoption, and it is expected to enter into force later this year. A phased rollout will see legal requirements for businesses increase between now and 2027.

Additionally, the UK’s national standards body has published new guidance for safely managing AI. Organisations using AI tools will therefore need to prepare for the responsibilities and obligations they will carry.

While AI undoubtedly offers businesses opportunities and benefits, significant challenges persist. Concerns regarding data privacy, misinformation, ethical dilemmas, and a lack of transparency remain paramount. As a result, we are seeing a monumental shift in focus towards establishing AI policies and regulations.

However, questions linger around the AIA’s governance, its ability to adapt to evolving AI, and how the regulation compares with emerging frameworks such as the US Blueprint for an AI Bill of Rights.

Why is it important to prioritise AI safety?

To put it simply, AI can speed up the pace of doing business. For boards and GRC professionals in particular, AI tools can automate tasks such as risk assessments and documentation, data analysis, and decision-making support. However, as the technology continues to evolve and advance, it has begun to cause major concern among individuals, businesses, and governments. In fact, our recent survey found that while 61% of boards and executive teams believe their investment in technology, data, or AI has improved their processes, AI still requires human oversight at every stage.

One of the main challenges with AI use is data privacy. Companies often collect vast amounts of data to fuel AI, raising concerns about user information being used without explicit consent. A key driver for AI regulation in Europe is to encourage responsible data privacy. This is also reflected in the Blueprint for an AI Bill of Rights issued by the White House, highlighting a global consensus on respecting user expectations of privacy. Businesses should ensure they comply with the AIA by collecting only the data that is strictly necessary. This includes implementing safeguards throughout the data lifecycle: the collection, use, access, transfer, and deletion of user data.
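
To make this tangible, here is a minimal sketch in Python of how a team might encode lifecycle-stage safeguards as a checklist. The stage names follow the lifecycle above; the safeguard names are illustrative assumptions, not terms from the Act:

# Hypothetical mapping of lifecycle stages to the safeguards a policy might require.
LIFECYCLE_SAFEGUARDS = {
    "collection": ["explicit_consent", "purpose_recorded"],
    "use":        ["purpose_limitation"],
    "access":     ["role_based_access"],
    "transfer":   ["recipient_vetted"],
    "deletion":   ["retention_expired", "erasure_logged"],
}

def missing_safeguards(stage: str, controls_in_place: set) -> list:
    """Return the safeguards required at a lifecycle stage that are not yet in place."""
    return [s for s in LIFECYCLE_SAFEGUARDS.get(stage, []) if s not in controls_in_place]

# Example: a transfer where only consent has been documented flags one gap.
print(missing_safeguards("transfer", {"explicit_consent"}))  # ['recipient_vetted']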

Other key challenges include ethical and legal responsibilities. For instance, do organisations need to ensure customers are aware of when they use AI and give informed consent? How should industries address possible biases or other shortcomings in the data being used? To address these questions, organisations should prioritise transparency when using AI and complying with AI regulations. While the AIA doesn’t ban deepfakes entirely, it does require creators to be transparent about their artificial origin and to provide information on the techniques used. Organisations should also be open about labelling deepfakes, notifying people when they are interacting with an AI system such as a chatbot, and disclosing when emotion recognition or biometric categorisation systems are being applied to them. Similarly, providers of AI systems should maintain up-to-date technical documentation, register the system in the EU’s database, and monitor the system after it enters the market.

What will AI governance mean for businesses?

With so many risks and considerations surrounding an organisation’s AI systems, regulatory standards such as the AIA will help ensure that those systems are safe and responsible. While much of the Act covers providers of AI systems, companies that use these tools have responsibilities and obligations as well. For instance, some uses will be prohibited outright. As detailed by the Confederation of European Data Protection Organisations (CEDPO), these include social scoring and the use of generative AI applications with certain data, such as sensitive personal information, or in some industries such as healthcare.

The AIA will provide a regulatory framework with risk management at its heart. AI systems will be classified by their level of potential harm, from minimal risk through high risk to prohibited applications. High-risk systems will require a host of risk management processes, including the use of relevant and representative data for training, validation, and testing, plus human oversight and assurances of robustness, accuracy, and cybersecurity. To further support risk management, systems will need to meet conformity requirements, and throughout a system’s life, providers must actively and systematically collect, document, and analyse relevant data on reliability, performance, and safety.
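
As an illustration only, a first-pass triage might look like the Python sketch below. The tier names mirror the Act’s broad structure, but the triggering attributes are simplified assumptions, not legal criteria; real classification needs legal review:

from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"   # banned practices, e.g. social scoring
    HIGH = "high"               # systems subject to conformity assessment
    LIMITED = "limited"         # transparency duties, e.g. chatbots
    MINIMAL = "minimal"         # no specific obligations

def triage(system: dict) -> RiskTier:
    """Rough first-pass triage of a system described by illustrative boolean flags."""
    if system.get("social_scoring"):
        return RiskTier.PROHIBITED
    if system.get("safety_component") or system.get("biometric_identification"):
        return RiskTier.HIGH
    if system.get("interacts_with_users"):
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(triage({"interacts_with_users": True}).value)  # limited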

All industries must prepare for the AIA to come into force, given that it is industry-agnostic. It applies to all sectors, and its purview extends across the AI value chain: from manufacturers and providers of AI systems to importers, distributors, and authorised representatives, all will be affected. Companies outside Europe will also need to prepare wherever a system or its output is used in the EU.

Exercising due diligence for boards and GRC

There is a clear need for a consolidated view of governance, risk, and compliance across organisations and their use of AI. With the AIA bringing high penalties for non-compliance (fines of up to 35 million euros or 7% of an organisation’s global annual turnover, whichever is higher, for the most serious violations), careful oversight is a necessity.
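
Because the ceiling is whichever figure is greater, exposure scales with company size. A quick sketch of the arithmetic:

def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Ceiling for the most serious AIA violations:
    the higher of EUR 35 million or 7% of global annual turnover."""
    return max(35_000_000, 0.07 * global_annual_turnover_eur)

# A company with EUR 2 billion in turnover faces a ceiling of EUR 140 million, not 35 million.
print(f"{max_fine_eur(2_000_000_000):,.0f}")  # 140,000,000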

In preparation for the AIA, companies should build and implement an AI governance strategy. Next, they should map, classify, and categorise the AI systems they use or have under development, based on the risk levels in the AIA framework. High-risk systems require a conformity assessment to demonstrate compliance before being placed on the EU market. GRC professionals will then need to perform gap assessments to evaluate whether current policies and controls on privacy, security, and risk can be applied to AI. Finally, a strong governance framework, encompassing both in-house and third-party AI solutions, must be established. As AI policies and standards continue to evolve, GRC professionals will need to stay informed to ensure their business remains compliant.
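
One pragmatic way to keep classification, conformity status, and gap findings in a single view is a simple system register. The sketch below uses hypothetical fields chosen for illustration, not a prescribed schema:

from dataclasses import dataclass, field

@dataclass
class AISystemEntry:
    name: str
    supplier: str                  # "in-house" or a third-party vendor
    risk_tier: str                 # e.g. "high", "limited", "minimal"
    conformity_assessed: bool = False
    policy_gaps: list = field(default_factory=list)

def needs_action(entry: AISystemEntry) -> bool:
    """High-risk systems must pass conformity assessment and close every policy gap."""
    if entry.risk_tier == "high":
        return not entry.conformity_assessed or bool(entry.policy_gaps)
    return bool(entry.policy_gaps)

register = [
    AISystemEntry("resume-screener", "third-party", "high", policy_gaps=["bias testing"]),
    AISystemEntry("support-chatbot", "in-house", "limited"),
]
for entry in register:
    print(entry.name, "-> action required" if needs_action(entry) else "-> compliant so far")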

To consolidate all of these processes, businesses should invest in a centralised GRC platform. This will enable them to stay ahead not only of fast-changing regulations but also of the increasing use of AI in the future. A centralised platform provides a unified perspective on risks and delivers impactful insights that, as a result, guide better decision-making. It will also prevent boards and GRC professionals from being overwhelmed by vast amounts of information that are incomplete, inaccurate, and unmanageable.

Figure 1: Diligent One Platform

AI has the potential to help many industries excel; however, organisations need to consider how to use AI responsibly and the importance of compliance. Successful AI use in business means striking the right balance between innovation and regulation. With the AIA set to stimulate investment in and innovation of AI in Europe, businesses need to ensure they are prepared both to understand and to meet its regulatory requirements. Through risk management, a governance strategy, and a centralised approach, GRC professionals can lead their businesses to readiness.

Author: Keith Fenner

Source: aijourn.com
