In today’s third post on artificial intelligence, we look at the prospects for the planned legal changes. As the impact of AI on our lives grows ever stronger, and as its applications often elude legal provisions established many years ago, it seems evident that lawmakers need to address the issue. Attempts to regulate artificial intelligence are underway in many countries around the world. From our perspective, the most important are the activities of the European Union in this area.
Currently, work is underway on a proposal for a comprehensive horizontal regulation of AI at the EU level. In February 2020, the European Commission published a White Paper on Artificial Intelligence, discussing the challenges and their possible solutions. The paper was opened for public consultation, with the participation of EU citizens, Member States, NGOs, industry representatives and academics. On 21st April 2021, a follow-up to the White Paper is to be presented, about a month behind the original schedule, and concrete legislative proposals are expected. According to Roberto Viola, Director General of DG CONNECT (Directorate-General for Communications Networks, Content and Technology) at the European Commission, the proposed legislation is to contain a common definition of artificial intelligence and rules of liability for activities related to it. The regulation of low-risk and high-risk technologies may differ. This division was outlined in the White Paper and appears to have won the European Commission’s approval. High-risk technologies include those used in healthcare, transport, police operations, recruitment and legal systems, as well as technologies posing a risk of death, injury or damage. Autonomous cars, for example, are a likely candidate for classification as high-risk: although they are already technologically advanced and provide a high level of safety, the risk of an accident cannot be eliminated entirely, which creates the need for a liability regime covering potential damages. In addition, work is also underway on the regulation of radio frequencies used for communication between devices connected within the Internet of Things (IoT).
The European Commission has set three goals in its strategy on artificial intelligence. First – putting Europe at the forefront of technological development and promoting the use of AI by both public and private entities. Second – preparing for the socio-economic changes brought about by the development of artificial intelligence. Third – ensuring an appropriate ethical and legal framework for the use of AI. The European Commission intends to take measures that will accelerate technological development, facilitate cooperation between Member States and increase the level of investment in the field of artificial intelligence.
In January 2021, several dozen human rights NGOs, including the Panoptykon Foundation, Amnesty International and Human Rights Watch, published a joint letter calling on the European Commission to ensure that the artificial intelligence regulation safeguards fundamental EU rights and liberties. Among other things, they point to the risks associated with machine learning-based facial recognition and the potential for privacy violations or unethical uses of such tools. They highlight research showing that existing technologies have been better at recognizing the faces of white men than of women or people of color, which may put the latter at a higher risk of being misidentified. They also warn of the risks associated with the use of artificial intelligence in migration control, in systems determining access to social rights and benefits, in predictive policing, and in the criminal justice system and pre-trial context.
An interesting example of such a risky application of AI is the case of the SyRI program, which was used by the Dutch government to determine the probability that an individual would commit tax or social benefit fraud. In February 2020, the Hague District Court ordered the government to discontinue the use of the program, pointing to a lack of transparency about how the system functioned and about the criteria on which it selected the people to be inspected. This lack of transparency created a possibility of discrimination and deprived the parties to the proceedings of the right to an effective appeal against the decision to initiate an inspection.
The scale of the challenges the European Commission is facing today is enormous. The proposed regulation must be suited to goals as diverse as supporting the development of new technologies and increasing investment, establishing rules of liability for activities involving artificial intelligence, and preventing uses of AI that may violate human rights. We are looking forward to 21st April to find out about the Commission’s proposal, and we will certainly keep you posted.