Will artificial intelligence allow humans to regulate it?

In the public consciousness, one as-yet-unanswered question looms large: could artificial intelligence (AI) attain self-awareness? And if so, would it perceive humans as a threat and act with hostility toward them?

Sam Altman, CEO of OpenAI, the creative force behind ChatGPT, has taken the bold stance of publicly calling for the regulation of generative AI, the very technology upon which his own creation is built.

Intriguingly, experts predict that within a mere decade, AI systems will exceed expert skill levels in most domains and rival the productivity of today’s largest corporations.

“In terms of both potential upsides and downsides, superintelligence will be more powerful than other technologies humanity has had to contend with in the past. We can have a dramatically more prosperous future; but we have to manage risk to get there. Given the possibility of existential risk, we can’t just be reactive.” (www.theguardian.com, by Alex Hern, May 24, 2023)

The European Union is in the advanced stages of finalizing its Artificial Intelligence Act (AI Act), which may become the world’s first comprehensive law regulating AI.

Back in April 2021, the European Commission proposed the first EU regulatory framework for AI. Just last Wednesday, the European Parliament adopted its negotiating position on the AI Act, with 499 votes in favor, 28 against, and 93 abstentions. The next step is reaching a consensus among EU member countries on the final form of the law. (https://www.europarl.europa.eu/news/en/press-room)

The primary objective of the AI Act is to establish a robust framework ensuring that AI systems used within the EU are safe and adhere to existing laws on fundamental rights, norms, and values. AI systems are defined broadly to include logic- or rule-based information processing as well as probabilistic approaches such as machine learning. The rules are intended to apply universally to all companies seeking to deploy AI systems within the EU, whether or not they are based in the EU.

The AI Act introduces regulations that impose obligations on both providers and users, adopting a risk-based approach to regulating AI systems. Systems posing an unacceptable level of risk to human safety are prohibited outright, while low-risk systems face minimal or no regulation.

Unacceptable risk

AI systems with an unacceptable level of risk to people’s safety would be prohibited. These include systems used for social scoring (classifying people based on their social behavior or personal characteristics). The list was also expanded to ban other intrusive and discriminatory uses of AI, such as:

a. “Real-time” remote biometric identification systems in publicly accessible spaces
b. “Post” remote biometric identification systems, with the sole exception of use by law enforcement for the prosecution of serious crimes, and only after judicial authorization
c. biometric categorization systems using sensitive characteristics (e.g. gender, race, ethnicity, citizenship status, religion, political orientation)
d. predictive policing systems (based on profiling, location or past criminal behavior)
e. emotion recognition systems in law enforcement, border management, the workplace, and educational institutions
f. untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases (violating human rights and right to privacy)
g. Cognitive behavioral manipulation of people or specific vulnerable groups (for example, voice-activated toys that encourage dangerous behavior in children)

High risk

AI systems that negatively affect safety or fundamental rights will be considered high risk and must be assessed before being placed on the market and throughout their lifecycle.

High-risk AI systems include those used in products covered by the EU’s product safety legislation, such as toys, aviation, cars, and medical devices. AI systems falling under the following specific areas are also considered high risk and must be registered in a database:

a. Biometric identification and categorization of natural persons
b. Management and operation of critical infrastructure (like those used as safety components in the management and operation of road traffic and the supply of water, gas, heating and electricity)
c. Education and vocational training (systems used to determine access to, or assign natural persons to, educational and vocational training institutions, and to assess students and score tests required for admission)
d. Employment, worker management and access to self-employment (systems used in recruitment or the selection of natural persons, evaluating candidates, making decisions on promotion and termination of employment, allocating tasks, and monitoring performance and behavior)
e. Access to and enjoyment of essential private services and public services and benefits (systems used to evaluate the eligibility of natural persons for public assistance benefits and services, or to grant or reduce such benefits)
f. Law enforcement (systems used to assess the risk of a person offending or reoffending, to evaluate the reliability of evidence in the investigation or prosecution of criminal offences, and to profile persons)
g. Migration, asylum and border control management
h. Assistance in legal interpretation and application of the law (systems to assist courts in researching and interpreting facts and the law and in applying the law to facts)

Limited risk

Limited-risk AI systems must still comply with minimal transparency requirements that allow users to make informed decisions. A chatbot is a typical example: the rules require the AI system to let users know that they are interacting with a machine.

Generative AI systems, such as ChatGPT, fall under their own classification and would have to comply with additional transparency requirements:

a. Disclosing that the content was generated by AI
b. Designing the model to prevent it from generating illegal content
c. Publishing summaries of copyrighted data used for training

Low risk

These systems are low risk because they do not use personal data or make any predictions that influence human beings. (https://www.europarl.europa.eu/news/en/press-room)

In the Philippines, we have House Bill No. 7396, introduced by Congressman Robert Ace Barbers and filed on March 1, 2023. HB 7396 proposes to establish an AI Development Authority (AIDA) to oversee the development and deployment of AI technology.

The AIDA would be tasked with, among other things, formulating a national AI development strategy, conducting research, providing guidance to AI developers and users, establishing licensing and certification requirements for AI developers, developing data and cybersecurity standards, and promoting and supporting AI research and development activities.

Quite clearly, we in the Philippines are still catching up on the intricacies of AI, which inevitably affects our capacity to formulate comprehensive regulations in the field. However, we can draw valuable insights from regulations being developed in other countries, much as we successfully implemented privacy laws by adapting regulations from abroad.

By drawing inspiration from international practices, we can accelerate our progress in establishing effective AI regulations, ensuring that we remain aligned with global standards while catering to our unique context.

(The author, Atty. John Philip C. Siao, is a practicing lawyer and founding Partner of Tiongco Siao Bello & Associates Law Offices, teaches law at the MLQU School of Law, and is an Arbitrator of the Construction Industry Arbitration Commission of the Philippines. He may be contacted at jcs@tiongcosiaobellolaw.com. The views expressed in this article belong to the author alone.)
