Be careful what you wish for (thoughts on AI)

“Regulate AI!”

I often hear this call from well-meaning crusaders who fear that artificial intelligence (AI), a technology that was developed to help man, might end up controlling him. Strangely, the present screams for regulation worry me as much as the evils that AI can possibly inflict.

My anxiety stems from three things.

Firstly, regulating anything requires a complete understanding of what is being regulated. Otherwise, regulation becomes a solution looking for a problem. Putting a handle on a technology that is rapidly morphing with no calculable terminal form makes it even more difficult to figure out where the technology police should come in, if at all.

With all due respect, I get the sense that many of those calling for regulation are prematurely hitting the fear button without any idea what bigger monster they’re bringing upon themselves. For example, the loudest warnings I’ve heard about AI are from those who have played with ChatGPT and are disappointed by its occasionally misleading output.

Rants

I mean no offense but I’m sorry to remind them that ChatGPT is an early stage “use case” of AI and is by no means the entirety of AI. The rant should therefore be confined to ChatGPT with no sweeping call to build inhibitory fences around the AI universe.

Not only that. They should take ChatGPT for what its creators said it is meant to be, that is, a language model that scans all the data it has access to and presents it in the format required by the user/instructor. ChatGPT is not a fact checker or a quality control tool.

Two days ago, I encountered a clear example of the confusion surrounding AI and the way its entire ecosystem works. In a chat group, someone lamented the laughable search results of “AI Overview” on Google about cockroaches and the male anatomy. In quick reaction, someone dressed down Google for having “no quality controls.”

Guys, take a deep breath. Google is a search and sharing platform where all manner of information, right or wrong, is shared. It is not a publication that claims authorship of everything inside it. It can probably shut out patently harmful content detected by universally accepted filters but it cannot proofread, censor or editorialize the way content authors do.

By analogy, you can say that Google is like a superhighway through which vehicles of all shapes and sizes pass. Builders are responsible for ensuring that superhighways are constructed and equipped to enable safe and orderly navigation but they cannot control the physical condition of the vehicles that enter the highway nor the driving behavior of vehicle operators. That responsibility belongs to someone else.

Tool

The second reason I fear the clamor for AI regulation is that many of the fears behind it appear to be misplaced. Saying that ChatGPT will discourage critical thinking among students assumes that the latter will use the tool indiscriminately and that teachers will uncritically swallow what they are fed.

I would not belittle the creative pride of today’s students whose generation bred the most amazing solutions to problems that their elders created. Nor would I underestimate the astuteness and sagacity of today’s teachers who acquired their perspectives in an era that saw the world change many times over.

Technology expands the range of tools available to man. It is up to man to figure out which tools serve his objectives best. I believe that ChatGPT can make a good student even better. In the same way, I do not think that ChatGPT will make a decidedly lousy student any better, at least not under the care of a responsible teacher.

I concede that AI can be a tool of deception but that just makes the technology double-edged. A sharp knife can be a great friend to both a chef and a murderer. A gun can be indispensable to both a law enforcer and a criminal. One can choose to either shun the tool or use it judiciously.

In my corporate life, I would jokingly tell the auditors that an “all clear” audit rating is a red flag that should make the Board ask whether business came through the door at all or Management simply shut the door to completely avoid risk.

We need not be reminded that natural intelligence is as guilty as AI when it comes to deception and nontransparency. It is quite amusing that people are only now up in arms against the evils that AI is capable of. Either people have forgotten or simply never found out that natural intelligence and AI worked like soulmates to condition minds and distort facts in several political exercises in the last few years.

No choice

At the barest minimum, we need to comprehend AI so that we can use it to make better persons of ourselves. One must not shun technology and hide behind the “nontechie” self-label. Come on, that cute air fryer and that trusted microwave oven required some degree of “techie-ness” when you first used them.

You learned because you had no choice. Remember when you first learned to drive? You probably got so intimidated by all the gauges and meters on the dashboard that you wished cars were never invented. Here’s more news for you: You have no choice but to confront and feel AI, maybe at your own pace, maybe ever so slowly, maybe with extreme pain, but never “never” unless you intend to live under a rock.

Heavy hand

The third and final reason I get frightened by shrill calls for AI regulation is that we invite grave danger every time we call for “regulation.” Often, regulation brings in the heavy hand of government. Never mind the heavy hand if it is matched by the best of intentions and a sensible mindset. It has been shown time and again that government regulation, with all the pragmatic horse-trading and special-interest motivations that get baked into it, can end up mangling beyond recognition its desired policy outcome.

We need not be painfully reminded that when the agriculture sector asked for help, Congress gave them the agri-agra law, which helped no one. When small enterprises cried for support, Congress gave them the Magna Carta for Small and Medium Enterprises, which supported no one.

In the United States, when the cryptocurrency community sought regulatory clarity and market order, they were choked by a dysfunctional Securities and Exchange Commission (SEC) that chose to “legislate by jurisprudence.” This retarded the full development of a new technology until the courts decided to call out the SEC chair for his folly and straightened him out.

Please don’t get me wrong. I believe that something as powerful and life-changing as AI needs a sound, orderly and just ecosystem. That is not possible if we let this beast run on its own. On the other hand, it will not be fair to mankind if we just cage this animal and cut its legs off because of fear.

As with anything, balance is healthy. But first, we need to fully understand this beast in order to develop the “right” fears. What might these fears be? In my view, we need to ensure that proprietary platforms do not get unilateral and unrestricted power to decide, based on their pure commercial interests, what a user sees and which audience he can reach. Data and its permutations are like water, energy and any other natural resource in today’s knowledge world.

Governments should come in strongly when there is visible appropriation of intelligence by monoliths for their sole benefit and to the exclusion of the very population that provided the data in the first place. On the other hand, governments should be absent when countries and peoples try to work transnationally to democratize intelligence and tear down all the chasms that cause inequitable distribution of wealth.

There is time to get our thoughts straight and ride this AI beast before rushing to call the technology police.

Regulate AI? Be careful what you wish for. INQ

This article reflects the personal opinion of the author and does not reflect the official stand of the Management Association of the Philippines or MAP. The author is past president of MAP. He is chair of Maybridge Finance and Leasing Inc.
