
“AI” exploration is a test of human critical thinking | Not a test of “artificial intelligence”

Given recent excitement and apprehension around “artificial intelligence” (“AI”) triggered by the public release of ChatGPT, the popular platform powered by OpenAI’s algorithm, we must parse reality from sensation and prepare for pretenders jumping in to clutter a powerful and dangerous space. Internalizing what really constitutes “AI”, what “AI” can really do for us, and what should scare us will simplify the conversation.

John Oommen

I am a mainstream adopter of new gadgets. So, playing with ChatGPT early on was fun. As a novice, I see a strong language model that has been taught to speak well. It acts as a good “thought partner,” replaying previously fed information in different language formats when we pose nebulous questions. As a user, I find it to be a very confident and opinionated search engine. As a technologist, I consider it a monumental achievement to create a conversation engine that can understand a myriad of inputs and communicate with anyone easily.

I hope we can all acknowledge a simple reality: ChatGPT’s “main purpose is to use natural language processing to understand and generate human-like responses to text-based queries,” as explained by the tool itself. So, if the tool itself tells us it is a language model and is not “thinking,” then why are professors and testing institutions asking ChatGPT to take the bar exam or the SAT? ChatGPT is a conversational search engine that remembers previously fed information. If we search appropriately, Google Search will also return the information needed to construct answers to SAT and bar exam questions.

This brings us to the real conundrum: Why is ChatGPT passing the Turing Test even though it is just a language-processing research model?

Alan Turing, widely considered the father of modern computing, postulated that it is not logical to ask whether a machine is intelligent. He was right. We have had intelligent machines for millennia; a mechanical trigger that responds to surrounding conditions is an intelligent machine. The Turing Test really assesses the tester: does the tester consider the subject’s responses human-like?

This is the theme that worries most people about “artificial intelligence.” We don't expect M3GAN, the scary robot from the namesake movie, to be walking amongst us yet. But Goldman Sachs is already predicting, based on testing of a research model, that 300 million jobs in rich countries will see deep impact within a few years. Google search frequency for the term “artificial intelligence” remained flat from 2004 to 2021, but it has quadrupled in the last year.

“AI” has the same problem as social media, but at a greater amplitude. We created social media as a solution without a problem definition. It has become a glorified sales and marketing vehicle because we couldn’t figure out a more relevant need for it. Tricks around keyword choices and the provocation of herd mentality supersede the quality of content, or the truth, on social media. The biggest social media companies have invested ever more dollars and staff to curb destructive behavior, with limited success.

On that note, what is our problem definition that “artificial intelligence” will solve?

The first option: Do we want to just automate the most mundane tasks? But we have automated mundane tasks for centuries. A mousetrap is an automated mechanical device.

The second option: Do we want “AI” to predict the future based on information fed into it by humans? We have been able to predict probabilistic outcomes for a long time, with increasingly fast and easy access to such predictions. Have you used weather forecasts in the past few years? They are immaculate.

These first two options have existed for a long time, and we haven't bothered to call them “AI.” There is a third option, but we are far from being able to deploy it.

The third option: Do we want “AI” to make predictions, and even decisions, with the freedom to access any information on the planet? For that, we would need advanced robotics for a machine to gather input from both the online and offline worlds, and we would need global alignment to grant a machine such access, which seems highly unlikely.

So, which option are folks talking about when we use the term “AI”? The truth? We have absolutely no idea. And this highlights our first problem: What is the formal definition of “artificial intelligence”?

The lack of a good definition leads to the second problem: anyone can package anything as “AI” and try to sell it, because customers don’t know what “AI” is. ChatGPT, which is clearly not thinking, is already passing the Turing Test with people holding advanced degrees, because we train society to trust popular ideas and we don't internalize what objective testing implies. So, our second problem is that there is nothing to protect customers and investors from a bevy of so-called “AI” solutions marketed to monetize vaporware with spurious accuracy and intentions. This also explains the quadrupling of Google searches for “AI” in the past year.

“AI” exploration is a test of human critical thinking | Not a test of artificial intelligence. Source: Acumes

Key Insight

Would we handle a butter knife and a sushi knife the same way? No. A sushi knife is extremely sharp and requires special handling.

Alan Turing may be correct that our reaction to a technology is a test of its intelligence. But it is also a reflection of human critical-thinking skills. To master increasingly powerful tools, we must train to understand why they work the way they do and become smarter, savvier, and more analytical. Otherwise, we will be misled while handling something we don’t understand. Remember, people paid to have tea leaves read because charlatans popularized the practice!

Every living thing or object, including technology, follows the same simple equation: Input + Process = Output.

We control tools when we understand the inputs that go into creating them and the integrity of the process that transforms those inputs into outputs. At the benign end, taking the output of a predictive model at face value without critical thinking leads to poor productivity and misallocated investment. At the harmful end, the implications are unthinkable.

Call To Action

Social media’s origin and impact offer a two-decade lifecycle view of the “AI” pitfalls awaiting us. It's worth reiterating that social media was a product without a tangible problem definition. It amplified disinformation and misrepresentation to the point where our productivity growth is now lower than it has been in decades. Citizens of most countries live in the most divisive environments in history because people with selfish intentions figured out how to turn citizens against citizens through effective segmentation. How can we prevent a further escalation of these trends with “AI”?

We can choose to be masters of our tools during the upcoming “AI” wave.

First, ensure that we internalize the answers to these three fundamental questions to protect ourselves from touching misleading “AI” tools or overbuying tools we don't need.

  1. What does “artificial intelligence” mean?
  2. What is the range of widgets marketed under the term “artificial intelligence”?
  3. Which spectrum of this wide range do we have an application for?

Think about it. If we curl 30lb weights, would we buy 60lb weights just because we live next to Dwayne "The Rock" Johnson?

Second, let's embrace humility and put ourselves through a two-point critical-thinking test to see how ready we are to use the “AI” tools we consider useful, so we are not taken for a ride by pretenders. I have included this two-point test in the infographic.

  1. How good are we at assessing the topical competence of a person or entity delivering information?
  2. How good are we at discerning the underlying incentives and level of objectivity of a person or entity delivering information?

Try to apply this two-point test in our daily lives. Imagine sitting down with a friend. Imagine interviewing a candidate. Imagine listening to a TV anchor. Imagine reading the newspaper. Practice these questions.

In Closing...

As we explore OpenAI’s ChatGPT or Google’s Bard, let’s remember these facts. First, they are research projects shared with the public as works in progress. They are not products intended to solve any particular problem; we are treating them as solutions because we like shiny objects. Second, consider the inputs. These tools are fed publicly available information, and everything on the web is an interpretation of reality. Articles like mine, white papers, and scientific papers are all peppered with author biases. Third, they are language models; they are not thinking about the topic you ask about at all.

It is not sensible to feed bar exam or SAT questions to eloquent search engines that are not thinking and then compare their scores to humans'. These stunts are tabloid fodder and distract us from necessary considerations.

We have been working on predictive models for a long time, and we already have great practical applications like weather and traffic forecasting. It is also relatively easy to validate the outputs of algorithms for games with fixed rules, like chess or Go. No doubt we will have more valuable predictive models available for mass-market applications soon enough.

But effective predictions require a large volume of inputs. Mass-market applications have millions of data points to learn from, but most applications are niche. Niche use cases have only fuzzy definitions of what ‘good’ outcomes look like, lack alignment on the process that creates ‘good’ outcomes, and don't have well-defined inputs to feed models. Given this reality, “AI” will go through the same frenzy and uproar over the next few years that social media and cryptocurrencies did, unless individuals and companies elevate their critical thinking and master these tools.