I am a mainstream adopter of new gadgets, so playing with ChatGPT early on was fun. To a novice, it appears to be a strong language model taught to speak well. It acts as a good “thought partner,” replaying previously fed information in different language formats when we pose nebulous questions. To a user, it is a very confident and opinionated search engine. To a technologist, it is a monumental achievement: a conversation engine that can understand myriad inputs and communicate with anyone easily.
I hope we can all acknowledge a simple reality: ChatGPT's “main purpose is to use natural language processing to understand and generate human-like responses to text-based queries,” as explained by the tool itself. So, if the tool tells us it is a language model and is not “thinking,” why are professors and testing institutions asking ChatGPT to take the bar exam or the SAT? ChatGPT is a conversational search engine that remembers previously fed information. If we search appropriately, Google Search will return the information needed to construct answers to SAT questions and bar exams as well.
This brings us to the real conundrum: Why is ChatGPT passing the Turing Test even as a language processing research model?
Alan Turing, considered the father of computer science, argued that asking whether a machine is intelligent is not a meaningful question. He had a point. We have had intelligent machines for millennia; a mechanical trigger that fires based on surrounding conditions is an intelligent machine. The Turing Test really assesses the tester: it measures whether a human judge finds the subject's responses human-like.
This is the theme that worries most people about “artificial intelligence.” We don't expect M3GAN, the scary robot from the movie of the same name, to be walking amongst us yet. But Goldman Sachs is already predicting, based on the testing of a research model, that 300 million jobs in rich countries will be deeply affected within a few years. Google search frequency for the term “artificial intelligence” remained flat from 2004 to 2021, but it has quadrupled in the last year.
“AI” has the same problem as social media, but at a greater amplitude. We created social media as a solution without a problem definition. It became a glorified sales and marketing vehicle because we couldn't figure out a more relevant need for it. Tricks around keyword choices and the provocation of herd mentality supersede the quality of content or truth on social media. The biggest social media companies have invested ever more dollars and staff to curb destructive behavior, with limited success.
On that note, what is our problem definition that “artificial intelligence” will solve?
The first option: Do we want to automate only the most mundane tasks? We have been automating mundane tasks for centuries; a mousetrap is an automated mechanical device.
The second option: Do we want “AI” to predict the future based on information fed into it by humans? We have been able to predict probabilistic outcomes for a long time, with increasingly fast and easy access to such predictions. Have you used a weather forecast in the past few years? They are remarkably accurate.
These first two options have existed for a long time, and we never bothered to call them “AI.” There is a third option, but we are far from deploying it.
The third option: Do we want “AI” to make predictions and even decisions, with a machine free to access any information on the planet? A machine would need advanced robotics to gather input from both the online and offline worlds, and we would need global alignment to grant it such access, which seems highly unlikely.
So, which option are folks talking about when we use the term “AI”? The truth? We have absolutely no idea. And this highlights our first problem: What is the formal definition of “artificial intelligence”?
The lack of a good definition leads to the second problem: anyone can package anything as “AI” and try to sell it, and customers don't know what “AI” is. ChatGPT, which is clearly not thinking, is already passing the Turing Test with people who hold advanced degrees, because we train society to trust popular ideas and we don't internalize what objective testing implies. So, our second problem is that nothing protects customers and investors from a bevy of so-called “AI” solutions marketed to monetize vaporware with spurious accuracy and intentions. This also explains the quadrupling of Google searches for “AI” in the past year.
Would we handle a butter knife and a sushi knife the same way? No. A sushi knife is extremely sharp and requires special handling.
Alan Turing may be correct that our reaction to technology is a test of its intelligence. But it is also a reflection of human critical thinking skills. To master increasingly powerful tools, we must train to understand why they work the way they do and become smarter, savvier, and more analytical. Otherwise, we will be misled while handling something we don't understand. Remember, people paid to have tea leaves interpreted because charlatans popularized the practice!
Every living thing or object, including technology, follows the same simple equation: Input + Process = Output.
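The equation can be made concrete with a toy sketch (the “model” and all numbers below are invented purely for illustration): the same process produces a trustworthy or a misleading output depending entirely on its inputs, which is why neither can be taken at face value.

```python
# Illustrative only: a toy "predictive model" whose output is exactly
# as trustworthy as its inputs and process -- no more, no less.

def predict_average_income(survey_responses):
    """A naive 'process': average whatever inputs it is given."""
    return sum(survey_responses) / len(survey_responses)

# Representative inputs -> a reasonable output.
balanced_sample = [45_000, 52_000, 48_000, 51_000]
print(predict_average_income(balanced_sample))   # 49000.0

# Skewed inputs, identical process -> a confidently misleading output.
skewed_sample = [45_000, 52_000, 48_000, 10_000_000]
print(predict_average_income(skewed_sample))     # 2536250.0
```

The model never signals that anything went wrong in the second case; only a user who inspects the inputs and the process can tell the two answers apart.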
We control tools when we understand the inputs that go into creating them and the integrity of the process that transforms those inputs into outputs. Taking the output of a predictive model at face value, without critical thinking, leads to poor productivity and misallocated investment at the benign end. The implications at the harmful end are unthinkable.
Social media's origin and impact offer a two-decade lifecycle view of the "AI" pitfalls awaiting us. It's worth reiterating that social media was a product without a tangible problem definition. It amplified disinformation and misrepresentation to a level where our productivity growth is now lower than it has been in decades. Citizens of most countries live in the most divisive environments in history because people with selfish intentions figured out how to turn citizens against one another through effective segmentation. How can we prevent a further escalation of these trends with “AI”?
We can choose to be a master of tools during the upcoming “AI” wave.
First, ensure that we internalize the answers to the three fundamental questions above, to protect ourselves from misleading "AI" tools and from overbuying tools we don't need.
Think about it. If we curl 30lb weights, would we buy 60lb weights just because we live next to Dwayne "The Rock" Johnson?
Second, let's embrace humility and put ourselves through a two-point critical thinking test to see how ready we are to use the “AI” tools we consider useful, so we avoid being taken for a ride by pretenders. I have included this two-point test in the infographic.
Try to apply this two-point test in your daily life. Imagine sitting down with a friend. Imagine interviewing a candidate. Imagine listening to a TV anchor. Imagine reading the newspaper. Practice these questions.