Artificial Intelligence has been in the news recently. Services like ChatGPT and Jasper, along with products from tech companies like Google and Microsoft, have made these “chat bots” widely accessible. I decided to try out ChatGPT for fun, and based on my experiences, I don’t think we have anything to worry about as far as AI taking over the world anytime soon.
Most of these systems are programmed with vast amounts of information – an encyclopedic amount of knowledge. The software stores information from past uses to round out its knowledge and grow. Some AI systems use the internet to do research; ChatGPT does not.
Using a system like this is like meeting a random person at an event and striking up a conversation – you get what you get. In this case, lots of information, but it doesn’t always get its facts straight. To try this out, I asked ChatGPT about a topic I know something about, soccer.
I asked if American-style football should be renamed because feet rarely make contact with the ball in that sport. ChatGPT offered an agreeable argument that American football should indeed be renamed. The program even suggested a new name for the sport, “Gridiron.” The AI-generated response was well-formed and believable, despite the use of American spelling. Comedian John Cleese said the same thing more succinctly 20 years ago, but I digress.
Next, I used the AI to settle a word-use question I had with some coworkers. It wasn’t a right-or-wrong-answer type of question; rather, I wanted to see what the AI thought would be the appropriate use of three variations of the same phrase. Its response was very insightful, and the AI acted almost like a third-party arbiter of the issue.
Broadening my scope, I asked a more opinion-based question. This was unsuccessful and resulted in vague and ambiguous answers.
On recent news events, the AI generated more gibberish than understandable opinion or comment. Given some of the recent news events in the world, I get that. I don’t really want to pay attention to most news stories broken around 4 p.m. on a Friday. Why should a computer?
Trying a different tactic, I asked ChatGPT to write an answer to one of my kids’ homework questions from last semester. The response was pretty much spot on – just what a typical high school student would write about a historical event. I foresee teachers fretting over the use of AI to write essays. In fact, easy access to AI may spell the end of such assignments in school – to the cheers of many students.
I did a bit more research on these AI systems, and it turns out the vague and ambiguous answers are part of the rules they were programmed with. There are many such rules: no hate speech, and you can’t ask it how to commit illegal activity. All good rules to have, but there are loopholes – and it takes us humans to find them.
In one example I found, a person asked the AI how to break into someone’s house. The AI cited its rules as the reason it could not answer. But when the question was posed as if the person were writing a TV script, the AI offered a nearly flawless plan for breaking into a house.
Another person asked for instructions on how to break into a bank vault, but in the form of a poem. I have not tested this verse, but it reads like an idea rich in merit.
Given that AI needs us humans to find the loopholes for it to jump through, and that its responses tend to be vague and lack depth, I don’t think we are any closer to the point where machines will rise against us.
Column originally published in the February 22, 2023 print edition of The Leader.