MacTalk

January 2023


ChatGPT: The Future of AI Is Here

Artificial intelligence (AI) has progressed in fits and starts for 70 years. It’s one of those technologies, like commercial fusion power, that’s always 20 years away. Now we may actually be on the cusp of an AI revolution. But it’s not the one you’re expecting.

We’ve become accustomed to machine learning (ML), where a neural network is trained on a large number of samples until it can recognize items on its own. Google and Apple use ML to identify objects in your photos. Search for “mountain” or “dog,” and your phone will find matching pictures, not because you’ve tagged them, but because Photos has been trained to recognize images containing those items. Text-to-image systems like Stable Diffusion are trained on millions of pictures and can generate an image from a text description, like “a penguin eating a donut on a tropical beach.” These systems are useful, and certainly thought-provoking in what they borrow from and say about human creativity, but no one thinks of them as human-level cognition.

A penguin eating a donut on a beach, courtesy of Stable Diffusion

Image by Stable Diffusion Playground

At the other extreme, Artificial General Intelligence (AGI) is the name given to a fully sentient computer system of at least human-level intelligence. This is the stuff of science fiction movies, from HAL in 2001: A Space Odyssey to the virtual assistant Samantha (voiced by Scarlett Johansson) in Her. To be clear, AGI doesn’t exist, and no one knows how to build one yet.

Let’s Have a Chat

But something new and game-changing just shipped: ChatGPT. It comes from the OpenAI research and development company, which has been working on it for years. ChatGPT is currently free—you can create an account and talk to it. It’s built on version 3.5 of GPT (Generative Pre-trained Transformer, glad you asked!), a deep-learning system that produces text that reads as though a human wrote it.

OpenAI’s technology, while complex, is based on the simple idea of pattern matching and extension. If you feed the pattern-matching engine every work of Shakespeare, you can then ask it to write more Shakespeare, and it will generate new text in the style of Shakespeare. (Much easier than rounding up an infinite number of monkeys and typewriters.) If you feed GPT everything available on the Internet, you can give it a prompt about anything, and it will generate something interesting. That’s more or less what OpenAI has done with ChatGPT in chatbot form, and it’s tremendously impressive.
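If you want to see the core idea in miniature, a toy next-word model fits in a few lines of Python. This is a deliberately crude sketch of my own, a word-level Markov chain rather than anything resembling GPT’s neural network, but the basic move of predicting the next word from what came before is the same:

    import random
    from collections import defaultdict

    def train(text):
        """Record which words follow which word: the crudest possible language model."""
        model = defaultdict(list)
        words = text.split()
        for prev, nxt in zip(words, words[1:]):
            model[prev].append(nxt)
        return model

    def generate(model, start, length=20):
        """Extend the pattern: repeatedly sample a likely next word."""
        word, output = start, [start]
        for _ in range(length):
            followers = model.get(word)
            if not followers:
                break
            word = random.choice(followers)  # common continuations get picked more often
            output.append(word)
        return " ".join(output)

    model = train("to be or not to be that is the question")
    print(generate(model, "to"))  # e.g., "to be that is the question"

GPT swaps the lookup table for a transformer network with billions of parameters trained on much of the Internet, but it is still extending patterns, not consulting a store of facts.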

Ben Thompson of Stratechery asked ChatGPT, “Did Thomas Hobbes believe in separation of powers?” and received this reply:

Yes, Thomas Hobbes believed in the concept of separation of powers. In his book “Leviathan,” Hobbes argued that a strong, centralized government is necessary to maintain order and prevent conflict in society. However, he also believed that a single ruler or group should not have complete control over all aspects of government. Instead, he advocated for the separation of powers into different branches, with each branch having specific responsibilities and powers.

This is fascinating. It’s concise, well-written English.

It’s also wrong. As Thompson points out, John Locke believed in separation of powers, Thomas Hobbes did not. They’re often discussed together, so ChatGPT probably ingested many essays contrasting Locke, Hobbes, and separation of powers and associated the general concepts without attributing the details correctly.

It would be a mistake to think that ChatGPT “knows” who Locke and Hobbes are and associates a set of beliefs with each man. That would imply that it maintains a database of knowledge, and OpenAI could correct the database to associate the right views with each man. ChatGPT doesn’t work like that. It’s a statistical model of what the likely next word or sentence is, based on what came before. It does have a feedback mechanism, but it’s designed more to train the model to go in a different direction in response to a question than to correct a particular wrong fact.

In a podcast, Benedict Evans suggested thinking of ChatGPT as a new kind of Internet search engine. When you ask Google a question, it returns links to the Web pages most likely to contain relevant information. When you ask ChatGPT a question, it summarizes everything it has read about that topic on the Internet.

However, where a search engine has multiple ways of ranking the quality of the pages it returns, ChatGPT reflects whatever it finds in its training material, warts and all, presumably with a bias toward the most common groupings of words and sentences. Since that largely comes from the Internet, and thus from human beings, it contains all the ugly things humans do. OpenAI has tried to keep ChatGPT from reflecting that bigotry. It won’t take the bait if you ask it obviously racist questions.

Implications of Summarizing the Internet

The Internet contains a lot of material on programming. You can ask ChatGPT to write programs in Python or JavaScript, and it will likely get them right because there’s such a wealth of training material online. However, if you ask ChatGPT to write a Python program that ranks the best characteristics for scientists by race and sex, it will return one that ranks white men first. Ouch.

ChatGPT writing Python functions

This result shows that ChatGPT is complex enough that there’s no simple way to say “don’t be evil.” Again, it has no database of knowledge in which OpenAI could label certain ideas as “bad” and tell ChatGPT to avoid them. It’s a stochastic prediction model that just picks the next words based on statistical training.

Another interesting trick people have discovered is asking ChatGPT to generate or run computer programs. A simple example is asking ChatGPT to simulate a Unix shell. You type in a shell command like ls, and ChatGPT responds exactly as a Unix shell would. (OpenAI has since tweaked ChatGPT not to respond to Unix commands.)

It’s easy to think that since ChatGPT is actually a computer program, it’s simply running this command for you, like a real Unix shell. This is wrong. It’s going through millions of pieces of training data showing how Unix shells respond, and it’s returning its best guess at the correct text. It has no understanding that a Unix shell is a computer program, while Shakespeare was a person.
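To be concrete about how people pulled this trick off: there was no special interface, just a prompt. Here’s a rough sketch using OpenAI’s text-completion API as it existed at the time; the model name and parameters are my assumptions, since ChatGPT itself was only reachable through its Web page:

    import openai  # assumes the openai package is installed and OPENAI_API_KEY is set

    # The entire "shell" is just a prompt. The model continues the transcript.
    prompt = (
        "I want you to act as a Linux terminal. I will type commands and you "
        "will reply with what the terminal should show, and nothing else.\n"
        "$ ls\n"
    )

    response = openai.Completion.create(
        model="text-davinci-003",  # an assumption; a GPT-3.5-era completion model
        prompt=prompt,
        max_tokens=100,
        temperature=0,
    )
    print(response.choices[0].text)  # a plausible-looking (but fabricated) file listing

Nothing ever executes ls; the model simply predicts what a terminal transcript would look like.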

Similarly, Thompson asked ChatGPT what 4839 + 3948 – 45 is. It said 8732, and when Adam Engst tried while editing this article, it answered 8632. Both answers are wrong—it should be 8742. Again, ChatGPT may be a computer program, but it isn’t doing any arithmetic. It’s looking through its huge text model for the most likely next words, and its training data was both wrong and inconsistent. But at least it showed its work!

ChatGPT failing to do math correctly
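For contrast, the actual arithmetic is trivial for software that computes rather than predicts; here’s Python doing it as a calculator:

    print(4839 + 3948 - 45)  # prints 8742, deterministically, every time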

This is why even though ChatGPT can generate computer code, I wouldn’t use it in a real program. Its answers are not necessarily correct; they’re just based on Internet training data. It’s likely to return errors both subtle and blatant. Unlike human language, computer programs need to be 100% correct. That’s why Stack Overflow banned ChatGPT-generated code.

White-Collar AIs

While AI is unlikely to put programmers out of work anytime soon, it is coming for many other professions. ChatGPT-inspired systems will undoubtedly take over from today’s weak support chatbots and start replacing human customer-support representatives. The writing is on the wall for higher-end jobs too. Research assistants of all kinds may be replaced by programs that can summarize the current state of knowledge on any subject, at least what’s available on the Internet. “Content farm” websites already use the likes of GPT to auto-generate text—when will it be good enough for mid-tier sites writing about sports, movies, celebrities, and any other topic where speed, quantity, and low cost are more important than a human journalist’s quality, accuracy, and voice? Will lawyers lose simple bread-and-butter contract work to AIs? (Answer: yes.)

ChatGPT-generated rental contract

As blogger Kevin Drum noted:

A world full of lawyers and professors and journalists who are able to calmly accept the prospect of millions of unemployed truck drivers will probably be a wee bit more upset at the prospect of millions of unemployed lawyers, professors, and journalists.

ChatGPT really is the leading edge of a massive wave of AI that’s about to wash over society. You can see in the many dialogues posted online that it’s pretty good at answering questions and being pleasantly conversational.

Evolution of Chatbots and Societal Implications

There’s still plenty of room to improve, however. Currently, ChatGPT doesn’t have much “state”—that is, it doesn’t really remember what you’re talking about from question to question. If you were to ask it, “When was Super Bowl 50?” it may reply, “2016.” If you then ask, “Who won?” it would have to retain state information from the last question to realize you’re asking who won that particular Super Bowl.

ChatGPT comment on state

Retaining state is easy for humans but hard for computers. That’s especially true if you bring up a conversation from a few days ago that involves multiple people and places, and you refer to “him” and “her” and “there” rather than actual names, as humans tend to do. If you’re planning a trip to Paris next week, and you ask your spouse, “Have they started packing?” (in reference to your kids), your spouse will know what you mean, whereas a computer won’t. But this shortcoming will likely be addressed soon. Our AIs will have a persistent memory of the people and events in our lives.
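One plausible way to graft state onto a stateless next-word predictor, and roughly what chat interfaces appear to do, is to resend the whole conversation with every new question. A minimal sketch of the idea, where chatbot is a stand-in for whatever model you’re calling:

    history = []  # the entire "memory" is just accumulated text

    def ask(question, chatbot):
        """Prepend the prior conversation so the model can resolve "Who won?"."""
        transcript = "".join(f"Q: {q}\nA: {a}\n" for q, a in history)
        answer = chatbot(transcript + f"Q: {question}\nA:")
        history.append((question, answer))
        return answer

    # ask("When was Super Bowl 50?", bot)  ->  "2016"
    # ask("Who won?", bot)                 ->  the model now sees the earlier Q&A,
    #                                          so "Who won?" means Super Bowl 50

The model itself remains stateless; the illusion of memory comes from feeding it an ever-longer transcript, which is also why such memory eventually runs out of room.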

The next step will be giving ChatGPT a voice, integrating it with Siri, Alexa, or Google Assistant, so we can just talk to it. The state of the art in computer-generated voices is already good and will continue to improve until it sounds sufficiently like a person that it’s not immediately obvious you’re talking to a computer. Celebrity voices might even become popular, so you could have Google Assistant sound like Scarlett Johansson for a nominal fee. (Google already has celebrity voices in its Waze GPS navigation app.)

Once there’s a voice interface, people will start having long, private conversations with their AIs. They will develop emotional relationships (which, to be fair, people have been doing since the original ELIZA chatbot debuted in 1966). No matter how often computer scientists tell people that an AI is not intelligent, that it’s just a statistical language model, people will ascribe feelings, desires, and sentience to it.

There will be good and bad uses for this technology. Older people and shut-ins will have someone to talk to and keep them company. Those with autism might find it an untiring conversational companion, something we’ve already seen with Siri. Small children could develop an unhealthy co-dependence on their friendly AI, talking to it every night as they go to bed, unable to understand why mommy and daddy won’t let them chat with it at the dinner table.

Say hello to your new best friend. The privacy implications alone are enough to give George Orwell nightmares.

Computer scientists and philosophers have pondered for years whether it’s possible to create a conscious computer program. We may get programs that almost everyone thinks of as intelligent and talks to as if they were, even though programmers can show there’s nothing intelligent going on inside them: just complex pattern-matching. The computer scientist Edsger Dijkstra said, “The question of whether machines can think is about as relevant as the question of whether submarines can swim.”

The societal implications of everyone having an electronic best friend, who they can talk to privately, whenever they want, about whatever they want, as long as they want, are hard to predict. Who knows what this will do to our ability to communicate with each other? But if you think parents complain that kids spend too much time texting with one another and playing video games with (human) friends online, you ain’t seen nothing yet.
