David Baker

ChatGPT 4.0 is Here! But does it really matter?

If I'm still having to write my own blog posts, then the promise of AI has been a false one

As you’ve undoubtedly heard (or read), the crew behind ChatGPT has been excited about the release of the latest version of its interactive, online “artificial intelligence” searchbot (yes, the quotation marks are there for a reason), claiming that the newest version is exponentially better than the previous version 3.5. I even saw one article claiming that when taking the Bar Exam (it didn’t specify which bar exam), the new version “consistently scored in the upper 10th percentile of passing grades,” while the earlier version could only boast of consistently placing in the lowest 10 percent. Whether or not this is an accomplishment is an open question.



Still, I was intrigued, so I just visited my favorite AI searchbot and asked it, “What is ChatGPT 4.0 and why does it matter?”


Dutifully, ChatGPT replied,


“As an AI language model, I do not have versions in the traditional sense, but I have been continuously trained and improved since my initial release in 2020. However, OpenAI has released various versions of its GPT (Generative Pre-trained Transformer) language model, including GPT-3, which is the most advanced version as of my knowledge cutoff date.


GPT-3 (and previous versions) matter because they are some of the most advanced language models ever created, capable of generating human-like responses to text prompts and performing a wide range of natural language processing tasks. They have the potential to revolutionize many industries, from customer service and content creation to education and healthcare. Furthermore, GPT-3 and other language models like it have the potential to help researchers better understand human language and cognition, as well as inform the development of future AI systems.”


By comparison, the same question posed to Google elicited considerably less enthusiasm,


“At its core, ChatGPT 4.0 is a neural network that has been pre-trained on a vast amount of text data to understand the structure and semantics of language. When you provide a prompt or input, the model uses this pre-trained knowledge to generate a response that is relevant and coherent.”



And senior editor Kelli Maria Korducki, writing for The Atlantic, seemed to share the skepticism when she wrote,


“… GPT-4’s facility with words and syntax doesn’t necessarily amount to intelligence—simply, to a capacity for reasoning and analytic thought. What it does reveal is how difficult it can be for humans to tell the difference.


“Even as LLMs are great at producing boilerplate copy, many critics say they fundamentally don’t and perhaps cannot understand the world,” my colleague Matteo Wong wrote yesterday. “They are something like autocomplete on PCP, a drug that gives users a false sense of invincibility and heightened capacities for delusion.”


“How false is that sense of invincibility, you might ask? Quite, as even OpenAI will admit.

“Great care should be taken when using language model outputs, particularly in high-stakes contexts,” OpenAI representatives cautioned yesterday in a blog post announcing GPT-4’s arrival.


“Although the new model has such facility with language that, as the writer Stephen Marche noted yesterday in The Atlantic, it can generate text that’s virtually indistinguishable from that of a human professional, its user-prompted bloviations aren’t necessarily deep—let alone true. Like other large-language models before it, GPT-4 “‘hallucinates’ facts and makes reasoning errors,” according to OpenAI’s blog post. Predictive text generators come up with things to say based on the likelihood that a given combination of word patterns would come together in relation to a user’s prompt, not as the result of a process of thought.”


Why It Matters. You tell me.
