David Baker

ChatGPT Takes Over the Online World

Copyright infringement may be the least of concerns when it comes to AI



For decades, pundits have warned us of the dangers posed by robots, automated machines, and anything else powered by "artificial intelligence" ("AI" for short), and for just as long we have ignored those warnings and made room for smart toasters, self-driving cars, and smartphones in our daily lives.


But now, there is quite the kerfuffle as "chatbots," led by a chatbot known simply as ChatGPT (an abbreviation for "Chat Generative Pre-trained Transformer"), are being integrated into modern society right alongside that autopilot landing your Southwest Airlines flight to Phoenix or that Tesla automated driving system about to crash your EV into a stalled 18-wheeler on the Interstate.


Why the big concern all of a sudden?


Chatbots themselves certainly aren’t very scary. They’ve been around in one form or another for years, and we’ve been interacting with them, in their simplest form, on customer service calls. Really, they’re nothing more than computer programs designed to simulate conversation with human users. The problem is how much they’ve improved.


In fact, some of them have improved so much that it’s difficult to distinguish them from a living, breathing person, especially when the interaction takes place over the Internet.


And ChatGPT is even better.



According to a recent article on Techopedia by Margaret Rouse, ChatGPT “is a complex machine learning model that is able to carry out natural language generation (NLG) tasks with such a high level of accuracy that the model can pass a Turing Test.”

The suddenly popular chatbot “was trained on massive amounts of unlabeled data scraped from the internet before 2022. The model is continually being monitored and fine-tuned for specific language-oriented tasks with additional datasets labeled by humans.”


And it appears that ChatGPT really does excel at certain specific tasks, such as answering questions, completing a given text or phrase, writing fiction and non-fiction from prompts, and even producing humanlike chatbot responses. The problem is that, as with most new technologies, humans have found myriad ways to misuse chatbots for their own ends.
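To make concrete how little effort that takes, here is a minimal sketch of prompting a chatbot programmatically. It assumes the OpenAI Python library and an API key; the model name and prompts are purely illustrative, not details drawn from this article.

# A minimal sketch of prompting a chat model in Python, assuming the
# official OpenAI client library is installed (pip install openai) and
# an API key is set in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model name; any available chat model works
    messages=[
        {"role": "system", "content": "You are a helpful writing assistant."},
        {"role": "user", "content": "Write a 200-word essay on the causes of the French Revolution."},
    ],
)

# The generated text comes back as an ordinary string.
print(response.choices[0].message.content)

A handful of lines like these is all it takes to produce an essay, an article, or a report on demand, which is exactly why the misuses described below are so tempting.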


Students have used it to write entire term papers. Authors have used it to write articles and fictional stories for publication. Business executives have used it to write reports and projections. And, in a sense, there is nothing wrong with any of these uses except for the fact that the writing was done entirely by a chatbot and not by the student or author or businessperson who is being evaluated for the work they purportedly did themselves.


Further still, chatbots are only as good as the information available to them, and much, if not all, of the information most chatbots can access is owned by someone else. Often, the holder of the original, underlying copyright is an artist or author who deserves to be credited for the original work and, of course, compensated for it.


But ChatGPT and other AI programs are not programmed to attribute original sources. And even if they were, anyone hoping to pass off ChatGPT-generated content as their own doesn’t want anyone to know they didn’t create it.

So, those copyright holders get shafted. By robots.


How this all will play out is anybody’s guess.



Why It Matters.


Originally, I had contemplated using ChatGPT to draft this very article. But then I thought it might be more interesting to have it draft the article while I drafted my own, and present them both for comparison. Then I realized no one actually reads any of these articles, so I opted to just write it myself.


Nevertheless, I have no doubt ChatGPT would have done a much better job and in a fraction of the time.



