ChatGPT - 2: How Is It Useful?
After taking the world by storm with its dialogue-based artificial intelligence (AI) chatbot, OpenAI's ChatGPT, which is still a prototype, is churning out 'chats', some useful and some full of misinformation. People are busy experimenting with it: some users have created books using the AI-based chat, while a few have generated period-style graphical images. However, as I mentioned in the first part of this series, ChatGPT's answers are not always accurate or reliable. This is because ChatGPT is a large language model trained by OpenAI; it does not have the ability to browse the internet or access information beyond what it was trained on.
 
Even if it were connected to the internet, the important question would be its ability to verify whether the information it uses is accurate or false. Clearly, it is not entirely accurate all the time. As Sam Altman, chief executive officer (CEO) of OpenAI, put it, ChatGPT is "fun creative inspiration; great! Reliance for factual queries; not such a good idea."
 
ChatGPT is not connected to the internet and can occasionally produce incorrect answers. It has limited knowledge of the world and events after 2021 and may occasionally produce harmful instructions or biased content. 
 
Having said that, the output you receive from ChatGPT depends on how you frame your queries. This is akin to posing the right question in an application filed under the Right to Information (RTI) Act. Remember, under the RTI Act, any Indian citizen can ask for information that is available on the records of any public authority. However, instead of seeking information, many applicants ask questions or question decisions taken by the public authority.
 
"ChatGPT is sensitive to tweaks to the input phrasing or attempting the same prompt multiple times. For example, if a question is phrased in a particular way, the model can claim not to know the answer, but rephrasing it slightly can lead to a correct answer. Ideally, the model would ask clarifying questions when the user provided an ambiguous query. Instead, our current models usually guess what the user intended," OpenAI says in a blog post.
 
According to OpenAI, ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers. "Fixing this issue is challenging for the following reasons: (1) during reinforcement learning (RL) training, there's currently no source of truth; (2) training the model to be more cautious causes it to decline questions that it can answer correctly; and (3) supervised training misleads the model because the ideal answer depends on what the model knows, rather than what the human demonstrator knows."
 
That said, ChatGPT has some exciting features: it can ask follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests.
 
For example, when someone asked it to fix a piece of code that was not working as expected, ChatGPT replied, "It's difficult to say what's wrong with the code without more context. Can you provide more information about what the code is supposed to do and what isn't working as expected? Also, is this the entire code or just a part of it?"
 
Many find it easier to write an essay and even a storybook using ChatGPT. One Ammaar Reshi published a children's book co-written and illustrated by AI tools, including ChatGPT and MidJourney. 
 
However, when it comes to serious writing, such as a thesis, ChatGPT can produce reasonable-sounding explanations and realistic-looking citations, some of which may be non-existent.
 
Teresa Kubacka (@paniterka_ch) says, "Today, I asked ChatGPT about the topic I wrote my PhD about. It produced reasonably-sounding explanations and reasonably-looking citations. So far so good – until I fact-checked the citations. And things got spooky when I asked about a physical phenomenon that doesn't exist."
 
Andrew Ng (@AndrewYNg), co-founder of Coursera, feels, "This overconfidence - which reflects the data they (large language models like Galactica and ChatGPT) are trained on - makes them more likely to mislead."
 
Some users, like Àlex Corretgé (@corretge), feel that "ChatGPT explains things so well that if you do not know the subject, you are convinced!"
 
As these examples show, not everybody is pleased with the answers provided by ChatGPT. The main reasons are that ChatGPT is not connected to the internet and has limited knowledge of the world and events after 2021. Also, what is available today is only the initial research preview of ChatGPT, which means it is still a work in progress.
 
Aleksandr Volodarsky (@volodarik), who runs a marketplace with a network in Europe and Latin America, listed 10 things we could do with ChatGPT. They include generating code from scratch, debugging code, tracking fitness, making it a personal assistant, creating a marketing plan or search engine optimisation (SEO) strategy, creating a virtual machine, writing essays, music, scripts and jokes, developing plugins, developing games, and building a chatbot to address a few medical issues.
 
While ChatGPT is a good tool for routine tasks and creative jobs, it will be a few more years before it can replace experts in their respective fields. For example, journalism is not merely about writing news but also about knowing what not to write, such as protecting the identity of a minor or of a rape victim.
 
Nevertheless, as Mr Altman of OpenAI says, "...once a technological revolution starts, it cannot be stopped. But it can be directed, and we can continually figure out how to make the new world much better. Best game ever!"