If you haven’t been paying attention to the disclaimers on AI chatbots, it’s time you did. OpenAI’s ChatGPT took the world by storm, and the company recently announced that it is now used by 100 million users weekly. However, researchers have a warning for those using the free version of ChatGPT.
There are two versions of ChatGPT – a free version and a paid one. The free version is powered by GPT-3.5, an older model, while the paid tier runs on GPT-4, a much more powerful and capable version.
Given that access to the more capable model costs money while the other version is free, it is natural that many people will opt for the free one. OpenAI has repeatedly highlighted that people should fact-check the responses from its AI chatbot, and researchers have now provided another strong reason to follow that advice.
ChatGPT’s medical information may be inaccurate
According to research conducted by pharmacists at Long Island University, the free version of ChatGPT may provide inaccurate or incomplete responses to medication-related questions. This behaviour could put patients in a dangerous position.
The study demonstrates that patients and healthcare professionals alike should be cautious about relying on OpenAI’s free chatbot for drug information. The pharmacists posed 39 questions to the free ChatGPT but found that only 10 of the responses were “satisfactory” based on the criteria they established.
They found that ChatGPT’s responses to the questions either did not directly address the question asked or were inaccurate, incomplete or both. The researchers advised users to follow OpenAI’s advice to “not rely on its [free ChatGPT’s] responses as a substitute for professional medical advice or traditional care.”
Google CEO’s words of caution
Earlier this year, Google CEO Sundar Pichai also used a medical example to convey the gravity of the dangers posed by current AI chatbots. One of the reasons he gave for Google being late to the ‘AI party’ was a sense of caution within the company.
“We have to figure out how to use it [chatbot] in the correct context, right? For example, if you come to Search, and you’re typing in Tylenol dosage for a three-year-old, it’s not okay to hallucinate in that context,” he pointed out in an interview, adding that there is “no room to get that wrong”.
AI chatbots are becoming better
Microsoft just announced a slew of features for Copilot, and reports suggest that Google is also preparing a virtual preview of Gemini, its rival to GPT-4, this week. This suggests that tech giants working in the AI space are shifting gears to provide more accurate information.