ChatGPT: Should we be scared?

By Nikki Hancocks




ChatGPT has many of us intrigued and nervous in equal measure, but James Collier, co-founder of Huel, asks whether this tool is any scarier than the current situation, with “nonsense nutrition content” touted across social media by pseudoscientists.

The language processing tool from OpenAI can provide intricate answers to questions and assist with tasks such as composing emails, essays, and code. Last month OpenAI released the new GPT-4 version, which can accept images as inputs and handle over 25,000 words of text.
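
For readers curious how such content is generated programmatically, the sketch below shows one way a developer might query GPT-4 through OpenAI's official Python library. This is a minimal illustration, not part of Collier's test: the prompt text is invented for the example, and model names and availability depend on OpenAI's current offering.

# A minimal sketch, assuming the official "openai" Python package is
# installed and an OPENAI_API_KEY environment variable is set.
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

# Illustrative prompt, echoing the kind of task discussed in this article.
response = client.chat.completions.create(
    model="gpt-4",  # the GPT-4 model released last month
    messages=[
        {
            "role": "user",
            "content": "Write a 150-word, referenced summary of the "
                       "evidence linking serotonin to mental wellbeing.",
        },
    ],
)

print(response.choices[0].message.content)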

Discussing potential implications of this software for the nutrition industry on LinkedIn recently, Collier notes that a recent article in Nature concluded that abstracts written by the bot fooled scientists, and asks: “If experts in a particular field are unable to tell the difference, what hope does the lay public have? Indeed, the feasibility of ChatGPT’s use in healthcare is currently being analysed; I hope with some urgency.”

Speaking to NutraIngredients, he points out that he is first and foremost a registered nutritionist with a keen eye for nonsense nutrition content, and this is where his key concern lies.

“If some influencer has read half a book and thinks they know it all, they can easily use ChatGPT to write an article which looks credible and well-referenced and put it out there with very little effort. This could definitely add to the misinformation out there,” he asserts.

“Equally, I enjoy nutrition writing but I don’t do a lot of it because I don’t have enough time. If I can reduce the time it takes then that’s a win and I can put more content out there that can fight the rubbish.”

Collier tested the tool by writing a short text on the topic of serotonin and mental wellbeing. He then gave ChatGPT the same task and posted both pieces on his LinkedIn for followers to guess which was AI-generated.

The verdict: almost exactly 50% of respondents guessed correctly, no better than chance, suggesting readers could not reliably tell the AI-generated text from his own.

Even so, as it stands at the moment, a ‘narrow AI’ tool that can only produce content as good as the question it’s given isn’t particularly scary in Collier’s eyes.

He points out that the software can’t yet analyse the strength of a study’s methodology, so if it were given two studies to write about it would give them equal weight, whereas a scientist or nutritionist would know otherwise. Equally, it won’t challenge itself the way the scientific community does so well.

“Emotions flaw us, but they also make us, and through our opinions and emotions we challenge each other to think outside the box. ChatGPT can’t do that.

“At least it, unlike many social media influencers, won’t be guilty of falling for the Dunning–Kruger effect: it won’t believe it’s an expert when it’s not. This point alone gives considerable relief.”

Scientific writing

NutraIngredients-Asia recently reported that a group of scientific researchers has come up with a list of seven ‘best practices’ for using AI and ChatGPT when writing manuscripts.

Published in ACS Nano, the list was written by 44 researchers, including high-profile figures such as Professor Ajay K Sood, principal scientific adviser to the government of India.

The scientific journal publisher Elsevier has also recently written a policy on the use of AI.

It states: “Where authors use generative artificial intelligence (AI) and AI-assisted technologies in the writing process, authors should only use these technologies to improve readability and language.

“Applying the technology should be done with human oversight and control, and authors should carefully review and edit the result, as AI can generate authoritative-sounding output that can be incorrect, incomplete or biased.

“AI and AI-assisted technologies should not be listed as an author or co-author, or be cited as an author. Authorship implies responsibilities and tasks that can only be attributed to and performed by humans, as outlined in Elsevier’s AI policy for authors.

“Authors should disclose in their manuscript the use of AI and AI-assisted technologies in the writing process by following the instructions below. A statement will appear in the published work. Please note that authors are ultimately responsible and accountable for the contents of the work.”
