In doing research with AI, have you ever wondered whether the information is accurate, and how you would know? That's what one author considered after asking AI to do research and finding that getting the results was like talking to a teenager or student who doesn't know the right answer, makes up the results, and then lies about it. She describes her experience of overcoming some false and invented information and finally getting it right in an article on Medium at https://tinyurl.com/3zm6cnyf and on Substack at https://gini.substack.com/p/when-talking-to-ai-is-like-talking
As the author, Gini Graham Scott, PhD, describes, she had this experience after she asked ChatGPT for some data about the numbers and costs of prisoners and the police for a book she was writing for American Leadership Books, a company specializing in books on crime, politics, and social issues. The book was Our Costly Crime and Law and Order Problem, all about the ways in which the costs of crime and law and order are too high and what to do to reduce crime and its costs. On a second request, AI came up with the correct information, but the first time, some of the numbers were invented or wrong.
Both times, the conversation was very realistic, like talking to a real person who kept coming up with excuses, apologizing for taking longer than expected to complete the assignment, expressing appreciation for her trying to simplify the assignment, and praising her flexibility and patience.
She thought ChatGPT had trouble getting the data in the first place; it explained that the requested information was difficult to obtain, provided some made-up data to compensate, and ultimately said it was going to get some additional accurate data. ChatGPT said it would take a while to get the data, but after an hour, it had never returned with any information.
She finally started over with a new chat with ChatGPT and got some information from another AI bot. It felt a little like asking another teenager or student to do the assignment after the first one couldn't perform. Eventually, she did get a chart with numbers, and later, when she asked, a list of six sources used to obtain the data.
So can you really trust AI or not, since sometimes you may get really good information, but other times you may not? And how do you know or distinguish when the information you get is true? She provides the highlights of her back-and-forth exchange with AI in her article, which concludes with these comments about one way to solve the problem of knowing whether you are getting correct information from AI.
"So there you have it. What I asked, and how ChatGPT responded. It was like getting information from a more responsive and more intelligent teenager or student, after the first one couldn't perform, made excuses, promised to do better, and ultimately just went truant and didn't complete the job at all. So is the first information you receive correct? That's a key problem when you ask ChatGPT to do research for you. Is the research valid or not? And how can you check, without doing the research yourself, when you seek help from ChatGPT because you don't want to do that?
"It took more time, but the book publisher did a spot check of some of the data points, which matched the information from ChatGPT and the sources ChatGPT indicated it used. So that's one way to check: randomly verify a few data points provided by ChatGPT; if there's a match, that suggests the rest of the data points from that same source are fine. This way you don't have to do the research to obtain all the data yourself, which could take many hours or even days, whereas ChatGPT can come up with that same information in a few minutes.
"Thus, besides asking ChatGPT to provide the sources and doing some random checking to see if there's a match with what ChatGPT says, it sometimes can be difficult to know if you are getting accurate information from ChatGPT. So aside from doing these checks, how do you know which research search for data is correct? Maybe that's a question to ask ChatGPT."
For more information and to schedule interviews, email or call:
Karen Andrews
Executive Assistant
Changemakers Publishing and Writing
San Ramon, CA 94583
(925) 804-6333
Changemakerspub@att.net
www.changemakerspublishingandwriting.com
*********
Gini Graham Scott, Ph.D. is the author of over 50 books with major publishers and has published 200 books through her company Changemakers Publishing and Writing (http://www.changemakerspublishingandwriting.com). She writes books, proposals, and film scripts for clients, and has written and produced 18 feature films and documentaries, including Conned: A True Story and Con Artists Unveiled, distributed by Gravitas Ventures (http://www.changemakersproductionsfilms.com). Her latest books include Ghost Story and How to Find and Work with a Good Ghostwriter, published by Waterside Productions; The Big Con, I Was Scammed, Scams in the Digital Age, and Love and Sex in Prison, published by American Leadership Press; and Ask the AI Wizard, published by J. Michael Publishing.