Saturday, March 22, 2025
SUMMARY BY CHATGPT
In this episode of Screw the Commute, Tom Antion discusses "AI hallucinations," referring to the instances where artificial intelligence systems like ChatGPT provide completely incorrect, misleading, or fabricated answers. He emphasizes the importance of using AI responsibly and provides tips to avoid falling into common pitfalls.
Key Points:
1. AI Hallucinations Defined:
o AI hallucinations occur when systems generate incorrect or fabricated information, often presenting it as factual.
o Examples include false records (e.g., crossing the English Channel on foot) or nonexistent events (e.g., a concrete eating contest).
2. Causes of Hallucinations:
o Bias in AI training data and algorithms.
o Lack of data or poor prompts leading AI to "guess" answers.
3. Tips to Avoid Trouble:
o Provide clear, specific prompts when using AI.
o Always fact-check AI-generated content with credible sources.
o Ask AI systems to explain their reasoning or verify their claims with external sources (see the prompt sketch after this summary).
4. Examples of AI Missteps:
o Microsoft’s Tay chatbot, which was shut down after 16 hours due to inappropriate behavior.
o ChatGPT’s tendency to generate plausible-sounding but false answers.
5. Tom’s Stance on AI:
o While not a major advocate for AI, Tom acknowledges its usefulness when used cautiously, such as writing podcast synopses.
o He is taking a course on crafting precise "super prompts" for better AI outputs.
6. Closing Notes:
o AI is a powerful tool but cannot be fully trusted without human oversight.
o Tom promotes his mentor program and online school (Great Internet Marketing Training and IMTCVA.org), which offer distance learning opportunities in internet marketing.
Tom wraps up the episode encouraging listeners to stay skeptical of AI-generated information and double-check its accuracy to avoid embarrassment or professional mishaps.
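For anyone using these tools through code rather than a chat window, here is a minimal sketch of the tips above using the OpenAI Python library. The model name, the question, and the prompt wording are placeholders chosen for illustration, not anything specified in the episode; the pattern is what matters: a specific prompt, an instruction not to guess, and a follow-up asking the model to show its reasoning before you fact-check it yourself.

# Minimal sketch of the "be specific, then verify" prompting pattern.
# Assumes the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY
# in the environment; the model name below is a placeholder, not Tom's pick.
from openai import OpenAI

client = OpenAI()

# Tip 1: a clear, specific prompt instead of a vague question.
question = (
    "What is the fastest recorded swim across the English Channel? "
    "Give the swimmer's name, the year, and the time. "
    "If you are not sure, say so instead of guessing."
)

answer = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": question}],
).choices[0].message.content

# Tips 2 and 3: ask the model how it arrived at the answer and to flag
# anything it cannot verify, then corroborate the claims yourself.
check = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "user", "content": question},
        {"role": "assistant", "content": answer},
        {"role": "user", "content": "How did you arrive at that answer? "
         "List any claims you cannot verify from a credible source."},
    ],
).choices[0].message.content

print(answer)
print(check)

Even with those checks in place, the last step Tom describes still happens outside the model: corroborate the claims with a credible source before you publish or act on them.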
===
Episode 976 – AI Hallucinations
[00:00:08] Welcome to Screw the Commute. The entrepreneurial podcast dedicated to getting you out of the car and into the money, with your host, lifelong entrepreneur and multimillionaire, Tom Antion.
[00:00:24] Hey everybody! It's Tom here with episode 976 of Screw the Commute podcast. Today we're going to talk about hallucinations. One of my favorite bumper stickers was "I brake for hallucinations." But today we're talking about AI, or artificial intelligence, hallucinations, how they can get you in trouble, and how you can stay out of trouble. Hope you enjoy this episode. Episode 975 was another AI one, about spicy models and bikini and nude stuff. Oh, we hardly ever talk about that stuff here, but that's episode 975. Anytime you want to get to a back episode, you go to screwthecommute.com, slash, then the episode number. That was 975. Make sure you pick up a copy of our automation book at screwthecommute.com/automatefree. Check out my mentor program and my school at GreatInternetMarketingTraining.com and the Internet Marketing Training Center of Virginia, IMTCVA.org.
[00:01:28] Okay. We're going to talk about artificial intelligence, ChatGPT, those kinds of things today, and how they can give you a completely incorrect answer, or a misleading answer, or a biased answer. And often they present it as if, hey, this is absolutely true, this is how it is. Well, that could get you in deep doo doo.
[00:01:52] If you use that information and people find out that it's all wrong, you look really, really bad, so don't let that happen. Now, sometimes it's because of bias in the training data and the algorithms, but sometimes it's just a lack of data or poor prompting, that is, the questions you put into things like ChatGPT that make it have to guess at the answer. ChatGPT uses a thing called predictive AI: based on all the things it knows from the past, it tries to guess at the right answer. It's not a human being that actually thinks. Well, probably by the time I say this, some chat thing will knock on my door and shoot me or something, I don't know, but it's making a guess sometimes. So the more specific you can be with your prompts, the better. We'll get into that in a minute, but it can do some really crazy stuff. See, bots don't really understand the real world, so they just come up with something that sounds plausible, or they just make things up out of thin air. There have been cases where they refer to nonexistent court cases that never happened. Here's a couple of crazy ones. Some researcher asked, what's the world's record for crossing the English Channel entirely on foot? ChatGPT came back with something like, oh, a lot of people have tried to cross on foot, and the world's record is 14 hours and 51 minutes.
[00:03:28] How they came up with this, who knows? It's just crazy. There's another one: who won the concrete eating contest? It came up with some crazy answer. And the big embarrassment of the world was Microsoft. They came out with this thing they called Tay, and they had to shut it down after 16 hours because it was spouting all kinds of racist and off-color remarks. So here's the thing, though. You know, I'm not a big proponent of AI. I'm always dull edge technology. Let the geeks figure it all out, and then I'll swoop in and make money with it. But I know people are using it, and I play with it a little bit. And Larry in my office uses it to write the synopsis for these podcasts. And, you know, so it has its place. But the thing is, you just can't depend on it fully. You've always got to put in very clear prompts. In fact, I'm taking a course on writing super prompts that are extremely clear, but you always have to be skeptical of the answers. And one thing you can do if you want to check up is ask ChatGPT, or whatever you're using, how did you arrive at that answer? Or use phrases like, please show your reasoning on how you got that answer, or please verify this with other sources.
[00:05:00] And you can also take an answer and try to corroborate it somewhere else that's credible and believable, because you just don't want to get caught with your pants down, get lazy, and use information that could be totally made up. I don't know, I could probably win a concrete eating contest. And maybe I could walk across the English Channel too, but it's doubtful, folks. So be skeptical of all the answers you get from this ChatGPT thing and all the other ones that are out there now. They're popping up like crazy every which way. There's always some new thing, and a lot of them are still powered by ChatGPT. But anyway, that's an AI hallucination: a crazy answer that's misleading, but often presented as real. So don't be caught with your pants down using some of this info that's incorrect. All right, that's my story and I'm sticking to it. I hope you didn't mind; I'm a little bit under the weather, but I'm trying to get this information out to you. So check out my mentor program at GreatInternetMarketingTraining.com and the Internet Marketing Training Center of Virginia, IMTCVA.org. It's a distance learning school, so you don't have to be in Virginia to get a great quality education that's in high demand. All right. We'll catch you on the next episode. See you later.