  • Andy Murphy

The AI Trust Factor: Should You Bet Your Life on Artificial Intelligence?



Can you trust home security to AI chatbots?


This episode is about how AI-generated content is exploding. The release of ChatGPT has caused a ripple throughout our society.


If you’re not familiar, ChatGPT is the creation of a company called OpenAI. It’s a chatbot that came onto the scene in November of 2022, and it generates answers to almost any question in seconds.


You can ask it to write a 500-word essay on black holes, it’s written a Christian worship song with accompanying guitar chords, and it even writes code. I’ve asked it a ton of stuff, including an essay question about the significance of the coin flip in Cormac McCarthy’s No Country for Old Men. In about 10 seconds it gave me a great explanation.


So why is The Secure Dad covering this? First off, this might be the start of Skynet, and eventually the machines will take over the planet for themselves.


I for one welcome our new robot overlords.

The bigger issue we need to look at is whether we can trust the information that’s given to us by AI. AI doesn’t think for itself; it’s only a reflection of the information it aggregates for its algorithms. And we shouldn’t value its writing above anyone else’s.

The work of ChatGPT is so good, it’s been banned in New York City schools for students and teachers. The worry is that students will use the chatbot to write papers and skip out on doing their projects. If left unchecked, pretty soon it’s going to be banned in all schools because it will be hard to tell chatbot writing from student writing.


Researchers say they have a way to detect work created by ChatGPT, for now, but who knows how long that will hold up. As a side note, trying to access ChatGPT during school and work hours is very hard because of the demand for its services.


The Secure Dad vs ChatGPT


So what does ChatGPT know about home security? I’m going to share a question I asked, and we’ll break down how well it did. My very first question to the chatbot was: Write 10 ways to protect your home. This is what I got in less than 8 seconds:


  1. Install a security system: A security system can help deter burglaries and alert you to any suspicious activity.

  2. Secure your doors and windows: Make sure all doors and windows are locked, and consider installing deadbolts on exterior doors.

  3. Use exterior lighting: Proper lighting can help deter burglars and make it easier to see who is at your door.

  4. Trim your shrubbery: Overgrown shrubbery can provide cover for burglars, so make sure to keep it trimmed.

  5. Keep valuables out of sight: Don't leave valuable items in plain sight, as this can attract burglars.

  6. Use timers for lights and electronics: Timers can make it appear as though someone is home even when you're not.

  7. Get to know your neighbors: Building relationships with your neighbors can help create a sense of community and make it more likely that someone will notice if something seems off.

  8. Secure your garage: Make sure your garage is locked and consider installing a security system.

  9. Use reinforced locks: Reinforced locks, such as those made of brass or steel, are more resistant to tampering.

  10. Keep a record of your valuables: Keep a record of your valuable items, including make and model numbers, in case they are stolen. This will make it easier to identify and recover your belongings.


Now on the surface, these are good answers. I don’t disagree with anything written by the chatbot. But the last two answers are a little lacking.


Answer number 9 was about reinforced locks made of brass or steel. I was hoping this was going to be about having an encapsulated strike plate or a Flip Lok on your door, but it was more about the construction of the lock itself.


I don’t think that’s useful, because most locks are steel on the inside anyway. You’d have to go way out of your way to find one that wasn’t. When I Googled “what are deadbolt locks made of,” the first result talked about brass and steel deadbolts. So there you have it.

Answer 10, keeping a record of your valuables, is good. But almost no one does that. Yes, it’s helpful to the police after the fact, but it’s not a proactive solution. If I had asked for 20 ways to protect your home, I’m not sure ChatGPT would have been able to keep up.


Obviously, these are not the BEST ways to protect your home, and they’re certainly not in-depth answers that fully explain each point. But if you worked for an online news site and needed quick content, this would pass muster.


This top 10 list would probably get 8,000 reads and a ton of traffic on Twitter. That’s frustrating and a little dangerous, to say the least, because there’s a lot of context missing here.

And the ethics of just copying and pasting an AI-generated article are murky at best.


And yes, I’m sure that I could ask a more detailed question to get better answers, but at that point, aren't I doing more of the work? I really wanted to see what this AI could do on its own, with little human help.


Tipping Point for American Media


Another thing to consider is that we’re about to hit a tipping point in news and content creation. Can we fully trust what AI produces? I think I’ve proven here that it’s lacking, but still decent.


I think by the end of this year, we’ll have to see a warning at the top of every article that declares whether the article was created at least in part by AI. I also think the flip side is true: we’ll see labels that say the content was 100% generated by a human. In the same vein, social media accounts will have to declare their AI or human origins as well.


So now I’m going to share another question I asked ChatGPT. I asked it, “What are the dangers of artificial intelligence influencing American media?” and this is what it said:

There are several potential dangers in artificial intelligence (AI) influencing American media, including:

  1. Loss of control: AI algorithms can be designed to produce content that is biased or misleading, and these algorithms may be difficult for humans to understand or control. This can lead to the spread of false or harmful information, which can have serious consequences.

  2. Amplification of existing biases: AI algorithms are only as unbiased as the data they are trained on, so if the data used to train the AI contains biases, the AI will likely amplify those biases. This can lead to the perpetuation and amplification of harmful stereotypes or misinformation.

  3. Displacement of human workers: As AI becomes more advanced, it may be used to automate certain tasks that are currently performed by human workers in the media industry. This could lead to job loss and economic disruption.

  4. Reduction of diverse voices: If AI is used to produce content without human oversight, there is a risk that it will not adequately represent diverse voices and perspectives. This could lead to a homogenization of media content and a reduction in the diversity of viewpoints and experiences represented.

It is important for media companies and policymakers to be aware of these potential dangers and to take steps to mitigate them, such as ensuring that AI algorithms are transparent and accountable and that they are used in a way that promotes rather than undermines diversity and fairness.

Again, I think these are great answers, but I feel like there is more to it than that. AI chatbots are going to change how we see news media, education, and social media - just for starters.


Going forward, we need to be aware that what we read and watch can be heavily influenced by AI. And AI can be influenced by the internet as a whole, so fake news articles and Twitter accounts can affect what goes into AI, which in turn influences the content we read.


You may argue that this isn’t new, that misinformation and disinformation have been problems for a while, and yes, they have. The greater issue is the lack of human interaction and oversight on the part of content creators.


AI can be used in a positive way, but humans are lazy, and I don’t think we’re going to be good stewards of this new technology. So please be wary of what you read online, and trust your intuition if you think something is off.


Think for yourself and you’ll be just fine.


Watch The Podcast on YouTube


