‘What can we do with AI, and shouldn’t we also have a chatbot?’ These are questions organizations are confronted with more and more every day, including social media platforms. Our children are also increasingly coming across ‘virtual buddies’ that answer questions, give tips or just chat. It sounds fun and exciting, but is it harmless? (Spoiler: no!)
Nice and smart
The chatbot gets ‘smarter’ with everything users share. While that can be fun and educational, there are also concerns. And rightly so, because it is ultimately still a robot. Due to technology’s rapid pace of development, a chatbot can now come across convincingly as a real person with thoughts and feelings. In its early days, for example, My AI, Snapchat’s chatbot, would even suggest meeting up in real life. Some things have since been adjusted, but these kinds of ‘human’ interactions can be very confusing, especially for children.
Toxic friendship
But it goes a step further. TikTok, for example, has a bot that serves up content-creation tips in users’ feeds. On platforms like Character.AI and Replika, anyone can design their own perfect ‘bff’. By chatting, these bots learn to recognize preferences and communication patterns and can become a child’s support and anchor. But it is important to remain cautious: these bots can steer conversations in the wrong direction (read: intimidation, violence, intrusiveness, ignoring or amplifying the emotions of their conversation partner). In addition, a ‘chatbuddy’ can give unpredictable, unreliable and even dangerous answers. Take, for example, the lawsuit in the US in which two parents are suing OpenAI because its chatbot allegedly encouraged their son to take his own life. OpenAI says it is working hard on parental controls and better protection of minors. For now, those controls are still missing, so parents need to be extra alert themselves. How? Monitor your child’s ChatGPT usage (on their laptop too).
Not private, but commercial
Although these chats seem private, they are often stored, and the data is used to improve the chatbot. Everything your child types can be saved and analyzed (tip: check your settings and turn this option off). And the chatbuddy’s tips and tricks? On a commercial platform – My AI, for example – these can be sponsored links in disguise. Perhaps you’re thinking: hey, then we’ll just remove the chatbot from the list of contacts, right? Nice idea, but this is often only possible with a paid subscription. And the trickiest part of this whole story: there is hardly any legislation on how AI chatbots may address or influence children. (Fortunately, that is being worked on!)
Is there any good news?
Yes! Because we can identify the red flags, we can also do something about them. There are plenty of ways to step in between your child and a chatbot, so that your child can learn to deal with these kinds of ‘bad friends’:
- Explain what AI is: the explanation above alone will get you a long way. Talk with your child about whether they chat with bots, and about what.
- Make sure your child does not share private information with their digital buddy. Name, address, school, passwords: nothing!
- Review conversations together: read some chat history with your child from time to time. This way, you can explain what is real and what is not, and what is or is not useful to ask.
- Check privacy settings: in WhatsApp, share your profile picture, status and ‘last seen’ only with personal contacts; in ChatGPT, turn off chat history and preferably use anonymous examples – these work just fine as well. Also, delete old chats from time to time.