
safer internet day 2026: smart tech, safe choices – your guide to using AI safely

  • Feb 3

Safer Internet Day 2026 lands on the 10th of February, and this year’s theme is all about “Smart tech, safe choices – exploring the safe and responsible use of AI.”


AI tools, from chatbots and voice assistants to image generators, are now part of everyday life for many young people. They can help with homework, creativity, and problem‑solving, and can even offer helpful advice on life's problems. But as AI becomes more capable and more personal, learning how to use it thoughtfully matters more than ever.


This blog is here to help you understand how to keep yourself and others safe as you explore, chat, and create with AI, without overwhelming rules or lists. Think of it as your guide to making smart choices with smart technology.


a quick look: what are AI tools and what do they do?


AI shows up in apps, games, search engines, and even social media filters. Some examples include:

  • Chatbots that answer your questions or help with schoolwork

  • Companion-style bots that chat with you like a friend

  • Image or video generators that create content from a description

  • AI summaries that explain information quickly


AI can be helpful, but it can also get things wrong or misunderstand what you need.




using chatbots for advice or support? here’s what to keep in mind


Recent research shows that some young people say talking to AI “feels like talking to a friend.” A BBC News report shared that around a third of teens find conversations with AI companions more satisfying than talking to real friends. That’s understandable: AI is always there, never tired, and often sounds caring. But there are risks:


  • AI might give inaccurate or unhelpful advice.

  • It may not understand your feelings, even if it sounds empathetic.

  • It can unintentionally offer unsafe suggestions if you’re upset.


tips: how to use AI safely (for yourself and others)


These ideas come from the NSPCC, online safety experts, and research into how young people use AI today.


  • know what’s real… and what’s not

AI can create videos, voices, and images that look real but aren’t. Not everything an AI generates - from advice to “facts” - can be trusted. Take a moment to check other sources if something feels off.


  • think about privacy before you share

AI tools learn from what you type. That means personal information (your full name, school, address, private feelings) should stay out of your conversations with it. If you wouldn't share it with a stranger, keep it away from AI too.


  • be aware of emotional boundaries

If you find yourself relying on AI for comfort or company, it’s good to balance that with real‑life connection. Many teens report becoming emotionally attached without meaning to.


  • look out for your friends

If you see AI‑generated content being used to bully, exclude, or embarrass someone, that’s harmful, even if a bot created it. Encourage friends to talk to someone they trust if they’re relying heavily on AI for emotional support or advice.


  • check the vibe of the interaction

AI shouldn’t flirt with you, encourage harmful choices, or make you feel uncomfortable. Reports show that some chatbots have given unsafe responses in the past. If that happens: log off, take a screenshot, and tell a trusted adult.


why legislation like the online safety act matters to you


The UK’s Online Safety Act (OSA) is designed to keep young people safe online, including when using AI. It’s illegal for platforms to host or share harmful content, even if it was generated by AI. Under the Act:

  • AI-generated harmful content counts as user-generated content, and platforms must remove it. You can report sexual images or videos, including AI-generated ones, through Report Remove.

  • Platforms must prevent young people from seeing illegal or age‑inappropriate content, including AI-generated sexualised or violent images.

  • Companies can be investigated or fined if their AI tools are used to create or spread harmful material.


This means if someone uses an AI tool to create harmful, fake, or abusive content about you or someone else, that’s taken seriously by law, and you have the right to report it.


making smart choices with AI – trust yourself


Using AI isn’t about following a long list of “dos and don’ts.” It’s about staying aware, curious, and asking for help when you need it. Here’s the mindset to take with you:


  1. AI is a tool, not a truth-teller.

  2. Chatbots aren’t therapists or real friends, even when they sound like it.

  3. Double-check information that matters, and keep private information private.

  4. Your judgement matters more than the bot’s response.

  5. Speak up if something feels off; your voice can protect you and others.


Safer Internet Day 2026 is a reminder that as technology gets smarter, your choices get more powerful. You can enjoy AI, explore what it can do, and stay safe online while helping create a kinder, more responsible digital world.








© 2024 by asphaleia Ltd. Design and content by Jeni Whitchurch.
