The eSafety Commissioner has released an urgent Online Safety Advisory following what it has described as a proliferation of extreme violent material online.
Following a spate of recent assassinations, brutal murders, mass casualty events and conflict footage, the eSafety Commissioner released the advisory, Gore online: How violent content is reaching children and what you can do.
According to the Commissioner, 'gore' content is resurfacing with disturbing frequency on young people's devices via autoplay, recommendations, direct messages and reposts.
Once posted online, the same clips can circulate across social media and video-sharing sites such as X, Facebook, Instagram, Snapchat, TikTok and YouTube, as well as through direct messages and chats.
As previously reported in eSafety's latest research, 22 per cent of children aged 10 to 17 have seen extreme, real-life violence online.
Growing exposure to and accessibility of gore has led to the popularisation of dedicated gore websites with searchable libraries of content, follower tools, chat functions and recommendation loops.
Many of the websites are situated in 'permissive jurisdictions' and use complex hosting arrangements to evade removal by authorities.
Commissioner Julie Inman Grant said young people are often drawn to share the material to impress or outdo peers, without fully understanding its nature, its impact and its long-term consequences.
"The advisory explains how gore circulates online and the risks it poses for children and young people," Ms Inman Grant said.
"My concern is not just how fast this material spreads, but how algorithms amplify it further."
Ms Inman Grant explained algorithms reward engagement, even when it is driven by shock, fear and outrage.
"While most social media networks have policies that require the application of sensitive content labels or interstitials to blur gore rather than exposing innocent eyes to such visceral and damaging content, we have seen the major platforms fail to deploy these filters quickly or consistently," she said.
"Advanced AI tools should help aid detection, blocking and removal of content and increase the speed in which such protective filters can and should be deployed.
"Instead, as a likely result of decreased investment in trust and safety personnel and tools, a rollback of content moderation policies and clear latency in detection, the application of these filters often lags the content's virality.
"We expect the major platforms to do better."
Ms Inman Grant said eSafety is currently implementing the Social Media Minimum Age (SMMA), requiring platforms to take reasonable steps to prevent Australian children under 16 from having social media accounts.
The organisation has also recently registered Phase 2 industry codes designed to protect children from age inappropriate material, including pornography, extreme violence and gore, suicidal ideation and self-harm.
"The codes will provide further protections against exposure to such material on services which are either not subject to the SMMA or accessible without an account," Ms Inman Grant said.
"They will also complement Phase 1 industry codes and standards, which address the worst of the worst online material, such as child sexual abuse and pro terror material."
The Online Safety Advisory includes practical steps families, schools and platforms can take to help prevent exposure and support children and young people who are affected.
"eSafety has also updated its guidance for educators, parents and carers on how to speak to children or young people who may have come across graphic or violent material online," Ms Inman Grant said.
For more information or to report harmful material directly visit www.esafety.gov.au.