Tech giants face scrutiny as eSafety calls for action on child abuse

Australia’s eSafety Commissioner has issued legal notices to tech giants including Apple, Google, Meta and Microsoft, requiring the companies to report to the regulator every six months on the measures they have in place to tackle child sexual abuse online.

Issued under Australia’s Online Safety Act, the notices were also sent to Discord, Snap, Skype and WhatsApp, and require all recipients to explain how they are tackling child abuse material, livestreamed abuse, online grooming, sexual extortion and, where applicable, the production of “synthetic” or deepfake child abuse material created using generative AI.

For the first time, the notices will require the technology companies to report periodically to eSafety over the next two years, with eSafety publishing regular summaries of the findings to improve transparency, highlight safety shortcomings and drive improvements.

eSafety Commissioner Julie Inman Grant said the companies were selected in part based on responses many of them gave to eSafety in 2022 and 2023, which exposed a range of safety concerns when it came to protecting children from abuse.

“We are increasing the pressure on these companies to lift their game,” Ms Inman Grant said. “They will be required to report to us every six months and show us that they are making improvements.

“When we sent notices to these companies in 2022 and 2023, some of their responses were alarming but not surprising, as we had long suspected significant gaps and differences in the practices of their services. In our subsequent conversations with these companies, we have yet to see any meaningful changes or improvements to address these identified safety shortcomings.

“Apple and Microsoft said in 2022 that they do not attempt to proactively detect child abuse material stored on their widely used services iCloud and OneDrive. This is despite the fact that these file storage services are known to serve as a haven for child sexual abuse and pro-terror content to persist and thrive in obscurity.

“We also learned that Skype, Microsoft Teams, FaceTime and Discord did not use any technology to detect the live streaming of child sexual abuse in video chats. This is despite evidence that Skype, in particular, is widely used for this long-standing and growing crime.

“Meta also admitted that it did not always share information between its services when an account is banned for child abuse, meaning that offenders banned on Facebook may be able to continue to commit abuse through their Instagram accounts, and offenders banned on WhatsApp may not be banned on either Facebook or Instagram.”

eSafety also found that eight different Google services, including YouTube, were not blocking links to websites known to contain child abuse material. This is despite the availability of databases of these known abuse websites that many services use.

Despite eSafety investigators regularly observing the use of Snapchat for grooming and sexual extortion, eSafety found that the service was not using any tools to detect grooming in conversations.

“The report also found wide disparities in how quickly companies respond to user reports of child sexual exploitation and abuse on their services. In 2022, Microsoft said it took an average of two days to respond, or as long as 19 days when reports required re-review, the longest of all providers. Snap, on the other hand, reported responding within four minutes.

“Speed is not everything, but every minute counts when a child is in danger.

“These notices will let us know whether these companies have made any improvements to online safety since 2022 and 2023, and ensure that they remain accountable for the harm still being done to children on their services.

“We know that some of these companies have made improvements in some areas – this is the opportunity to show us progress across the board.”

The main potential safety risks considered in this round of notices include the ability of adults to contact children on a platform, the risks of sexual extortion, as well as features such as livestreaming, end-to-end encryption, generative AI and recommender systems.

The transparency notices, issued under Australia’s Basic Online Safety Expectations, are designed to work hand-in-hand with eSafety’s codes and industry standards, which require the online industry to take meaningful action to combat child abuse material and other class 1 material on their services.

Compliance with a notice is mandatory, and providers that fail to respond can face financial penalties of up to $782,500 per day.

Companies will have until February 15, 2025 to provide their first round of responses.

/Public Release. This material from the originating organisation/author(s) may be point-in-time in nature and edited for clarity, style and length. Mirage.News does not take institutional positions or sides, and all views, opinions and conclusions expressed herein are solely those of the author(s).
