Content that breaks Instagram’s own rules on promoting and glamourising eating disorders and self-injury can still be found on the platform, an investigation has revealed.

The Telegraph newspaper was able to find and follow profiles that were in breach (or very close to being in breach) of the social media platform’s rules on these topics.

It created test profiles in the UK and US and searched for relevant keywords. After it followed a “handful” of such profiles, Instagram’s algorithms began promoting “suggested accounts” that themselves flouted the rules, and similar content began to appear on the “Explore” screen.

The Telegraph reported that after it contacted Instagram, the company removed more than 100 accounts and blocked five hashtags from appearing in search results. A spokesperson confirmed plans to block all searches for certain terms and to display a warning notice.

But Instagram’s head of policy for Europe and the Middle East, Tara Hopkins, said:

“In the EU and the UK, we use image-based technology to find graphic self-harm… outside the EU, we’re able to use a more sophisticated range of technology.

“There are questions about whether this more sophisticated technology is allowed under GDPR, because it’s considered to be potentially making a judgement on someone’s mental health.

“We have a different view on this, and believe it is absolutely within the public interest that we are able to run it so we can look for this kind of content and remove it or take action on it.”

A spokesperson said that Instagram’s owner, Facebook, is now discussing a compromise proposal with Ireland’s Data Protection Commission (DPC), its chief data regulator in Europe.

The DPC said it had “strongly cautioned” Facebook against its initial proposal, citing concerns over “adequate safeguards” for sensitive data and “a lack of engagement with public health authorities”.

It said: “Facebook was informed that we are open to discussing the matter further with a view to exploring other viable solutions… the DPC has remained open.”

Outside the EU, Facebook uses AI to scan images and text for suicidal intent, and may even flag mental health support organisations to the user or call emergency services in some cases.

But the DPC said: “The expansion which Facebook proposed to the DPC goes much further than simply the removal and moderation of content.”

“However, Facebook has recently approached the DPC proposing a much more limited use of this tool for removal of content which contravenes [rules]. The DPC is currently reviewing this and expects to respond shortly.”

Eating disorder images were outside the scope of the legal issue, but Instagram said it had prioritised action on self-injury images.