This page provides guidance for sites and platforms hosting user-generated content on developing policies around self-harm and suicide content. Companies must have clear and comprehensive policies for managing self-harm and suicide content on their platform. Clear policies reduce the likelihood of harmful self-harm and suicide content being shared and ensure robust mechanisms are in place to deal with such content if it does appear.
A self-harm and suicide content policy sets out the expectations for users of what is and isn’t acceptable on the platform, and explains what will happen if these expectations are not met.
Having a robust policy in place can improve the safety of users on the platform and minimise potential harm. From a commercial perspective, these policies also mitigate the reputational consequences that are likely to arise from hosting harmful content.
Policies should reflect the latest evidence around self-harm and suicide content and should be developed in consultation with subject matter experts (see our page: Promoting online excellence in suicide prevention through collaboration and transparency for further guidance).
Considerations when developing the policy
Functions of the site or platform
Self-harm and suicide content will manifest in different ways depending on functionality. See Understanding self-harm and suicide content for risks associated with specific functionality.
The vulnerability of users
Sites aimed at vulnerable users, such as children and young people, will require stricter content policies.
Responses to self-harm and suicide content will be influenced by principles of safeguarding, data protection and the removal of illegal content.
How realistic and actionable the policy is
Companies should ensure they have the necessary resources to implement their policy effectively.
What should a self-harm and suicide content policy include?
Policies should include clear definitions of self-harm and suicide in order to establish what content is in scope. For definitions of self-harm and suicide, see our information page: Understanding self-harm and suicide content.
Content covered by the policy
It is important that policies are explicit about what content is and isn’t covered. While sharing experiences of self-harm and suicidal thinking and behaviour can be helpful for many people, in order to protect users, all sites and platforms should remove and limit access to self-harm and suicide content considered to be harmful.
Content considered harmful
- Promotion or encouragement of self-harm and suicide. Content that intentionally encourages the suicide or attempted suicide of another person is illegal under the Suicide Act 1961
- Graphic descriptions or depictions of acts of self-harm or suicide, such as open wounds and blood
- Detailed methods or instructions for self-harm and suicide, including descriptions and depictions of equipment, and the evaluation or comparison of the effectiveness of different methods
- Suicide pacts and challenges, where users may be encouraged to harm themselves
- Mockery or bullying of people who have self-harmed, attempted suicide, or died by suicide
Policies should cover self-harm and suicide content in all formats, for example:
- Text: posts, private messages, instant chat, quotes, blogs, information pages
- Visual: images, video, artwork, memes, emojis, TV/film stills
- Livestreaming: audio and visual content that is broadcast in real time to a live audience
- Disappearing content: posts that expire after a defined period of time
Content policies should also cover how the platform will respond to self-harm and suicide content where less is known about its impact on users, such as:
- Quotes about self-harm and suicide
- Lived experience accounts of self-harm and suicide
- Depictions of self-harm and suicide such as artwork and memes
- Images of self-harm scars
- Sharing methods of concealment
- Online memorials for people who have died by suicide
For more information about the risks associated with different types of content, see our information page: Understanding self-harm and suicide content online.
Criteria for deciding whether self-harm or suicide content could be harmful for users
Deciding whether self-harm or suicide content could be harmful for users can be complex. Whilst some types of content may be obviously harmful, other types may require more nuanced thinking and a judgement on what is appropriate for the platform. The following questions may help to decide which content should be prioritised for removal or review based on the impact it may have on users:
Does it show a risk of imminent threat to life?
Content of this kind should be urgently addressed. Consider whether immediate removal of the content could prevent the user from receiving urgent help from others.
Who is viewing the content?
The more vulnerable the audience, the stricter the content policy should be.
How graphic is it?
Content that contains graphic descriptions or depictions of self-harm or suicide should be prioritised for review as it can be distressing and triggering for other users.
Does it encourage, promote or glamorise self-harm or suicide?
Content that promotes self-harm and suicide or portrays them as effective ways of ending distress could encourage other users to imitate these behaviours.
Is there an evidence base that shows this type of content is harmful?
Is there research that indicates that the content may encourage people to harm themselves, or cause a contagion effect?
Does it stigmatise self-harm or suicide?
Content that shows prejudice against people experiencing self-harm or suicidal feelings could be triggering and hurtful, preventing users from reaching out and sharing their experiences.
How common is the content on the platform?
Some types of content, like a self-harm quote, may have minimal impact as an isolated post, but viewing large volumes of this content may have a much larger impact on users.
How are users reacting to it?
The way users respond to content should be considered – is it being reported? Is it being shared more widely?
Mechanisms for responding to content covered by the policy
There are multiple ways in which sites and platforms can respond to self-harm and suicide content, from monitoring the content and reducing access to it, to removing it from the platform entirely. Our information page: Reducing access to harmful self-harm and suicide content online provides further guidance.
For example templates of self-harm and suicide content policies, please contact Samaritans’ Online Harms Advisory Service.
Details of when it was last updated
Include the date and name of the team or person who reviewed the document.
All sites and platforms must translate their self-harm and suicide content policy into accessible community guidelines for users, explaining what content is and isn’t allowed on the site and the reasons for this.
Sites must also implement their policy effectively through content moderation, ensuring that content breaking community guidelines is detected and responded to safely. This can be achieved using human moderation and artificial intelligence (AI) approaches. Moderators should receive high-quality training and have clear, up-to-date guidelines to ensure their decisions are in line with policy. More information about moderation can be found on our page: Thoughtful approaches to content moderation.
Self-harm and suicide content policies should be regularly reviewed to reflect emerging online harm issues and changes in platform functionality.
As best practice, policies should be reviewed at least annually, with updates made as needed in response to emerging evidence, online trends and changes in regulation. Companies should also regularly review repeatedly flagged content or issues and amend their policy if needed. Critical issues or gaps should be addressed immediately.
Questions to consider when reviewing policy:
- Is there any self-harm or suicide content on the platform that is not covered by the policy?
- Have changes in law or regulation made it necessary to update the policy?
- Has the functionality of the platform or site changed since the last policy was published in ways that could affect the sharing of and access to self-harm and suicide content?
- Have there been any substantial changes in the users of the product or service, eg is it attracting a younger audience that may therefore need stricter policies or additional signposting?
- Have there been any advancements in research regarding our understanding of what content causes harm and to whom?
When updating the policy, users should be made aware of any significant changes, with an explanation of why the policy has changed and how the change affects the way they post or search for self-harm and suicide content.
Download our information sheet on developing and implementing policies around self-harm and suicide content: