This page provides guidance for all sites and platforms hosting user-generated content on implementing user-friendly reporting processes. All sites and platforms should ensure that users can easily report concerning self-harm and suicide content, and the behaviour of other users that worries them. This helps platforms limit the potential for harm by identifying and responding to content promptly.
Note: This section refers to ‘reporting’ as this term is widely understood. However, language around reporting should be considered carefully for self-harm and suicide content: ‘report’ can suggest wrongdoing and users may worry about ‘reporting’ a person in distress for fear of getting them into trouble. This may cause delays in vulnerable users receiving help. Alternative language could include ‘flagging’, ‘raising concerns’ or ‘I’m worried’.
Accessible information about reporting
Sites and platforms should provide clear and accessible information to users about reporting concerning content on their site, including suicide and self-harm. This should be clearly displayed to new users, and existing users should be regularly reminded to encourage them to make reports. All information should be in plain language, and changes or updates should be transparently communicated. This will encourage users to report material that concerns them proactively and responsibly, and emphasise that inappropriate content is taken seriously.
All sites and platforms should provide:
- An explanation of what constitutes inappropriate content, and what action will be taken if content breaking community rules is found on the platform.
- Step-by-step information on the reporting process, including why it’s important to report concerning content, how to make a report and what happens afterwards.
Accessible reporting processes
Reporting processes will differ depending on the size, purpose and functionality of a site or platform.
As a minimum, all sites and platforms hosting user-generated content should have a dedicated email address or reporting form that users can access to flag concerns about self-harm and suicide content or user behaviour.
Larger sites and platforms should consider having more sophisticated reporting functions, such as:
- Trusted flagger functions, whereby credible organisations and users with a track record of making responsible and accurate reports can have their reports fast-tracked.
- Self-harm and suicide content-specific reporting categories, covering topics such as:
- A person at imminent risk of harm.
- Promotion or encouragement of self-harm or suicide.
- Graphic or detailed content about self-harm or suicide, including descriptions of methods of harm.
- Harassment or mocking of individuals who have experienced self-harm and suicide.
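For platforms building a reporting form, the content-specific categories above could be represented as a simple enumeration. This is an illustrative sketch only: the class and member names are assumptions, not a prescribed schema.

```python
from enum import Enum

class SelfHarmReportCategory(Enum):
    """Illustrative reporting categories for self-harm and suicide content.

    Member names and values are assumptions for demonstration; platforms
    should define categories that match their own community guidelines.
    """
    IMMINENT_RISK = "A person at imminent risk of harm"
    PROMOTION = "Promotion or encouragement of self-harm or suicide"
    GRAPHIC_CONTENT = "Graphic or detailed content, including methods of harm"
    HARASSMENT = "Harassment or mocking of people affected by self-harm or suicide"
```

Distinct categories like these allow a report of a person at imminent risk to be routed differently from, say, a harassment report.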
Effective processes for reviewing user reports
Effectively reviewing user reports about suicide and self-harm and quickly removing harmful content will limit its audience, reducing its impact. Promptly responding to reports that indicate a user has harmed themselves or is in urgent need of help means information can be quickly passed to appropriate support.
To ensure user reports are effectively reviewed and responded to, sites must have:
- Prioritised reporting. Where possible, the filtering and prioritisation of reports should be automated and based on urgency to ensure human moderators focus on the most harmful content and users at greatest risk.
- Trained content moderators. Moderators should be trained and supported to effectively review and respond to reports in line with the community guidelines. Further information is provided on the following pages: Content moderation and Supporting staff wellbeing.
- Processes for identifying key themes and accuracy of reports. Identifying trends in reporting will make false or inaccurate reports easier to identify and manage.
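As a sketch of how automated prioritisation might work, a priority queue can order reports by urgency, with trusted-flagger reports moved ahead within a category. The urgency weights, category names and trusted-flagger boost below are illustrative assumptions, not prescribed values.

```python
from dataclasses import dataclass, field
import heapq

# Illustrative urgency weights; lower number = reviewed sooner.
# Real values would be set by each platform's safety policy.
URGENCY = {
    "imminent_risk": 0,
    "promotion": 1,
    "graphic_content": 2,
    "harassment": 3,
}

@dataclass(order=True)
class Report:
    priority: int
    report_id: str = field(compare=False)
    category: str = field(compare=False)

def enqueue(queue: list, report_id: str, category: str, trusted_flagger: bool = False) -> None:
    """Add a report to the review queue, fast-tracking trusted flaggers."""
    priority = URGENCY[category] * 10 - (5 if trusted_flagger else 0)
    heapq.heappush(queue, Report(priority, report_id, category))

queue: list = []
enqueue(queue, "r1", "harassment")
enqueue(queue, "r2", "imminent_risk")
enqueue(queue, "r3", "graphic_content", trusted_flagger=True)
first = heapq.heappop(queue)  # the imminent-risk report surfaces first
```

The design choice here is that urgency always dominates: a trusted flagger's report jumps ahead of ordinary reports in the same category, but never ahead of a higher-urgency category.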
Effective processes for responding to user reports
At a minimum, sites and platforms must provide the following to all users who have reported content about self-harm or suicide:
- Acknowledgement that their report has been submitted successfully for review.
- Information about what happens next, such as the action that may be taken, eg, review by moderators, removal of content or provision of support to the reported user.
- Action taken. Where appropriate, information should be given to users to assure them harmful content has been removed or addressed.
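The three responses above can be thought of as stages in a report's lifecycle, each with a message shown to the reporting user. The status names and message wording below are illustrative assumptions.

```python
from enum import Enum

class ReportStatus(Enum):
    """Illustrative lifecycle stages of a report, with reporter-facing messages."""
    RECEIVED = "Your report has been submitted successfully for review."
    UNDER_REVIEW = "Moderators are reviewing the reported content."
    CONTENT_REMOVED = "The content has been removed."
    SUPPORT_OFFERED = "Support has been offered to the reported user."
    NO_ACTION = "The content did not break our community guidelines."

def acknowledgement(status: ReportStatus) -> str:
    """Return the message shown to the reporting user at a given stage."""
    return status.value
```

Keeping every outcome, including "no action", as an explicit status helps ensure reporters always receive a closing update rather than silence.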
If a report indicates a user is at risk of imminent harm, the following should be provided to the person making the report:
- Information about contacting emergency services including contact details for the country where the user is based (eg, 999 in the UK).
- Signposting to support for themselves. Reminding them of the importance of looking after their own wellbeing when supporting others online and signposting to relevant information.
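Country-specific emergency contact details, as mentioned above, might be served from a simple lookup keyed on the user's country. This is a minimal sketch: the mapping covers only a few widely published numbers (999 in the UK, 112 across the EU, 911 in the US) and the fallback choice is an assumption each platform would set for itself.

```python
# Illustrative mapping of ISO country codes to emergency numbers shown to
# reporters. Only a few widely published numbers are included; real
# deployments would maintain a complete, verified list.
EMERGENCY_NUMBERS = {
    "GB": "999",
    "IE": "112",
    "US": "911",
}

def emergency_contact(country_code: str) -> str:
    """Return the emergency number for the reporter's country.

    Falls back to 112 (the EU-wide number) as an assumed default when the
    country is not in the mapping.
    """
    return EMERGENCY_NUMBERS.get(country_code.upper(), "112")
```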
Download our information sheet about implementing user-friendly reporting on sites and platforms: