TikTok Is Warning Users About A Video Showing A Suicide

TikTok is trying to stop the video from spreading by taking down clips and banning users who repeatedly share

Curated via Twitter from BuzzFeed Tech's Twitter account.


Unlike other apps where users must subscribe to or befriend others to see their content, TikTok users frequently encounter videos from people they do not follow via their For You pages.


"We are banning accounts that repeatedly try to upload clips, and we appreciate our community members who’ve reported content and warned others against watching, engaging, or sharing such videos on any platform out of respect for the person and their family".

Some TikTok users have been posting videos warning others about the footage, showing a screenshot from it (a bearded man sitting at a desk) so viewers know what to look out for.

The suicide video has been circulating on the app since at least Sunday night, TikTok spokesperson Hilary McQuaide told BuzzFeed News.



TikTok is warning users to be on the lookout for videos of a man killing himself that are spreading on the social media platform.

In 2017, BuzzFeed News found at least 45 instances of violence — suicides, shootings, murders, torture, and child abuse — that were streamed via Facebook Live since it first launched in December 2015.

"Our systems have been automatically detecting and flagging these clips for violating our policies against content that displays, praises, glorifies, or promotes suicide," she said.

Facebook now uses artificial intelligence to identify posts from users indicating thoughts of suicide or self-harm.

