Teens Are Being Bullied ‘Constantly’ on Instagram

Excerpt from this article:

Because bullying on your main feed is seen by many as aggressive and uncool, many teens create hate pages: separate Instagram accounts, purpose-built and solely dedicated to trashing one person, created by teens alone or in a group. They’ll post bad photos of their target, expose her secrets, post screenshots of texts from people saying mean things about her, and share any other terrible stuff they can find.

Sometimes teens, many of whom run several Instagram accounts, will take an old page with a large number of followers and transform it into a hate page to turn it against someone they don’t like. “One girl took a former meme page that was over 15,000 followers, took screencaps from my Story, and Photoshopped my nose bigger and posted it, tagging me being like, ‘Hey guys, this is my new account,’” Annie said. “I had to send a formal cease and desist. I went to one of those lawyer websites and just filled it out. Then she did the same thing to my friend.”

 


Teens & Digital Self-Harm

Excerpt from this article:

…there’s also a relatively new form of online bullying, called ‘digital self-harm,’ that’s beginning to flourish among teens. Digital self-harm is the act of secretly posting hurtful or bullying comments about yourself online. The reasons teens engage in such behavior are complicated, but simply stated, digital self-harm gives teens an outlet for all the insecurities and self-loathing they have been keeping in their heads.

In a way, it is a safety valve for teen emotions and insecurities. When teens use an alias to bully themselves on social media, they are using it as a way to reconcile their internal thoughts with the external perceptions of what others think of them. Digital self-harm, a form of non-suicidal self-injury (NSSI), is a way for teens to safely garner attention and receive messages of validation and emotional support from friends (Klonsky et al., 2014).

 

‘I felt relieved’ – What happens when you ditch social media

Excerpt from this article:

They both had negative experiences online and a new survey has found that they are not alone.

Charity Ditch the Label asked 12- to 20-year-olds about cyber-bullying and anxiety from using the networks.

The survey of 10,000 people suggests Instagram and Facebook were the worst for bullying.

The survey suggested nearly 70% of people admitted they had been abusive to another person online, while 17% said they had been bullied themselves.

One in three said they lived in fear of being bullied online, and most thought they’d get abuse for how they looked.

What the Kitty Genovese Killing Can Teach Today’s Digital Bystanders

Excerpt from this article, which is accompanied by a documentary film:

…the story of 38 people coldly ignoring a murder beneath their windows took on a life of its own. It became emblematic of big-city apathy. The terms “bystander effect” and “Kitty Genovese syndrome” entered the language.

…“You think that if there are many people who are witness to something that other people certainly already have done something — why should it be me?”

…In the age of social media and instant communication, the potential rises for a Kitty Genovese syndrome on steroids.

The dark side of Guardian comments

Excerpt from this article:

How should digital news organisations respond to this? Some say it is simple – “Don’t read the comments” or, better still, switch them off altogether. And many have done just that, disabling their comment threads for good because they became too taxing to bother with.

But in so many cases journalism is enriched by responses from its readers. So why disable all comments when only a small minority is a problem?

At the Guardian, we felt it was high time to examine the problem rather than turn away.

We decided to treat the 70m comments that have been left on the Guardian – and in particular the comments that have been blocked by our moderators – as a huge data set to be explored rather than a problem to be brushed under the carpet.

This is what we discovered.

Play nice! How the internet is trying to design out toxic behaviour

Excerpt from this article:

The idea of a “nicer” net sounds a bit twee, guaranteed to enrage libertarians who fear the creation of bland, beige safe spaces where free speech goes to die. But it’s an idea with some big guns behind it, and what they are advocating isn’t censorship, but smarter design. This month at the Sundance film festival, the web pioneer Tim Berners-Lee called on platforms to start building “systems that tend to produce constructive criticism and harmony, as opposed to negativity and bullying”.

…For idealists such as Berners-Lee, the fact that the net has become an exhausting place to spend time is an affront to its founding values. Technology was supposed to make the world a better place, not a bitchier one. And for the big corporate players – Twitter, Instagram, online publishers and other businesses reliant on us spending more and more time online – it’s a genuine commercial threat. Few users and fewer advertisers enjoy hanging out in a room full of furious people spoiling for a fight.

“If Facebook wasn’t a safe place and people didn’t feel they could have a conversation that’s civil and respectful, why would anyone want to advertise in that place?” says Simon Milner, Facebook’s director of policy for the UK, Middle East and Europe. “The two things go together. It’s an important part of the business model.”

This is where Civil Comments, the startup Aja Bogdanoff founded with Christa Mrgan, comes in.

The idea is simple (although the software is so complex it took a year to build): before posting a comment in a forum or below an article, users must rate two randomly selected comments from others for quality of argument and civility (defined as an absence of personal attacks or abuse). Ratings are crunched to build up a picture of what users of any given site will tolerate, which is then useful for flagging potentially offensive material.
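The excerpt describes this rating loop only at a high level, so the sketch below is a rough, hypothetical Python rendering of how such a peer-rating gate might work. Every name in it (Comment, CommentQueue, submit, the fixed flag_threshold) is invented for illustration; this is not Civil Comments’ actual code or API, just one plausible reading of the mechanism the article describes.

# Hypothetical sketch of the peer-rating flow described above.
# None of these names or numbers come from Civil Comments itself.
import random
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class Comment:
    author: str
    text: str
    civility_ratings: list = field(default_factory=list)  # peer scores in [0, 1]
    quality_ratings: list = field(default_factory=list)

class CommentQueue:
    def __init__(self, flag_threshold=0.4, min_ratings=3):
        self.pending = []      # comments still collecting peer ratings
        self.published = []
        self.flagged = []      # likely personal attacks or abuse; sent to moderators
        self.flag_threshold = flag_threshold
        self.min_ratings = min_ratings

    def submit(self, author, text, rate):
        """Accept a new comment only after its author rates two randomly
        selected pending comments from other users. `rate` is a callback
        returning (civility, quality) scores in [0, 1]."""
        others = [c for c in self.pending if c.author != author]
        for target in random.sample(others, k=min(2, len(others))):
            civility, quality = rate(target.text)
            target.civility_ratings.append(civility)
            target.quality_ratings.append(quality)
            self._maybe_resolve(target)
        new_comment = Comment(author, text)
        self.pending.append(new_comment)
        return new_comment

    def _maybe_resolve(self, comment):
        """Once enough peers have rated a comment, publish it or flag it."""
        if len(comment.civility_ratings) < self.min_ratings:
            return
        self.pending.remove(comment)
        if mean(comment.civility_ratings) < self.flag_threshold:
            self.flagged.append(comment)
        else:
            self.published.append(comment)

In the article’s description, the aggregated ratings are crunched to learn what each site’s community will tolerate; the fixed flag_threshold above simply stands in for that per-site calibration.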