Tech Without Violence Prevention Framework


Gender-based violence and harassment exist online and on social media, a phenomenon known as cyberviolence or technology-facilitated violence. The Ottawa Coalition to End Violence Against Women (OCTEVAW) and the Youth Services Bureau’s Purple Sisters, with support from Status of Women Canada, reviewed key social media platforms to identify opportunities to help prevent cyberviolence. We believe that all forms of online gender-based violence and harassment can be prevented and eliminated by amplifying the expertise of young women and LGBTQ+ youth, and by engaging the ICT sector through collaboration, leadership, education and influence.

Young women in Ottawa identified four areas where social media platforms and others in the ICT sector can pay particular attention to help prevent cyberviolence, and to support users in the event that it does occur.

The four pillars for addressing cyberviolence are:

  • PREVENTION
  • RESPONSE
  • PRIVACY
  • SUPPORT
Prevention

These recommendations were created in collaboration with the Purple Sisters Youth Advisory Committee of Ottawa.

Social media platforms can PREVENT cyberviolence through:

Leadership

Young women want social media platforms to harness their power as agents of change to help end cyberviolence. This means information and communication technology (ICT) companies should work towards…

  • Diversifying the staff at social media companies—specifically, expanding opportunities for women and Lesbian, Gay, Bisexual, Trans, Queer (LGBTQ+) people.
  • Partnering with local, national and global organizations working to end gender-based violence.
Effective community standards

Young women want social media platforms to address the gendered nature of cyberviolence by developing more nuanced policies, definitions and community standards to address it. This includes. . .

  • Recognizing cyberviolence as a gendered issue where women and LGBTQ+ youth are disproportionately affected.
  • Addressing cyberviolence with an intersectional approach, recognizing that the spectrum of online harassment and abuse can include misogyny, homophobia, transphobia, racism, ableism, sizeism and classism.
  • Broadening and more clearly defining what constitutes online harassment and abuse to include online hate speech, online harassment, non-consensual distribution of images, recording of sexual assault and digital dating abuse.
Supporting bystander intervention

Building kind and supportive communities—willing to respond when abuse or harassment occurs—is essential to preventing and ending cyberviolence. Social media platforms should help facilitate bystander intervention by. . .

  • Allowing social media users to report, respond and intervene on behalf of others if cyberviolence occurs. This encourages people witnessing cyberviolence to serve as allies and advocates and minimize feelings of isolation and helplessness for those experiencing cyberviolence.
Content filtering

Social media platforms can employ quality content filtering tools to help block problematic and abusive content. Quality content filtering allows users. . .

  • To moderate and disable comments on individual posts and threads.
  • To access word-filtering systems and muting options.
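As an illustration, the word-filtering and muting tools described above amount to a per-viewer filter over a comment thread. The sketch below is hypothetical (the function names and the dictionary shape are illustrative, not any platform's actual API):

```python
# Hypothetical sketch of per-viewer word filtering and user muting.
# The comment structure and function names are illustrative only.

def is_hidden(comment: dict, muted_words: set, muted_users: set) -> bool:
    """Return True if this viewer has muted the author or a word in the text."""
    if comment["author"] in muted_users:
        return True
    words = comment["text"].lower().split()
    return any(w.strip(".,!?") in muted_words for w in words)

def visible_comments(comments: list, muted_words: set, muted_users: set) -> list:
    """Filter a thread down to the comments the viewer has not muted."""
    return [c for c in comments if not is_hidden(c, muted_words, muted_users)]
```

In practice a platform would apply a filter like this server-side when rendering a thread, so muted content never reaches the viewer's device.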
Blocking

Blocking is crucial to preventing cyberviolence, because it allows users to prevent others from starting conversations with them, seeing things they post, tagging them, or seeing their profile. Blocking helps prevent harmful and abusive interactions that threaten users’ safety. Blocking is especially important for those who have experienced intimate partner violence, digital dating abuse and stalking. Effective blocking mechanisms. . .

  • Have reciprocal (two-way) blocking functions.
  • Allow users to block users that have already blocked them.
  • Do not notify users when they are blocked as a default.
  • Give users access to a blocked accounts page or feature (where applicable), offering users an easy way to manage the list of accounts they have blocked.
  • Allow users to hide their identity when clicking “attending” on events, such as Facebook events.
  • Ensure that blocked users’ names and profiles do not show up in tagged photos or posts.
  • Allow users to block all linked accounts* of a profile user on the platform or across linked platforms.
  • Make it less obvious when a user has been blocked from another user’s page to minimize risk to survivors of cyberviolence and other forms of abuse.

*Linked Accounts: multiple social media accounts connected to the same user, either on the same platform or on different platforms.
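The reciprocal (two-way) blocking recommended above can be sketched as a simple data structure: a block in either direction hides both users from each other. This is a minimal illustration with hypothetical names, not any platform's real permission system:

```python
# Hypothetical sketch of reciprocal (two-way) blocking.
# A block recorded in either direction prevents all interaction both ways.

class BlockList:
    def __init__(self):
        self._blocked_by = {}  # blocker id -> set of blocked user ids

    def block(self, blocker: str, target: str) -> None:
        """Record a block; works even if the target already blocked the blocker."""
        self._blocked_by.setdefault(blocker, set()).add(target)

    def unblock(self, blocker: str, target: str) -> None:
        """Each user can only remove blocks they created themselves."""
        self._blocked_by.get(blocker, set()).discard(target)

    def can_interact(self, a: str, b: str) -> bool:
        """False if either user has blocked the other (reciprocal behaviour)."""
        return (b not in self._blocked_by.get(a, set())
                and a not in self._blocked_by.get(b, set()))
```

Storing blocks directionally, but checking both directions, gives the recommended behaviour: blocking is mutual in effect, a user can still block someone who has already blocked them, and neither party can undo the other's block.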

Response


Social media platforms can RESPOND to cyberviolence through:

Effective reporting mechanisms

Reporting mechanisms that address the nuances of online gender-based violence are crucial to protecting users experiencing cyberviolence. Effective reporting mechanisms. . .

  • Give the user experiencing cyberviolence the option to remove the abusive content themselves, or remove the content while saving a duplicate, a screenshot or the post URL in the platform’s database for future reference. (When platforms act on reports by removing content or suspending accounts, those actions can also destroy evidence needed for other avenues of reporting and response.)
  • Are backed by abuse and harassment teams large enough to review reports within 24 to 48 hours.
  • Ensure the identity of the user who has reported abuse is kept confidential.
  • Allow bystanders to make reports on behalf of others.
  • Ensure users—and bystanders, if they made the report—are updated on the status of their report.
Fair evidence requirements

Social media platforms develop evidence requirement policies with which to evaluate whether or not to take action when a user files a report of abuse. Many evidence requirement policies undermine the safety and self-determination of survivors of cyberviolence by placing the responsibility to prove online abuse solely on the target of the abuse.

Social media platforms need to better structure their evidence requirements and update their reporting systems to accept different forms of evidence. This could help reviewers and moderators understand the context, intent and impact of cyberviolence.

  • The challenges of proving harassment and gender-based violence online are often framed as challenges of context and interpretation, which can sometimes feed into victim blaming. For example, one user might experience a post as harassment while another perceives it as something different, depending on their own social locations and understandings of violence. It is important to focus on the context and impact of the cyberviolence rather than solely its intent.
  • Default URL requirements (requiring a post’s URL instead of just a screenshot or photo evidence) can complicate reports of harassment that are not associated with a URL (for example, posts that have been deleted so the URL no longer exists).
  • When platforms decide to respond to cyberviolence reports by removing content or suspending accounts, their actions can also affect the evidence available for other ways of reporting.
  • When users delete abusive content or photos, screenshots are sometimes the only way to provide evidence. Platforms should accept screenshots as stand-alone evidence; many currently do not.
Moderation

Effective moderation of posts flagged as abusive is essential to ending cyberviolence and supporting survivors of abuse and harassment online. Moderators should. . .

  • Have required training on the issue of cyberviolence—including how to minimize risk to individuals who report abuse. Platforms should partner with experts on gender-based violence to ensure this training adequately addresses the needs of survivors.
  • Be connected directly to the platform’s reporting database. Reporting reference guides and escalation protocols alone are not enough to ensure a practical and effective reporting tool.
Privacy


Social media platforms can ensure users’ PRIVACY through. . .

Fair terms of service

Youth want clear, user-friendly terms of service. It is essential that young people know what they are agreeing to when they sign up for a social media account. Social media platforms must. . .

  • Ensure terms of service are easy to find, easy to read—written in plain language—and concise enough that the average user will actually read them. If it is not possible for the full terms of service to be concise and free of legal terminology, platforms should also create a more easy-to-read FAQ section with key user information.
  • Be more explicit about how they collect, share and monetize users’ data.
  • Ensure their terms of service are informed by diverse perspectives, which means creating more opportunities for women and LGBTQ+ people on their leadership teams and on their staff.
Data and account security

Protecting users’ data and increasing account security is key to preventing cyberviolence. To ensure data and account security for users, social media platforms should. . .

  • Build in options for users to toggle between accounts and to use two-step verification when logging into accounts or applications. Enabling two-step verification adds an extra layer of security: users sign in with something they know (their password) and something they have (a code sent to their phone).
  • Ensure that all cross platform applications are password protected. This means that any applications linked with a particular social media platform also require a password to login. For example, the Facebook App and Facebook Messenger app should both require password verification.
  • Ensure users actively consent to privacy setting changes and updates.
  • Develop privacy options users can employ to customize who has access to their information and content.
  • Include an FAQ or ‘privacy basics’ guide where users can learn about privacy functions in a clear, accessible way.
  • Ensure direct messaging and photo sharing are automatically private by implementing end-to-end encryption within the platform. This means that only the communicating users can read content and messages. This is a system designed to prevent attempts at non-consensual surveillance and access to content by third parties.
  • Allow users to control audience and tagging on individual posts.
  • Ensure default privacy settings are at the most restricted level.
  • Delete message and chat history by default on the platform.
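The two-step verification mentioned above is most commonly implemented with one-time codes. A minimal sketch of the standard HOTP/TOTP construction (RFC 4226 and RFC 6238) using only Python's standard library; this illustrates the mechanism, not any particular platform's implementation:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226)."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation: last nibble picks the window
    code = (int.from_bytes(mac[offset:offset + 4], "big") & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

def totp(secret: bytes, period: int = 30) -> str:
    """Time-based one-time password (RFC 6238): HOTP over a time counter."""
    return hotp(secret, int(time.time()) // period)
```

The server and the user's phone share `secret`; because only a device holding the secret can compute the current code, the code serves as the "something you have" factor.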
Changing location services settings

The tracking and sharing of users’ GPS location without their consent is a huge concern for youth—and particularly young women targeted with gender-based violence—on social media. This type of location tracking can compromise the safety of those who have experienced intimate partner violence, digital dating abuse and stalking. Social media platforms should. . .
  • Give users more control by allowing them to toggle their GPS location sharing on and off easily.
  • Ensure that location services and photo location are turned off as the default. Users can opt into these functions if they choose.
  • Not automatically embed GPS coordinates and EXIF (Exchangeable Image File Format) data into photos by default. If this is not possible, platforms should make users aware of it with a clear, accessible warning. EXIF data contains extensive information about the camera and, potentially, where the picture was taken (GPS coordinates), which means that shared images can reveal a great deal to others.

*EXIF stands for Exchangeable Image File Format. Every time you take a picture with your digital camera or phone, a file (typically a JPEG) is written to your device’s storage. In addition to all the bits dedicated to the actual picture, it records a considerable amount of supplemental metadata as well. This can include date, time, camera settings, and possible copyright information. You can also add further metadata to EXIF, such as through photo processing software.

Finally, if you use a camera phone or digital camera with GPS capabilities, it can record EXIF geolocation metadata. This is useful for geotagging, which creates all kinds of new possibilities, such as allowing users on photo-sharing sites to see any images taken in specific locations, view where your pictures were taken on a map, and to find and follow social events.

That said, EXIF data, and geotagged data especially, say a great deal about the user, who may or may not want to share all that information.

*Geotagging is when geographical information and data are added to various content and media such as photographs, videos, websites or text messages. Geotagging can reveal a wide variety of location-specific information from a device.
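Stripping this metadata before a photo is published is technically straightforward. As an illustration, the sketch below removes the Exif (APP1) segments from a JPEG byte stream; real platforms typically re-encode uploaded images, which has the same effect:

```python
# Illustrative sketch: drop Exif (APP1) metadata segments from a JPEG.
# A JPEG is a sequence of marker segments (0xFF marker, 2-byte length, payload).

def strip_exif(jpeg: bytes) -> bytes:
    """Return the JPEG with any APP1/Exif segments removed."""
    if jpeg[:2] != b"\xff\xd8":
        raise ValueError("not a JPEG stream")
    out = bytearray(b"\xff\xd8")
    i = 2
    while i < len(jpeg):
        if jpeg[i] != 0xFF:
            out += jpeg[i:]  # unexpected data; copy the rest verbatim
            break
        marker = jpeg[i + 1]
        if marker == 0xDA:  # Start of Scan: image data follows, copy the rest
            out += jpeg[i:]
            break
        length = int.from_bytes(jpeg[i + 2:i + 4], "big")
        segment = jpeg[i:i + 2 + length]
        # APP1 (0xE1) segments whose payload starts with "Exif" are dropped
        if not (marker == 0xE1 and segment[4:8] == b"Exif"):
            out += segment
        i += 2 + length
    return bytes(out)
```

Geolocation (GPS) tags live inside the Exif block, so removing it removes the location data; camera settings and other supplemental metadata in the same block are removed along with it.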

Encryption

Social media platforms should adopt encryption practices that better protect the users’ data and content. They can do this by. . .

  • Adopting HTTPS as the default for their platforms. With this feature, the browser is instructed to communicate over a secure connection, as indicated by “https” rather than “http.”

    Example: https://www.facebook.com

    This feature uses Transport Layer Security (TLS), the successor to Secure Sockets Layer (SSL), and makes the communication between users’ browsers and website servers more secure.

  • Adopting and advocating for open Certificate Transparency (CT) monitoring, which makes it possible to see all the certificates a CT-enforcing browser will trust. This design encourages all certificate authorities to log every certificate publicly before issuing it, which helps detect forged SSL/TLS certificates issued for a site without its owner’s knowledge.
  • Publishing public and accessible reports on the number of requests for information they receive from law enforcement and national security agencies, as well as how they respond to those requests.
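On the client side, the HTTPS behaviour described above corresponds to a TLS connection with certificate verification enabled. In Python's standard library, for example, the default TLS context already enforces both certificate-chain checking and hostname verification:

```python
import ssl

# The stdlib default context verifies the server's certificate chain
# against trusted authorities and checks the hostname, which is what
# makes an "https" connection meaningfully secure.
ctx = ssl.create_default_context()

print(ctx.check_hostname)                     # True
print(ctx.verify_mode == ssl.CERT_REQUIRED)   # True

# A client would then wrap its socket with this context, e.g.:
# with socket.create_connection(("example.com", 443)) as sock:
#     with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
#         ...  # all traffic on `tls` is now encrypted and authenticated
```

Disabling either check (as some applications do to silence certificate errors) reopens the door to interception, which is why "HTTPS by default" means verification on by default, not merely encryption.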
Changing policies on sharing user information with third parties

When social media platforms allow third-party applications to access users’ information, this violates users’ consent and privacy. This loss of privacy puts survivors of violence and young women at increased risk of experiencing cyberviolence and having their physical safety threatened. Indeed, some third-party applications are created with the sole purpose of circumventing the safety functions of other social media platforms.

  • Social media platforms must restrict or completely shut down the sharing of information with third parties. This means controlling and restricting outside access to the application programming interfaces (APIs) through which user data can otherwise be exposed.
  • Platforms should fortify their APIs to block malicious code and malicious third-party apps from accessing and compromising their systems.

APIs are sets of computer program instructions, protocols and tools for building software and applications.
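Restricting third-party access is typically enforced at the API layer with scoped permissions: an app can only retrieve the categories of data a user has explicitly granted it. The sketch below is hypothetical (the scope names, app identifiers and request shape are illustrative, not any platform's actual API):

```python
# Hypothetical sketch of scope-checked API access for third-party apps.
# Scope names and app identifiers are illustrative only.

GRANTED_SCOPES = {
    "photo_app_123": {"profile:read"},                    # profile only
    "calendar_app_9": {"profile:read", "friends:read"},   # profile + friends
}

def authorize(app_id: str, required_scope: str) -> bool:
    """Allow a request only if the user granted this app the needed scope."""
    return required_scope in GRANTED_SCOPES.get(app_id, set())

def get_friend_list(app_id: str, user_data: dict) -> list:
    """Serve the friend list only to apps holding the friends:read scope."""
    if not authorize(app_id, "friends:read"):
        raise PermissionError(f"{app_id} lacks scope friends:read")
    return user_data["friends"]
```

Under this model, an app never granted a scope simply cannot request that data, and revoking a grant (removing the scope) cuts off access immediately, which is the control the recommendation above asks platforms to enforce.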

Support


Social media platforms can SUPPORT users experiencing cyberviolence with:

Accessible resources

Social media platforms should offer resources and support—informed by gender-based violence experts and organizations—to survivors of cyberviolence. These resources should be easy to find and understand. They should also include local resources survivors can access in their communities.

Enhanced filtering

Adopting filtering mechanisms to help identify and assess the impact of cyberviolence is an important way to support survivors and prevent future violence. For example: social media platforms can filter key words and content that can potentially identify users experiencing abuse or harassment. This information can be used to reach out to at-risk users and offer support.

Partnerships

Collaboration between social media platforms and experts on cyberviolence—including anti-violence advocates, frontline service providers, as well as women and LGBTQ+ youth—is key to building better support mechanisms for survivors.

Social media platforms should develop long-term partnerships—through the development of steering committees and public engagement projects, for example—with anti-violence organizations, front-line service providers and other cyberviolence experts. These partnerships should inform social media platforms’ policy and staff training.