This module is a resource for lecturers
Gender-based interpersonal cybercrime
Gender-based violence, "violence that is directed against a woman because she is a woman or that affects women disproportionately" (see General Recommendation 19, UN Office of the High Commissioner for Human Rights, Committee on the Elimination of Discrimination against Women, 1992), includes physical, sexual, and/or emotional (or psychological) harm and is committed both offline and online. Referring to gender-based violence online, Powell and Henry (2017) use the term "technology-facilitated sexual violence" to describe the use of information and communication technology (ICT) "to facilitate or extend sexual and gender-based harm to victims," including "technology enabled sexual assault;… image-based sexual abuse;… cyberstalking and criminal harassment;… online sexual harassment; and… gender-based harassment and hate speech" (Henry and Powell, 2014; Powell and Henry, 2017, p. 205). They position these harms as part of a continuum of violence that spans the online and offline worlds (Powell and Henry, 2017, p. 206; Powell, Henry and Flynn, 2018; McGlynn, Rackley and Houghton, 2017, p. 36).
Women are disproportionately subjected to various forms of online abuse in many parts of the world, especially women belonging to particular religious, ethnic, or racial groups, women of minority sexual orientations, women of low economic status, and women with disabilities. A poll by Amnesty International (2017) revealed that approximately one-fourth of the 4,000 women surveyed in the United States, United Kingdom, Denmark, Sweden, Spain, Italy, and Poland had experienced some form of online abuse (e.g., cyberharassment) at least once. What is more, 41% of the women who had experienced online abuse feared for their personal safety because of it (Amnesty International, 2017). Women have received intimidating messages, threats of violence, and sexually explicit text messages, emails, images, and videos via dating, social media, and other online platforms, as well as in chat rooms and instant messaging services.
Are cyberstalking, cyberharassment, and cyberbullying gender-based cybercrimes?
Cyberstalking and cyberharassment are gender-based cybercrimes - women and girls are more likely to experience these forms of harassment than men and boys (WHOA, 2012; Moriarty and Freiberger, 2008; Hunt, 2016; Duggan, 2014; Reyns, Henson, and Fisher, 2011). By contrast, cyberbullying does not seem to be a gender-based cybercrime. The research on the role of gender in cyberbullying is mixed; some studies have found that gender was a statistically significant predictor of being cyberbullied and of being a cyberbully, while others did not (Beran and Li, 2005; Patchin and Hinduja, 2006; Kowalski and Limber, 2007; Navarro and Jasinski, 2012; Navarro and Jasinski, 2013; Smith, 2012; Smith et al., 2008; Smith, Steffen and Sittichai, 2013; Rivers and Noret, 2010; Li, 2006; Fanti, Demetriou, and Hawa, 2012; Livingstone et al., 2011; Calvete et al., 2010).
Threats of sexual and physical violence, along with sexist, misogynistic, discriminatory, and prejudicial comments, have been communicated to women and girls via ICT, creating a hostile environment for them online. In Ghana, women face extensive online abuse, which includes not only the distribution of sexually explicit images and videos, but also hateful, abusive, and offensive comments directed at women (Abissath, 2018). In addition to gender-based harassment, women across the globe have also experienced online sexual harassment, receiving unwanted "highly sexual comments and visual pornography that dehumanize[s]" them (Brail, 1994; Soukup, 1999; Li, 2008; Powell and Henry, 2017, p. 212). A case in point is cyberflashing, where women are sent unsolicited sexual images (e.g., a picture of the sender's penis) to harass, upset, and/or alarm the receiver (Bell, 2015; Powell and Henry, 2017, p. 211).
Women's rights activists and organizations, as well as feminists and feminist organizations around the world, have also been subjected to cyberharassment and cyberstalking. A feminist organization in Colombia, Mujeres Insumisas, has reported incidents of sexual violence, harassment, and stalking against its members both online and offline (Lyons et al., 2016). Research has shown that simply being a female public figure can result in threats of physical and sexual violence, as well as misogynistic comments. For example, UK Labour MP Jess Phillips received over 600 rape threats in one night, alongside hundreds of other threats and derogatory comments, for calling for the identification of Internet (or online) trolls (Rawlison, 2018).
Women have also been the predominant target of image-based sexual abuse (IBSA) (colloquially referred to as 'revenge porn'), a form of cyberharassment which involves the "non-consensual creation, distribution and threat to distribute nude or sexual images" (Henry, Flynn and Powell, 2018, p. 566) to cause "the victim distress, humiliation, and/or harm them in some way" (Maras, 2016, p. 255). According to Powell and Henry (2017), "the term 'revenge porn' is inherently problematic as it fails to capture the range of perpetrator motivations which extend beyond revenge, for instance, perpetrators who distribute images in order to obtain monetary benefits or boost social status, or perpetrators who use images as a means to exert further control over their partners or ex-partners" (Henry and Powell, 2015a; Henry, Flynn and Powell, 2018; Powell and Henry, 2017, p. 208; Powell, Flynn and Henry, 2018). Indeed, the term fails to capture the full range of motivations that underpin this form of abuse beyond retribution, for example, blackmail and extortion, control, sexual gratification, voyeurism, social status-building, and monetary gain (Henry, Flynn and Powell, 2018). The term 'revenge pornography' also puts the focus narrowly on the non-consensual distribution of images, which means that other forms of IBSA are overlooked, such as the threat to distribute a nude or sexual image and the non-consensual taking of intimate images, including "upskirting" (where an image is taken up a woman's skirt), "downblousing" (where an image is taken down a woman's blouse), and surreptitious filming in public or private places (see, e.g., McGlynn and Rackley, 2017; McGlynn, Rackley and Houghton, 2017; Powell, Henry and Flynn, 2018). This narrow focus can minimize the harms experienced by victims. The term also likens non-consensual images to the production of commercial pornography, when many images that are shared without consent have very little in common with mainstream pornography (Powell, Henry and Flynn, 2018).
This has the effect of placing the image and the abuse into a particular category in people's minds, further minimizing the harm experienced by, and done to, victims. Using the term "revenge" as a descriptor also has victim-blaming connotations because it implies the victim has done something to provoke the offender. Finally, the term focuses attention on the content of the image, rather than on the abusive actions of perpetrators who engage in this form of abuse (Rackley and McGlynn, 2014). For these reasons, the preferred term is image-based sexual abuse, which is considered "a form of sexual violence" (McGlynn, Rackley and Houghton, 2017, p. 37).
Images and videos of victims can be taken from online websites and social media accounts and used in ways that defame or humiliate them. For instance, a victim's face or head can be superimposed on the bodies of others for defamation or pornography (a process known as morphing). The morphed image can be obscene in nature and is intended to cause reputational harm to the victim. Face-morphing software programmes like Deepfake, which use machine-learning algorithms to replace faces in videos, have been used to create fake pornographic videos of victims (Henry, Powell and Flynn, 2018). Celebrities and even the former first lady of the United States (Michelle Obama) have been the targets of Deepfake videos distributed online (Farokhmanesh, 2018). Because Deepfake uses machine learning, it will eventually become difficult to discern fake videos from real ones without the assistance of media forensics (Maras and Alexandrou, 2018).
Sexting, a type of "self-generated sexually explicit material" (UNODC, 2015; Interagency Working Group, 2016, p. 44), includes "consensual image taking and sharing, as well as consensual taking and non-consensual sharing of images (and sometimes even non-consensual taking and non-consensual sharing)" (Salter, Crofts and Lee, 2013, p. 302). Sexting is the most common type of self-generated sexually explicit material involving children (Interagency Working Group, 2016, p. 44). Some "research has shown that girls feel pressured or coerced into it more often than boys" (Cooper et al., 2016, cited in Interagency Working Group, 2016, p. 44). Other research suggests that image-sharing among adolescents plays out in a 'pressurized yet voluntary' context (Ringrose and Renold, 2012; Drouin and Tobin, 2014). Research conducted in UK high schools by Ringrose and Renold (2012) found that young women and girls were under near-constant pressure from boys and young men to send increasingly graphic and often degrading images, such as photos of their breasts with the boys' names written on them. A later study by Walker, Sanci and Temple (2013) similarly found that young men were under social pressure to receive and share these images with their male peers, in order to assert and protect their heterosexuality. Studies on sexting have yielded disparate findings on prevalence, depending on the participant samples, sampling techniques, instruments, and definitions of sexting used. Establishing prevalence rates for consensual sexting among young people is therefore challenging (Klettke, Hallford and Mellor, 2014; Lounsbury, Mitchell and Finkelhor, 2011; Powell, Henry and Flynn, 2018). However, these studies generally concur that sexting is relatively common among young people (although see the conflicting findings of a 2017 study by the UK Safer Internet Centre, Netsafe and the Office of the eSafety Commissioner (2017)).
Europol (2018) likewise reported a significant increase in children's self-generated sexually explicit material (e.g., performing a sex act) and self-generated sexually explicit material live streamed (pp. 9, 31 and 35). In some countries, sexting has been prosecuted (Bookman and Williams, 2018; O'Connor et al., 2017).
Research demonstrates that image-based sexual abuse affects a significant proportion of the population. An American study found that 4% of men and 6% of women aged 15-29 had had a nude or nearly nude image shared without their permission (Lenhart, Ybarra and Price-Feeney, 2016). According to Australian research, one in five Australians aged 16-49 years has had at least one experience of IBSA, including one in ten who have had a nude or sexual image shared without consent (Henry, Powell and Flynn, 2017; Australian Office of the eSafety Commissioner, 2017). Image-based sexual abuse occurs in a range of relational contexts: for example, in the form of peer-to-peer harassment, where the perpetrator is a friend or acquaintance of the person targeted, and in the context of an intimate partner, or former intimate partner, relationship (Henry, Powell and Flynn, 2017). The diversity of those affected by image-based sexual abuse is also much wider than previously understood. For example, 2017 Australian research found that, within the Australian community, those who are vulnerable to, and/or more likely to be targets of, image-based sexual abuse include Aboriginal and Torres Strait Islander people, Australians with a disability, members of the lesbian, gay, and bisexual community, and young people aged 16 to 29 years (Henry, Powell and Flynn, 2017). In some countries, national laws explicitly prohibit image-based sexual abuse (Centre for Internet & Society, 2018). The Philippines introduced such laws in 2009, under which a maximum prison sentence of seven years and a maximum fine of ₱500,000 apply to those who create or distribute a sexual photo or video of a person without their consent (Anti-Photo and Video Voyeurism Act of 2009 (Republic Act No. 9995)).
In 2014, Israel amended its sexual harassment law to include a prohibition on the online distribution of sexual images without consent, with a maximum sentence of five years and the classification of the perpetrator as a sex offender (Prevention of Sexual Harassment Law, 5758-1998). Also in 2014, Japan introduced specific criminal offences for publishing a "private sexual image", with a maximum prison term of three years or a fine of up to ¥500,000 (Act on Prevention of Victimization Resulting from Provision of Private Sexual Image, Law No. 126 of 2014) (see Matsui, 2015).
Jurisdictions in a variety of Western countries have also introduced specific or broader offences to criminalize image-based sexual abuse. At the time of writing, 38 American states, plus the District of Columbia, have passed some form of legislation on non-consensual imagery. At the end of 2014, the Protecting Canadians from Online Crime Act (S.C. 2014, c. 13) was introduced, amending the Criminal Code of Canada to include new criminal offences for cyberbullying, image-based sexual abuse, and other related offences. Five of Australia's eight state and territory jurisdictions (Victoria, South Australia, New South Wales, the Australian Capital Territory and the Northern Territory) have introduced specific offences to criminalize image-based sexual abuse, and a federal law was introduced in August 2018. Under New Zealand's Harmful Digital Communications Act 2015, it is a criminal offence to post a harmful digital communication, and this includes image-based sexual abuse material. Likewise, in England and Wales, the Criminal Justice and Courts Act 2015 criminalizes the disclosure of non-consensual "private sexual photographs or films" with the intention to cause distress to the victim (s 33), and in Northern Ireland, it is an offence to disclose private sexual photographs and films with intent to cause distress under the Justice Act (Northern Ireland) 2016. In 2018, Brazil introduced new legislation criminalizing non-consensual image-based sexual abuse; Article 218-C of the Criminal Code, enacted by Federal Law 13,718 of 2018, provides for this new offence. In Scotland, the Abusive Behaviour and Sexual Harm Act of 2016 criminalized the disclosure, and threat of disclosure, of intimate images and videos without the consent of the person depicted in them.
In countries where there are no national laws that explicitly prohibit image-based sexual abuse, perpetrators could potentially be prosecuted pursuant to other laws, such as those criminalizing cyberharassment, cyberstalking, blackmail, and copyright infringement; however, these are limited in their ability to successfully prosecute those who distribute or threaten to distribute image-based sexual abuse (Henry, Flynn and Powell, 2018).
Depending on the jurisdiction, to prosecute those who distribute or threaten to distribute image-based sexual abuse, the prosecution must show that the defendant who engaged in the act intended to harass, abuse, or threaten the victim. Perpetrators can therefore attempt to avoid prosecution under these laws by claiming that they were motivated by personal desires, such as money or fame. Researchers have identified such problems as limiting the effectiveness of these laws in practice (see, e.g., Henry, Flynn and Powell, 2018). Some laws, such as those operating in Australia, have been recognized for their more realistic engagement with this type of interpersonal cybercrime: they do not require the prosecutor to prove that the victim suffered distress or harm, or that the perpetrator intended to cause distress or harm. Instead, it is accepted that this form of abuse would reasonably be expected to cause distress or harm to a person. In relation to threats to record or distribute an image, the Australian laws also specifically provide that it is irrelevant whether or not the image actually exists. This is important because it covers situations where the victim may not know (or be able to prove) whether the perpetrator has the image they are threatening to distribute; for example, where the perpetrator claims the image was taken surreptitiously during a consensual sexual encounter or while the victim was asleep, or claims to have retained consensual images that they were meant to have deleted (Flynn and Henry, 2018).
Perpetrators of image-based sexual abuse can also be prosecuted under blackmail laws if they threatened the victims with releasing the images or videos before posting them. This tactic is known as sexual extortion (or sextortion), a form of cyberharassment which occurs when a "perpetrator threatens to disseminate sexually explicit …[images and/or videos] of the victim unless sexual demands are met and/or sexually explicit images or videos are sent to the perpetrator" (Maras, 2016, p. 255). In some countries, perpetrators of image-based sexual abuse can be prosecuted under copyright infringement laws. If the victims took the intimate images or videos themselves, they hold the copyright and can submit a takedown notice to the websites and search engines where the image or video appears in search results (e.g., in the United States this can be done pursuant to Section 512 of the Digital Millennium Copyright Act of 1998). The website operator or search engine can refuse to take down the image or video; if this occurs, often the only option left for victims is to file a lawsuit against the website operator or search engine (which may not be viable because of the significant costs associated with litigation).