Ill-Conceived Notions

An experiment in digital humanities

There is no firm evidence of how many people are being radicalised or de-radicalised, even though publications and studies agree that radicalisation is happening (Klein, 2019). Any publication, even in fast-paced media, lags behind the current situation, and even this project could be outdated by the time it is finished. In my experience, YouTube creators themselves are the best equipped to capture and describe the field they exist in, but whether they recognise and respect this power is uncertain. The goal of independent political YouTube channels is often to sell a ‘truth’ that viewers can use to make sense of the world, but in the past this truth has been independent of facts or evidence reflecting a range of perspectives and human experiences, instead rationalising discrimination cultivated by misinformation (Klein, 2019; Nguyen, 2018). The goal of this study is to note the ways in which language is utilised by creators with different social, political and personal goals.
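As a rough illustration of what noting language use can look like computationally, the sketch below compares relative word frequencies between two channels’ transcripts. It is a minimal sketch, not the study’s actual method: the filenames, the stopword list and the ratio-based scoring are all hypothetical placeholders.

```python
# A minimal sketch, assuming transcripts are plain-text files.
# Filenames, STOPWORDS and the scoring are illustrative assumptions.
from collections import Counter
import re

STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "it", "that"}

def relative_frequencies(path):
    """Return each word's share of all (non-stopword) words in one transcript."""
    with open(path, encoding="utf-8") as f:
        words = re.findall(r"[a-z']+", f.read().lower())
    words = [w for w in words if w not in STOPWORDS]
    total = len(words) or 1
    return {w: n / total for w, n in Counter(words).items()}

def distinctive_words(freqs_a, freqs_b, top=20):
    """Words over-represented in channel A's transcripts relative to channel B's."""
    scores = {w: f / (freqs_b.get(w, 0) + 1e-6) for w, f in freqs_a.items()}
    return sorted(scores, key=scores.get, reverse=True)[:top]

# Hypothetical usage with transcripts from two creators with different goals:
# freqs_a = relative_frequencies("channel_a_transcripts.txt")
# freqs_b = relative_frequencies("channel_b_transcripts.txt")
# print(distinctive_words(freqs_a, freqs_b))
```

A crude ratio like this only surfaces vocabulary one creator leans on more heavily than another, but it is a first signal of differing social, political and personal goals.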

The misrepresentation of YouTube by mainstream mass media downplays the positive effects that participation can have on modern discourse, such as honest debate and self-reflection, and muddies the multitude of forms that online hate can take (Murthy and Sharma, 2018). In a market whose goal is to sell fast stories rather than to inform, amplifying the site’s hostility pays better than demonstrating its potential benefits, or the actual risks one should be aware of. Murthy and Sharma (2018) identify YouTube as a frustratingly unique platform: a community that isn’t really a community, a participatory medium where connections are not really being made, where responses can take the form of comments or new, isolated videos that are rarely easy to track, and one that attracts disproportionately more anti-social behaviour by being too large to moderate effectively and by granting total anonymity to commenters. This inflated aggression has shone a bad light on the platform, caused problems for researchers who wish to categorise and measure its influence, and for everyday users who may not realise the true intent of the creators they follow until they are trapped in a hate-filled echo chamber.

A ‘dark enlightenment’ has taken the cyberworld by storm: a regression to paleoconservatism dressed up as ‘intellectualism’ (Sarkar, 2019b). The values promoted are often open to interpretation, not overtly advocating fascism and white supremacy, but vague enough that viewers can easily be drawn to such groups (Shaun, 2019). These ideologies stemmed primarily from atheist YouTube channels, and for some time it was near impossible to distinguish ‘edgy humour’ from what would devolve into sincere bigotry (Klein, 2019; Sarkar, 2019b). The minimisation of authority on YouTube created a space for content to be discussed anonymously, in a comment section that is not moderated except on rare occasions by the channel creator (Sarkar, 2019a). A reactionary movement was spread by creators who saw an opportunity to capitalise on the insecurities of (mostly) young, white men whose identities were (in their minds) being attacked (Klein, 2019), and YouTube seemed to meet every criterion necessary for the bigotry to flourish.

Ideologies develop differently on YouTube than on other social media sites, since the focus is on a single creator and conversations stem from a single source (the video). The highest-rated videos tend to be those that capture a concern, interest, sentiment or topical subject that the target community can relate to in some way (Beers Fägersten, 2017). As on reddit, comments and posts are sorted by popularity by default, meaning the most visible items will be those that capture the core sentiments of the group in question, or spark conversations the group feels are exigent. On Twitter, which receives the most attention in academic discursive inquiries (Farrell et al., 2019), hashtags and following may create an illusion of community, but there is no mechanism by which users can organise themselves into a discussion on a single, clearly specified topic among others with similar discussion goals, and no means of excluding irrelevant content that distracts from that topic. Reddit and YouTube are distinct from Twitter in that they have community boundaries: on reddit, each subreddit has clearly stated intentions and community guidelines, and many state that unrelated or ill-meaning material will be removed by moderators; on YouTube, the creator has sole authority over the content of the channel and will often allude to this on their ‘about’ page. The key distinction is the singular primary voice of independent neopolitical YouTube channels, from which all discussion threads diverge.
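To make the sorting mechanism concrete, here is a minimal sketch of the default ‘top’ ordering described above. The Comment type and the engagement-based key are assumptions for illustration only; reddit’s and YouTube’s actual ranking algorithms are more complex and partly undisclosed.

```python
# A minimal sketch of 'top' ordering: items the group collectively rewards
# rise to the top. The Comment type and scoring are illustrative assumptions,
# not either platform's real API or ranking algorithm.
from dataclasses import dataclass

@dataclass
class Comment:
    text: str
    likes: int
    replies: int

def top_sort(comments):
    """Order comments by engagement, most popular first."""
    return sorted(comments, key=lambda c: (c.likes, c.replies), reverse=True)

thread = [
    Comment("off-topic aside", likes=2, replies=0),
    Comment("captures the community's core sentiment", likes=1840, replies=312),
    Comment("sparks a debate the group finds exigent", likes=970, replies=455),
]

for c in top_sort(thread):
    print(c.likes, c.text)
```

Because the ordering is driven purely by engagement, whatever the group collectively rewards becomes the most visible, which is why the top comments tend to mirror the group’s core sentiments rather than a moderator’s or outsider’s framing.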

An important observation about YouTube discussions, Wynn acknowledges (Klein, 2019), is that there is no guise about what the topics are: people are more willing to openly admit that the discussion is about race or gender, rather than framing it as violence, taxation, etc., as they would have on mainstream outlets. Murthy and Sharma (2018) characterise this as neoteric discrimination, whereby public but anonymous users can actively express opinions with no immediate consequence, and can thus dissociate from the reality of their actions. For anyone to truly identify the difference between jokes and hate speech, total and long-term immersion in that community is required. As Suler (2000) notes, there are nuances an outside researcher simply will not see in an online community if their first experience of that community is the research itself. Murthy and Sharma (2018) explain how this leads to misrepresentations in the mainstream, their example being the misframing of systemic racism as mere incivility born of online disinhibition, and thus as having little to do with reality. Dicks et al. (2005) and Charmaz (2006) both quote Dey’s (1999) advice on knowing the “difference between an open mind and an empty head”, describing the need for pre-fieldwork in any ethnography project, so the researcher is informed about which aspects of the data they are including, and which they are excluding.
