Fact-checking the Daily Mail – assessing accuracy and credibility

For our fact-checking project we took an in-depth look at five articles on the website of the British newspaper the Daily Mail. We chose the Daily Mail because it is one of the most widely circulated newspapers in Britain, and its website, Mail Online, is currently among the most visited online newspaper websites.

The popularity of the Daily Mail was not our sole motivation for scrutinising its articles. At the beginning of 2017, Wikipedia announced that it would no longer accept the Daily Mail as a reliable source on its platform, citing its “reputation for poor fact checking, sensationalism and flat-out fabrication.” This was an unprecedented move for Wikipedia, and it inspired us to find out first-hand what is actually so wrong with the content the Daily Mail publishes.

Using the methods described by the Science Literacy Project, we assessed the following five articles and judged them on their accuracy and credibility:

It is fair to say that the Daily Mail provided more decent articles than we initially expected. We had assumed we would find overtly framed articles riddled with false statements and unsubstantiated, sensational claims, but in most articles the paper did not do that bad a job, though definitely not a good one either. Still, we found enough issues to indicate that parts of the Daily Mail’s output are of low journalistic quality.

The Daily Mail has a tendency to over-sensationalise stories. We found this in the articles on Jamal Khashoggi, Obama on Trump, and the giant shark. Although the headlines mostly told the truth, words such as ‘giant’ (shark) and ‘confused, angry racist’ (Obama on Trump) were added to draw in readers and did not necessarily reflect the content of the articles. This sensational writing style runs through the majority of articles the Daily Mail publishes, something a quick glance at its homepage confirms.

The Daily Mail also has difficulty with quotations. In the ‘Obama on Trump’ article, for example, quotes were slightly adjusted to make them sound more aggressive, and in the ‘Somali-born terrorist’ article, parts of longer statements were omitted in a way that changed their meaning. That a newspaper cannot reproduce statements and write down quotations without removing or adjusting information is, of course, very troubling. Manipulating quotations changes how events are understood; especially when a topic has a large societal impact, a newspaper should not present half a quote or a slightly altered one. Doing so encourages misinterpretation of what was actually said about the event and thereby actively misinforms the newspaper’s audience.

Another pattern we stumbled upon is that the Daily Mail does not put in the effort to update its articles. This occurred in the article on the giant shark, in which an updated Facebook post was not included in the article’s update, and in the article on Khashoggi’s death, where the correct names and titles of the people involved, wrong in the original publication, were never added in the update. By updating articles inaccurately, or not at all, the Daily Mail lets the information it provides become misleading in the long term, which is of course not preferable. On its website, especially, it should not be difficult to update an article and rectify errors.

To conclude, although the Daily Mail produced articles of higher quality than we expected, we still found several troubling patterns that are likely to misinform its audience. Making headlines or content overtly sensational is perhaps not that bad (it is part of the Daily Mail’s identity), but adjusting statements and quotations, or updating articles inaccurately or not at all, is a real concern. This does not happen by accident: the Daily Mail actively tries to influence public opinion by spreading half-truths and omitting vital information.

 

— D.K. Degeling, A.N.H. Hjelt, J.R. van Nierop, J. Sam, Y.P. Samarawira

Data visualizations – misleading or deceptive?

Data, data and more data – that is roughly how one could describe the current trend in society. In this age of digitalization we have seen hyper-connectivity spread throughout the world, especially through social networking platforms. This has created a situation in which machines, companies and anyone else with access to the available information may know more about you and me than we thought we knew about ourselves. Data can be used and applied in countless ways, which raises the question of how companies and other actors can maximize their output using the data in their extensive databases.
It is widely known that big data, for example, can act as a valuable tool for marketing professionals in so far as it helps create more targeted ads, increasing the potential for businesses to sell their products. But data can also serve as the foundation for visualizations and illustrations (graphs, charts, diagrams, etc.). These visualizations can be anything from a graph describing the past five years’ turnover in an annual statement to a graph illustrating the expected benefits of acquiring a company’s product. Yet as Beattie and Jones (2002) note, even the information provided in corporate annual reports can be inaccurate. It is therefore evident that data can be applied and used in many different ways – even to mislead or deceive. In my own view, the increased use of data should ideally enrich, improve and simplify our everyday lives, but that is probably a utopian thought, as it is certainly not always the case.

I will claim that as more and more data become available, the foundation is laid for more misleading and also deceptive visualizations being deployed. Furthermore, I will contend that there is a clear distinction between misleading and deceptive visualizations.

Misleading or deceptive – the theory behind it

Data visualizations have always been subject to questions about their validity, but as Alberto Cairo notes, “Charts, graphs, maps and diagrams do not lie. People who design graphics do” (Cairo, 2015, p. 104). Cairo also points to a clear distinction between deceptive and misleading graphics. He argues that deceptive visualizations must involve an intent to deceive, “knowing the truth and hiding it, or conveying it in a way that distorts” (Cairo, 2015). A visualization can also be misleading, but the difference is that this is not a conscious intervention by the designer; it can be the result of “naive mistakes while analyzing the data or representing the data” (Cairo, 2015, p. 104). So the difference between misleading and deceptive data visualizations, according to Alberto Cairo, lies in the intent of the designer.

However, different understandings and interpretations do exist in academic research. Pandey et al. (2015) argue that deception does not necessarily require intent on the part of the designer. The authors stipulate that deceptive visualizations can also reflect a poor skill level, for example not knowing best practice in statistics (Pandey et al., 2015). Although this is a valid point, I find myself leaning towards agreement with Cairo, because, as mentioned above, it is my view that the increasing amount of data is leading to even more misleading information and therefore also to deceptive tactics being deployed. Just think of Donald Trump and his cries of “fake news”: during his short time in office we have seen everything from manipulated inauguration pictures to, more recently, a doctored video of CNN reporter Jim Acosta that was circulated in order to make his actions look more aggressive (Harwell, 2018). But let me try to give some evidence for what I would argue is the difference between misleading and deceptive data visualizations.

Can a visualization show intent?

One common source of misleading data visualizations is a designer displaying too much information in one graph (Cairo, 2015). As you can see in the picture below, it is very difficult to make sense of what is happening: so much data is presented that it is impossible to single out any individual data points.

Too many data points make it impossible to isolate anything or make sense of the visualization (Hogle, 2018)

So why would somebody illustrate their data this way? There can be many reasons, but one of the more prominent is that it is a great way to “bury” bad news (Hogle, 2018). This begs the question of how to perceive such a graph: is it misleading or deceptive? As argued above, the classification rests on the intent of the designer, and since we are not aware of the intent in this case, it is difficult to reach an indisputable conclusion. However, I will argue that this particular graph is more misleading than deceptive, because even though it could “bury” some bad news, it still presents all the available data. I would therefore attribute its misleading character to what Cairo calls naive mistakes by the designer.
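To make this “burying” mechanism a bit more concrete, here is a minimal sketch in Python with matplotlib. The data is synthetic and invented purely for illustration (it is not the data behind the figure above): the same declining series is practically invisible when drawn together with thirty similar lines, but obvious once it is highlighted.

```python
# A minimal sketch with synthetic data (invented for illustration).
# Thirty noisy series plus one steadily declining series: drawn all at once
# the decline is buried; highlighted, it is obvious.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
months = np.arange(12)
noise = rng.normal(100, 30, size=(30, 12))                   # thirty noisy, directionless series
decline = np.linspace(100, 60, 12) + rng.normal(0, 3, 12)    # one series quietly losing 40%

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4), sharey=True)

# Left panel: everything plotted together; the declining series disappears in the clutter.
for s in noise:
    ax1.plot(months, s, color="grey", alpha=0.5)
ax1.plot(months, decline, color="grey", alpha=0.5)
ax1.set_title("All 31 series at once")

# Right panel: the same data, but the declining series is emphasised and the rest dimmed.
for s in noise:
    ax2.plot(months, s, color="lightgrey", alpha=0.4)
ax2.plot(months, decline, color="crimson", linewidth=2, label="declining series")
ax2.set_title("Same data, one series highlighted")
ax2.legend()

plt.tight_layout()
plt.show()
```

Whether the left-hand panel counts as misleading or deceptive depends, as argued above, on whether the designer chose that presentation knowing what it hides.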

Another common practice, which is not directly a visualization problem but which I will argue is a very important aspect of data visualizations, is to describe or label the data inaccurately. This means that even though the visualization itself is accurate, the explanation attached to it is wrong or misleading (Hogle, 2018). The picture below is an example of this.

Map illustrating the county-by-county results of the 2016 US Presidential Election (Hogle, 2018)

The visualization accurately portrays the county-by-county results of the 2016 US Presidential Election. The picture has been proudly used by Trump, and you can understand why: it clearly shows far more red (counties that voted for Trump) than blue (counties that voted for Clinton). The problem with the illustration becomes evident, however, when we look at how it was deployed in Trump’s favor. The next picture, a book cover titled “Citizens for Trump” that uses the same visualization, shows why.

Book cover showing how the same image can be made deceptive by use of text (Hogle, 2018)

As you can see, the picture is paired with the word “citizens”. That word does not accurately reflect the data in the original visualization, since “citizens” arguably implies a number of votes rather than a number of counties. Furthermore, the counties in the midland are far less populated but cover a larger area, and therefore produce more red in the visualization (Hogle, 2018). Based on this, I will argue that this illustration is not only misleading but deceptive, because it seems to reflect a clear intent from the designer to alter the meaning of the visualization.
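The arithmetic behind this point can be made explicit with a toy example. I should stress that the numbers below are entirely hypothetical and only meant to illustrate the mechanism: one side can win the overwhelming majority of counties, and therefore most of the map’s area and colour, while the other side represents far more people.

```python
# A toy, entirely hypothetical example (the numbers are invented, not the 2016 results)
# showing why "most counties" and "most citizens" can point in opposite directions.
counties = [
    # (winner, population of county)
    ("red", 20_000), ("red", 15_000), ("red", 25_000), ("red", 30_000),  # many small counties
    ("blue", 900_000),                                                   # one large urban county
]

red_counties = sum(1 for winner, _ in counties if winner == "red")
blue_counties = sum(1 for winner, _ in counties if winner == "blue")
red_people = sum(pop for winner, pop in counties if winner == "red")
blue_people = sum(pop for winner, pop in counties if winner == "blue")

print(f"Counties won: red {red_counties} vs blue {blue_counties}")                    # red dominates the map
print(f"People in won counties: red {red_people:,} vs blue {blue_people:,}")          # blue dominates the population
```

Run as written, the sketch reports four red counties against one blue, but 900,000 people in the blue county against 90,000 in the red ones, which is exactly why attaching the word “citizens” to a county-level map misrepresents it.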

Final thoughts

Although this blog has not explored ethical considerations, it is important to note that these questions about misleading and deceptive visualizations are, as Cairo notes, shrouded in ethical questions (Cairo, 2015). This is also supported by Marco and Larkin (2000), who state that “The reporting of data should be done with honesty and integrity, and every effort should be made to report data in the scientifically most accurate method” (Marco & Larkin, 2000). But that discussion is for another time.

I would like to end this blog by asking you: do you think there is a difference between misleading and deceptive data visualizations, and if so, how important do you believe this distinction is?

 

References

Beattie, V. & Jones, M. (2002). The impact of graph slope on rate of change judgments in corporate reports. ABACUS, 38 (2), 177-199. Retrieved from: http://eprints.gla.ac.uk/774/1/Abacus38(2)177-199.pdf

Cairo, A. (2015). Graphics lies, misleading visuals: Reflections on the challenges and pitfalls of evidence-driven visual communication. In D. Bihanic (Ed.), New challenges for data design (pp. 103-116). Springer-Verlag, London. Retrieved from: https://infovis.fh-potsdam.de/readings/Cairo2015.pdf

Harwell, D. (November 8, 2018). White House shares doctored video to support punishment of journalist Jim Acosta. The Washington Post. Retrieved from: https://www.washingtonpost.com/technology/2018/11/08/white-house-shares-doctored-video-support-punishment-journalist-jim-acosta/?utm_term=.317b5b94a576

Hogle, P. (August 15, 2018). Misleading Data Visualizations Can Confuse, Deceive Learners. Learning Solutions. Retrieved from: https://www.learningsolutionsmag.com/articles/misleading-data-visualizations-can-confuse-deceive-learners

Marco, C. A., & Larkin, G. L. (2000). Research ethics: ethical issues of data reporting and the quest for authenticity. Academic Emergency Medicine, 7(6), 691-694. Retrieved from: https://onlinelibrary.wiley.com/doi/pdf/10.1111/j.1553-2712.2000.tb02049.x

Pandey, A. V., Rall, K., Satterthwaite, M. L., Nov, O., & Bertini, E. (2015). How deceptive are deceptive visualizations? An empirical analysis of common distortion techniques. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (pp. 1469-1478). ACM. Retrieved from: http://www.cs.tufts.edu/comp/250VIS/papers/chi2015-deception.pdf

The Brexit Referendum – a flawed and misleading question?

On the 23rd of June 2016 a majority of 52 % of British voters elected to leave the European Union, a decision which has since sparked an astonishing amount of controversy within both the UK and the EU. What has become evident in the aftermath is that a large amount of misleading information was present on both the “Remain” and “Leave” sides during the period leading up to the referendum. One of the most notable claims was that the British National Health Service (NHS) would gain 350 million pounds a week as a direct result of the UK leaving the EU. This claim has since been debunked by credible authorities, with the UK Statistics Authority labelling it a “clear misuse of official statistics”. It estimates the figure to be in the region of 250 million pounds, and adds that it remains unknown how much of the “saved” money would actually be allocated to the NHS (Shehab, 2018).

The “Leave” side, spearheaded by Michael Gove among others, also saw an opportunity to take advantage of the European refugee crisis by claiming that “Turkey is going to join the EU and millions of people will flock to the UK”. Gove even went as far as to claim that Turkey’s accession could happen within the next four years. Nevertheless, what happened? Just months after the referendum, the EU suspended negotiations with Turkey over questions of human rights abuses (Shehab, 2018). It is important to point out that the “Remain” side also deployed misleading information. One example is the claim by pro-EU campaigners that leaving the EU would spark a renewed push for Scottish independence. That said, the claim is not baseless: 62 % of Scottish voters voted to remain, against a mere 46.6 % in England (Dickson, 2017).

One of the buses that went around London during the campaign

Although these campaign tactics meant that various kinds of misleading information and mud-slinging were presented to the voting public, I will argue that the referendum question itself may pose an even greater problem of democratic deficiency. Here is why.

A flawed question design?

We know from the scientific literature that the questions used in surveys, voting polls, etc. should be clearly and accurately formulated so that respondents can express their intended answer (Fowler & Cosenza, 2008). But was this actually the case with the Brexit referendum? Researchers such as Thomas Colignatus have recently argued that the binary nature of the referendum question (to either leave or remain in the EU) fostered a disparity in the voting process and a misleading interpretation of the result (Colignatus, 2017).

The Referendum Ballot Paper (Source: Wikipedia)

He bases his argument on the claim that the referendum question was worded far too narrowly and in too binary a manner to capture the complexity of the issue itself. When interpreting the question, I admit I also find myself supporting Colignatus’s point. The notion of either leave or remain stands out as somewhat vague wording, which paves the way for countless understandings and interpretations. For example, does leaving the EU mean leaving everything within the European Union, or could it imply a situation where the UK still cooperates with other member states and remains part of certain supranational arrangements (e.g. cooperation on crime, the single market)? In a study on the design of referendum questions, Miljan and Alchin (2018) found that the particular wording of a question is an important factor in ensuring the legitimacy of the result. They state, based on advice from the International Institute for Democracy and Electoral Assistance (IDEA), that a question must not be “vague or capable of different meanings” (Miljan & Alchin, 2018). I will argue, though, that the Brexit referendum question was exactly that: vague and capable of different meanings. This becomes even more evident when the campaign tactics are taken into account. In that sense, it can be debated to what extent the vast amount of misleading information from both sides in the run-up to the vote made it even more difficult for voters to sort through the false claims and thereby understand what it actually meant to either leave or remain in the EU.

One or multiple questions?

The question that still begs to be answered is: what else could have been done to offset the democratic deficiencies of the binary referendum question? Although the literature seems rather inconclusive on this dilemma, it does offer another possibility – a multiple-choice ballot. As argued above, the question of independence is in reality more complex than a simple yes/no, so what may be needed is a ballot capable of capturing that complexity. Rosulek (2016) finds that a multiple-choice ballot could do exactly that. He argues that alternative ballot designs, such as rank ordering the options or splitting the issue into two or more sections, could be applied (a sketch of how a rank-ordered ballot might be counted follows below). A multiple-choice ballot would at the same time work in favor of increased democratic legitimacy because it affords more engagement by the voting public (Rosulek, 2016). On the other hand, a multiple-question ballot would require voters to inform themselves on a greater number of issues, which can be both demanding and time-consuming; this can lead to confusion, lower turnout and less informed decision-making. Finally, Rosulek suggests that leaving multiple-choice questions off the ballot can widen societal disparities. The results of independence referendums tend to frustrate and even aggravate people, which was also evident in the Brexit referendum, and that aggravation can lead to social polarization, for example between different age groups (Rosulek, 2016) – a problem that became apparent in the aftermath of Brexit.
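To give an impression of what rank ordering could look like in practice, here is a minimal sketch of an instant-runoff count over a rank-ordered ballot. The three options and the handful of ballots are hypothetical and invented purely for illustration; they are not taken from Rosulek (2016) or from any actual ballot design.

```python
# A minimal, hypothetical sketch of one way a rank-ordered ballot could be counted
# (instant-runoff style). Options and ballots are invented for illustration only.
from collections import Counter

OPTIONS = ["Remain", "Leave, keep single market", "Leave entirely"]

# Each ballot ranks the options from most to least preferred.
ballots = [
    ["Remain", "Leave, keep single market", "Leave entirely"],
    ["Leave entirely", "Leave, keep single market", "Remain"],
    ["Leave, keep single market", "Remain", "Leave entirely"],
    ["Leave, keep single market", "Leave entirely", "Remain"],
    ["Remain", "Leave, keep single market", "Leave entirely"],
]

def instant_runoff(ballots, options):
    """Repeatedly eliminate the weakest option until one holds a majority of first preferences
    (ties are broken arbitrarily in this sketch)."""
    remaining = set(options)
    while True:
        # Count each ballot for its highest-ranked option that is still in the running.
        tally = Counter(next(o for o in b if o in remaining) for b in ballots)
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > len(ballots) or len(remaining) == 1:
            return leader, tally
        remaining.remove(min(remaining, key=lambda o: tally.get(o, 0)))

winner, final_tally = instant_runoff(ballots, OPTIONS)
print("Winner:", winner)
print("Final tally:", dict(final_tally))
```

In this toy count, the “Leave entirely” option is eliminated first and its ballot transfers to that voter’s second preference, which is exactly the kind of nuance a binary leave/remain question cannot capture.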

Final thoughts

I hope that this blog has provided you with a different take on the Brexit debate and given a little more insight into how vital the wording of a question can be for the outcome of matters as important as an independence referendum. I think it is fair to conclude that the Brexit referendum, especially with regard to the campaigning, was one of a kind. Based on the different viewpoints throughout this blog, I would argue that a referendum with multiple-choice questions would have promoted a fairer campaign and a more democratic referendum. What do you think?

References

Colignatus, T. (2017, 17th of May). “The Brexit referendum question was flawed in its design”. Retrieved from: http://blogs.lse.ac.uk/brexit/2017/05/17/the-brexit-referendum-question-was-flawed-in-its-design/

Fowler, F.J., & Cosenza, C. (2008). “Writing effective questions”. In E.D. de Leeuw, J.J. Hox, & D.A. Dillman (Eds.), International Handbook of Survey Methodology (pp.136-160). New York, London: Taylor & Francis.

Miljan, L., Alchin, G. (2018). “Designing A Referendum Question For British Columbia”. The Fraser Institute. Retrieved from: https://www.fraserinstitute.org/sites/default/files/designing-a-referendum-question-for-british-columbia.pdf

Rosulek, P. (2016). “Secession, Referendum and Legitimacy of a Ballot Text – Scholarly Reflection 1”. Politické Vedy. (4), 93. Retrieved from: https://www.academia.edu/30498114/Secession_Referendum_and_Legitimacy_of_a_Ballot_Text_-_Scholarly_Reflection

Shehab, K. (2018, 28th of July). “Final Say: The misinformation that was told about Brexit during and after the referendum”. Retrieved from: https://www.independent.co.uk/news/uk/politics/final-say-brexit-referendum-lies-boris-johnson-leave-campaign-remain-a8466751.html#explainer-question-4