Abstract
Based on ethnography from Australian digital newsrooms, this research shows how content production is split into two forms: Original news reporting is considered an act of ‘journalistic discovery,’ while content produced to appease metric indicators is considered an act of ‘metric confirmation.’ By conceptualising the digital space as a “glut of occurrences” (Tuchman, 1978: p.44-45) to be filled, the two case studies shown in this work inform how temporality and metrification intertwine to posit metric confirmation as low-risk, low-cost, high-gain work, while acts of journalistic discovery are comparatively high-risk and high-cost, with unknown outcomes. I argue that the flooding of newsrooms with metrics obfuscates other more crucial news values and poses challenges for the future of journalism when digital distribution is increasingly dependent on third parties while responsibility for commercial success has been shifted onto editorial staff.
Introduction
Modern Australian digital newsrooms have increasingly become a hybrid of humans and computers engaged in processes of calculation and responses to feedback systems that interpret metrics about digital behaviour and then feed that data back to editorial desks (Anderson, 2011; Christin, 2020; Petre, 2015). Much of the global technology discussed in the literature has been scaled to Australian news markets, and the wholesale integration of metrics and feedback systems in newsrooms has brought about a dramatic reconfiguration of news production amid swathes of job losses (Dawson et al., 2021) and changes to what qualifies as a successful journalism output.
The existential issue of how third-party systems and measures influence content outputs has become the subject of much scholarly debate (Anderson, 2011; Carlson, 2018; Cherubini and Nielsen, 2016; Christin, 2020; Meese and Hurcombe, 2021; Poell et al., 2022). Ross (2017) argued that “accelerated transformation of information technologies […] coupled with radical changes in media consumption patterns, challenges modern-day journalists to rethink some of the most fundamental tenets on which their occupation is based” (p.82). Carlson (2018) argued that quantification in newsrooms limited the capacity (and information) on which editorial decisions were based, as “what is measured is what is measurable, and what is not measurable is often ignored” (p.412), while Anderson (2013) stated the evolution of “instantaneous audience metrics and newsroom management strategies […] marked the primary axial shift in the journalist-audience relationship. And when it came to questions of the role of audience metrics and traffic figures, much of the ambiguity expressed by journalists with regard to their audiences disappeared. They, like newsroom managers, were obsessed with ‘traffic’” (2013, p.135).
Metric systems for prediction and validation entangle newsmakers into platform logics by default and house them inside a broader digital economy of attention. By economy of attention, I draw on Myllylahti (2018, 2020), who argued that the attention economy was related to the capture of “eyeballs”, drawing on Goldhaber (2006) to define it as “paying, receiving and seeking […] the attention of other human beings” (2018, p.4). Poell et al. (2023) highlighted the market complexities created by the use of digital metrics in news-making as “most news organizations employ external metrics—such as those based on platform data—as well as internal metrics, including measures of traffic to the news organization’s website or app” (p.3). Petre (2015, 2021) and Christin (2017, 2018, 2020) have also reviewed the ways news practices have changed with the introduction of traffic-monitoring metric software such as Chartbeat in the US and France, and Blanchett (2018, 2021) reviewed similar audience analytic software in newsrooms in Norway and Canada. Christin (2017) argued that for journalists, web metrics “say something about the success and impact of one’s articles in the public sphere. As such, web analytics are intertwined with strong emotions, ranging from pride to shame, depending on how the article is faring in the chase for the readers’ attention” (p.10). In Australia, software such as Chartbeat now sits among a suite of technological tools that newsrooms have implemented into their digital news-making processes. The question, then, is what happens to journalism when it is subject to these external metric influences? Can journalists maintain autonomy and act as a societal watchdog, engaged in “accountability journalism” (Schudson, 2020: p.33-34), or is news content now just another content option in a global digital economy of attention?
Aalberg and Curran (2012) argue that democracy functions best when its citizens are politically informed and “for normative as well as empirically supported reasons, it is desirable that the media adequately inform the electorate about public affairs” (p.34). If informational quality is poor, and/or the citizenry is less informed, there are potential knock-on effects to democratic health. Therefore, this research adopts the normative view that quality news has value to democratic health and political accountability.
This article focuses on three key areas. Firstly, it makes the theoretical case for reconsidering the assumed meanings of metric values in digital news-making. Secondly, by conceptualising digital metrics as a measure that creates an endless “glut of occurrences” (Tuchman, 1978: p.44-45) to be filled with content in the digital space, it argues that the cadence of important news sits outside the scope of digital measures. Thirdly, two case studies from ethnographic work conducted in Australia show how different news cadences can result in different responses to metrics and two distinct forms of production in practice: acts of journalistic discovery and acts of metric confirmation. By discovery, I refer to the purely journalistic bodies of work that may or may not result in metric success for a news network, but it is work that invariably heralds significant public value and public interest. Metric confirmation, in contrast, is welded to platform logics – engaged in the labour of content production, which, in its purest form, is only developed because technological indicators determine that it would likely drive digital ‘traffic.’ Acts of journalistic discovery, then, become an economic and social risk to produce when pitched against acts of metric confirmation – cheap and quick-to-produce content seen as a sure-bet metric reward for production.
This metrification of digital newsrooms obfuscates other quality markers evident in journalistic discovery and poses a moral quandary for newsmakers who are tasked with aligning news values, which drive their work and storytelling, with metric outcomes that are far more suited to platform preferences than journalistic ambitions. This ethnography evidenced platformisation at work, contributing to an emerging body of research concerned with the influence of major technology platforms on news outputs (Anderson, 2021; Nieborg and Poell, 2018; Nielsen and Ganter, 2018, 2022). The growing influence of these technologies raises serious concerns about the viability of editorial independence, fairness and balance in digital environments.
Measuring time and space amidst the cadence of life
News values are what drive journalistic story-building, and far from the idea that digital journalism is “everywhere and nowhere” (Hermida, 2019: p.178), news is still anchored in – and drawn from – specific temporal and geographical contexts. The news value inherent in an act of journalistic discovery is born from the sense that the story houses enough innate value to be told. What has significantly changed is the temporal and spatial logics of digital distribution. As such, a shift in distribution power has rendered digital news “fragmented, atomised, remediated in places, products and platforms” (Hermida, 2019: p.178), and the shifting meanings of time and space in digital distribution logics have had consequences for the metrics designed to measure journalistic success.
Notions of ‘time’ and ‘space’ and what they mean in the context of community (or even what community means) have varied throughout history. The resulting boundaries of journalism have likewise been difficult to define (Carlson and Lewis, 2015; Reese, 2021); nevertheless, the two are bound together. A story’s ultimate raison d’etre – its news value and reason for becoming – exists before a story is distributed into the market, where the production of metrics begins. In this way, the story is the product of the compulsion to tell it, born from a specific temporal context, and manifested in the work. This is what gives news its meaning and generative value. For Tuchman (1978), “the rhythm of a newsroom is designed to catch those occurrences that happen at the appropriate time in the appropriate place. Time and space are accordingly objectified or given solidity by those organizational arrangements” (p. 40). Newsrooms, thus, ordered events emerging from the cadence of life, which has its own rhythm. Mumford (1934) argued that prior to the introduction of clocks, matters of time were a “sequence of experiences” in life (p.17). “When one thinks of time not as a sequence of experiences, but as a collection of hours, minutes, and seconds, the habits of adding time and saving time come into existence. Time took on the character of an enclosed space: it could be divided, it could be filled up, it could even be expanded by the invention of labour-saving instruments” (1934, p.17).
Without getting into the ontological argument, the point Mumford made was that prior to the scientific era, “in the symbolic world of space and time, everything was either a mystery or a miracle” (p.19) and “the true order of space was heaven even as the true order of time was eternity” (p.20). These notions reflected an understanding of life’s sequences as matters of quality. I propose that this is journalism’s true value – as something that reflects, mirrors and mediates discussions around the quality of collective civic life amidst shared time and space. Thus, at its greatest, journalism’s deepest links are related to both the geographical scope of communities it seeks to serve, and the moments of impact of events amidst these geographies – that is, the cadence of moments. News production, then, is an ordered snapshot of those sequences that are significant and contextual.
Tuchman (1978) argued that for news-making, “the metaphor of ‘spatialized time’ is profound, for it emphasises that the social ordering of time and space stands at the heart of organized human activity” (p.39). Furthermore, she recognised that “the anchoring of the news net in time and space means that reporters and news organizations suffer from a “glut of occurrences” with which to fill the news product” (p.39). In this way, the news deadline was an opportunity to order the hierarchy of life’s sequences between temporal increments measured by the clock. With the advent of digital metrics, the values given to time and space have changed. Far from the perceived removal of deadlines, an ‘always on’ digital environment creates an endless “glut of occurrences” (Tuchman, 1978: p.44-45) to be filled. Those metrics measure volumes inside spaces – not values – and the need to fill the limitless digital glut creates an invisible pressure toward metric confirmation to justify these void-filling processes. Espeland and Sauder (2007) have warned that measures are reactive and “elicit responses from people who intervene in the objects they measure” (p.2). This “concept of reactivity”, they suggest, “mediates two understandings of measures: as valid, neutral depictions of the social world, and as vehicles for enacting accountability and inducing changes in performance” (p.6-7). Their presence, then, has the potential to change behaviours and outcomes toward them by the people engaged in making meaning out of them, and the case studies in this article will show these processes at work.
The introduction of metrics into newsrooms has also marked changes in responsibility and autonomy for journalists working with them. Coddington (2015) argued that “the wall between the journalistic and business-oriented functions of a news organization, is one of the foremost professional markers of journalism, a principle that is reinforced most strongly in the central sites of socialization” (p.67). The introduction of metrics in newsrooms marks a crumbling of this wall, as editorial staff are now tasked with what is largely the commercial arm of news-making – its distribution. This shift – from creating one arm of content that catered to a traditional two-sided media market in print and broadcast – now sees journalism fed into a much bigger ecosystem of multisided digital media markets dominated by platforms. As Nieborg and Poell (2018) argued: “what distinguishes multisided platform markets from past market configurations is that for platform holders, content developers can become dispensable” (p.4282).
Newsroom buy-in to metrics facilitates a growing dependence on third-party platforms, such as search engines and social media platforms, for both distribution and metric rewards. It also pressures journalists to orient their working processes toward the platforms they engage with – even in organisations funded by the public purse. While newsrooms can produce content that meets the goal of high journalistic integrity and quality and high traffic/metric results, the two outcomes are not bound together, and the surfacing of analytics to journalists can obfuscate the difference between value and volume, especially when technological indicators are buzzing around journalists’ heads in the newsroom. As Leuven et al. (2018, p.799) suggest, journalists place enormous trust in the tools they use without knowing about potential built-in biases. “These tools have their own algorithms that decide which information is shown and which information is not” (p.802). If there is an overreliance on numbers and the numbers are bad, entire operations may turn bad with them.
The quality of metric data supplied by third parties cannot be assured, and the growing reliance on it for decision-making in newsrooms is a matter of concern when journalism itself has the power to bring about significant social change. Gillespie (2017) argued that “too often we treat the information providers as independent of search engines” when this position “profoundly overlooks the strategic efforts of the content providers. Precisely because information algorithms make judgments that can have powerful consequences, those interested in having their information selected as relevant will tend to orient themselves toward these algorithmic systems, to make themselves algorithmically recognisable” (p.64). Diakopoulos (2015) argued that “the opacity of technically complex algorithms” led “to a lack of clarity for the public in terms of how they exercise their power and influence” (p.398). As such, a focus on metric confirmation can lead news organisational processes to become subject to platform attention biases, which may have no preference for news specifically within surfacing hierarchies at all.
One does not need to look far to find evidence that platforms have enormous digital distribution power. In response to looming News Media Bargaining Code (NMBC) legislation (ACCC, 2020; Bailo et al., 2021; Frydenberg and Fletcher, 2021), Meta’s Facebook site temporarily banned news information in Australia in 2021 (Choudhury, 2021; Darmanin, 2021; McGuinness, 2021). Reports also indicated that Google trialled news-blocking in its search function (Evershed, 2021). More recently, Meta blocked news again, this time in Canada, in response to the introduction of the Online News Act (Hermida and Young, 2023). These examples were clear markers of platform power to shift dials in the distribution market if the legislative or economic environments did not suit them. As such, how newsroom metrics intertwine with content outputs and technology platforms warrants close observation and scrutiny when considering these processes in relation to informational health, news distribution and visibility, as well as for the normative democratic role of journalism in holding powers to account.
This research aimed to explore the bridging behaviours between the newsmakers and the technologies they deployed in Australian digital newsrooms. The goal was to better understand the meaning(s) and reasoning of news outputs by asking what digital newsmakers were measuring, how they understood those metrics and, later, how these understandings linked in with the broader digital news ecosystem in Australia and beyond. What the ethnography surfaced were the ways that the metrification of digital production had gradually enabled and supported the platformisation of news while obfuscating the primary purpose of engaging in journalistic discovery. Pressure to evidence positive business-oriented outcomes, coupled with shrinking news resources and growing technological advancements, created a working environment that saw news workers ‘feed the beast that eats them’ (fieldwork, 2019). This research is significant because it highlights an underlying concern with digital news production processes present in Australia and many other countries today. The introduction of the News Media Bargaining Code in Australia has not changed this and, instead, has served to increase market dependence (and dispensability) of newsmakers on platforms.
Method
The two case studies in this research took place at national digital news hubs of networked news organisations in Australia at the very end of 2019, during the Black Summer bushfire season. The observation period concluded only weeks before the Coronavirus pandemic gripped the world. The case studies are part of a broader body of ethnography that spanned news organisations in three states of Australia, observing networked digital operations at national, state and regional levels of public and commercial media, with legacies in print and broadcast. The research involved almost 200 hours of on-site ethnography, and the case studies were examples of production behaviours that surfaced repeatedly and were drawn from data saturation. The case studies highlighted ways metrics obfuscated journalistic values by replacing work purpose with reward mechanisms based on volumes.
Newsroom ethnography and interviews were conducted in the spirit of Actor-Network Theory by “following the natives” (Latour, 2005: p.62) to explore the interconnected relationships between metric tools and news workers, with iterative interview questions based on participants’ knowledge. All participating newsrooms and individuals agreed to take part on the condition of anonymity. Pseudonyms (marked with an asterisk*) were used for organisation names. Written consent was given for recorded interviews. Around 20 hours of unstructured interview data was recorded. It included interviews with senior editorial staff, group news editors, managing editors, news editors, and senior digital production staff on site. Additional accounts were also given by on-site news staff, including developers, journalists, sub-editors, producers, social media editors, video production editors, senior television producers and digital distribution specialists. While the subsequent pandemic lockdowns resulted in delays with data processing, the data should not be dismissed, as it speaks to the broader issue of digital platformisation processes amid news market erosion in Australia and beyond.
The first case study was an example of metric confirmation. Digital newsroom workers, in this instance, felt the pressure of the “glut of occurrences” (Tuchman, 1978) and the need to fill digital space with metric-driving content in a low-cadence news environment. Rather than focus on journalistic discovery work, editorial leaders chose to pursue a form of non-news, signalled by third-party systems, in pursuit of quick traffic wins (metric confirmation). The second case study highlighted how an act of journalistic discovery had news value built into it from the outset, but throughout the day, these values were obfuscated by the hunt for metric confirmation. In this case, news workers felt a sense of failure when the metrics they were able to harvest from the event were lower than anticipated, even though other important journalistic values such as “accountability journalism” (Schudson, 2020) and significant news impact were present.
Case study 1: The Jen effect – an act of metric confirmation
The American television show Friends was a global phenomenon in the 1990s, and show-star Jennifer Aniston’s debut on Instagram in 2019 was reported to have ‘broken’ the platform (Leskin, 2019). That day, I was at the main digital hub of an Australian nationally networked news organisation. A new role had just been created – Search Engine Optimisation Manager (SEOM). Seconded from another role within the company, the SEOM was involved in experimenting with a new software tool designed to help predict search terms that were likely to trend online.
The purpose of the new tool was to help the newsroom determine if there were areas of news value that people may be seeking answers for via search engines. If that value was not being met by the newsroom, having these search terms highlighted was an opportunity to produce content around subjects that the desk may not have been aware there was a digital appetite for, the SEOM said. Concurrently, the tool could also recommend keywords for use in headlines of news already being produced. This, I was told, was to help the newsrooms use ‘better’ words – that is, words more likely to be searched – to facilitate news visibility and distribution across search engines.
Adding these words into headlines, the SEOM said, helped digital news producers find a better fit between what they were producing and what was being searched for based on the feedback from the software. In this way, it geared newsrooms to optimise content for search engine visibility and distribution.
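The vendor and internals of the tool were not disclosed to me, but the workflow the SEOM described – matching trending search terms against a draft headline and suggesting higher-traffic substitutions – can be illustrated with a minimal, hypothetical sketch. All function names, trending terms and scores below are invented for illustration and do not represent the actual software:

```python
# Hypothetical sketch of the keyword-suggestion workflow described above.
# The trending terms and their scores are invented; the real tool's data
# sources and ranking method were unknown even to newsroom staff.

def suggest_keywords(headline: str, trending: dict[str, float],
                     top_n: int = 3) -> list[str]:
    """Return the highest-scoring trending terms absent from the headline."""
    words = set(headline.lower().split())
    missing = [(term, score) for term, score in trending.items()
               if term.lower() not in words]
    missing.sort(key=lambda pair: pair[1], reverse=True)
    return [term for term, _ in missing[:top_n]]

# Invented example: a draft headline that omits the trending search terms
trending_terms = {"jennifer aniston": 0.92, "instagram": 0.81, "bushfires": 0.65}
draft = "Friends star joins social media"
print(suggest_keywords(draft, trending_terms))
# → ['jennifer aniston', 'instagram', 'bushfires']
```

In such a system, the suggested terms would then be worked into the headline by a producer, orienting the copy toward what search engines are already surfacing rather than toward the story itself.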
The tool, not built by the news organisation but supplied by a third party, was being trialled by the organisation to determine its usefulness to production. The SEOM indicated that their new role had been a ‘trial run’ and was still evolving in capability. New digital roles, it seemed, were popping up all over the newsroom, and the sentiment of workers was that the only thing they knew for certain was that their positions were constantly changing – or disappearing.
While the SEOM admitted to not really knowing how the new tool made search predictions or how it knew which words to source, they were diligent in pointing out the interesting capabilities that were built into it. The editor-in-chief (Big Ed) would later tell me that by creating better word patterns in headlines, the organisation could leverage the ‘street cred’ it had with search engine web crawlers.
Back on the news desk, it was gearing up to be what might be termed a ‘slow news day’. The Home Page Editor (HPE), who was charged with ensuring content was ‘ranking well’ on Chartbeat, had been running some of the network’s pre-written localised journalism and features content online. Unfortunately, the HPE felt this content “wasn’t really working” according to Chartbeat figures and was hoping something better would come along.
The SEOM flagged to the HPE that the new software tool had signalled Jennifer Aniston was a hot search topic. The HPE asked the Day Editor, in charge of editorial production, if someone could produce some written content on Aniston. Initially, the Day Editor chuckled and questioned whether Aniston was news. However, the SEOM’s data was indicating there was a lot of search action around Aniston.
Adjacent to the news desk sat the production desk, where some writers and sub-editors quietly grumbled and scoffed about whether something on Aniston was the type of content that this organisation should really be ‘wasting’ resources on. The producer, who was asked to do the work by the Day Editor, immediately tried to get out of it, explaining that they were busy with a sports story. However, the Day Editor could see on the Chartbeat board that numbers were flat, and so was insistent. The producer did some bargaining. They said they needed to finish the sports story first, and then they would look at Aniston. The Day Editor agreed to this, and chatter on the production desk stopped. The silence was significant. This was not a story that the desk felt needed writing.
Later in the day, the same producer was asked again where the Aniston story was. They insisted they had been busy and had ‘accidentally forgotten’ about it. It was clear by this stage that the writer had attempted to either stall, delay, or avoid writing the content altogether. After a little more nudging from the HPE, the writer spent no more than 15 minutes on an Aniston story before it was published. Just as the SEOM’s new tool had predicted, the story rose quickly to the top of the Chartbeat board. This had the effect of validating both the new tool deployed by the SEOM and the Homepage Editor’s persistent request. According to all of the technological feedback mechanisms at play, the content was a traffic success and a numbers winner.
The Jen effect highlighted how the hunger to fill a measurement glut by news editors resulted in an act of metric confirmation – that is, how the metric technologies in a newsroom incentivised media workers to pander to the noise created by feedback alert systems rather than take on the risk of engaging in an alternate act of journalistic discovery, which would herald no guarantee of metric delivery. Although the culture of workers on the desk was clearly in opposition to the development of an Aniston story, what this case showed was how technology was self-validating. The alert system prompted editors to focus on producing a sure-bet metric win that required minimal editorial attention. It was efficient. Content turnaround was fast and cheap, and the reward was traffic (metric confirmation). By publishing a story on Aniston and harvesting the traffic, attention increased, and metric averages improved, returning the screen to benchmarks that were considered ‘normal’ for the desk at that time. This process also showed how the pressure to service these benchmarks – and distribution platforms – was ever-present thanks to metric-flagging systems, and how non-news content was completely adequate for this purpose. It also highlighted how editorial roles had evolved to meet distribution expectations, a direct consequence of the digital platformisation of newsrooms.
Case study 2: The Newsbomb – an act of journalistic discovery
It was just before 8.30 a.m. at Origin* digital, and the newsroom was running at a high pace. The evening before, a specialist television investigative team within Origin’s own broadcast network had aired an explosive segment that had horrified the nation. The Newsbomb* segment was an original act of investigative journalistic discovery that I was told had taken nearly two years to complete. The investigative team at the heart of the broadcast story had faced some big battles to get the story to air, including the threat of legal injunctions. The segment had graphically exposed abuse and raised allegations of corruption within a major industry. Public outrage ensued.
Given story sensitivity and to avoid network leaks, only senior editors at Origin had been made privy to the digital version’s production before the show aired. For Origin’s Digital Night Editor, there had not been a lot of signalling about Newsbomb’s importance, and for the large part, they had not been aware of its ‘bigness’. As such, its digital ‘treatment’ (to adopt a word used frequently on the shop floor) was a habitual one. It had been signalled as something being sent over from television to ‘go up’ after the show.
So, the Night Editor executed the plan as directed and published the digital copy after the show aired. Within minutes, other digital media from outside the Origin network had produced re-writes of Newsbomb’s segment, using quotes, screen grabs and video snippets to pad out their digital copy. These were rival news media networks, leveraging attention traffic from the story to service their websites. In this regard, early reproductions of Newsbomb on the evening were acts of metric confirmation by other news sites, who recognised both the in-built news value and traffic value of the story.
The subject at the heart of Newsbomb began to trend on Twitter, and by morning, radio, breakfast television, newspapers and other digital news sites were running the story prominently. Newsrooms across the country were quick to line up interviews with experts and politicians on breakfast television and radio, and with each new development, the story advanced at a lightning pace. Its news value kept expanding, and new acts of journalistic discovery advanced the story in real-time at a high cadence. Rival news networks had created ‘reaction pieces’ with quotes from powerful industry leaders, police, politicians and the public about what Newsbomb would mean for affected industries and whether there would be investigations, reviews or criminal charges. With these newer angles, other media had begun to outrank Origin in the digital traffic race.
At Origin, the morning team quickly realised its digital copy was going out of date. The problem was, Origin digital did not have any journalists available to cover the follow-up stories unfolding across the country. Both the holding story from the night before and an additional piece produced by the night/early morning producers were indicating high traffic volumes on Origin’s Chartbeat Big Board for the website, on Mobile and Apple News, but the indicators were not where the team felt they should be. They were not high enough for the national attention the story had garnered. The news editor did his best to get momentum and rally the troops to production. Newsbomb should be doing so much better on Chartbeat, he said aloud in the newsroom. It was their story; they owned it, and it was big. The numbers should be doing better…
As Origin scrambled to get some content together from its own broadcast and radio reporters, the Day Editor looked to Twitter and Facebook and flagged some content that piqued his interest with the Home Page Editor (HPE), in order to try and freshen up the story angle. He asked if an expletive quote that had been flagged on social media was referenced in the television show footage. He had not seen the show the evening before, he would later tell me. The team around him seemed unsure. The HPE started scanning Newsbomb footage, looking for the quote, and soon found it. The Day Editor was baffled about how the quote had not been picked up earlier and run with as a headline angle. “That’s got to be it, that’s got to be the headline, surely,” he exclaimed.
The HPE suggested a headline test via Chartbeat to see if the expletive headline, along with a new, more visually stimulating thumbnail image, would help improve the story statistics. The HPE opened the Chartbeat headline test function and placed a few different headline iterations into it. The HPE would later admit to me that they did not really know how “that all worked” except that the system would surface different headlines, like an A/B test, to different parts of the network, and after a certain amount of time, it would automatically determine which headline was most successful, then switch and serve the winning headline to all channels automatically.
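Chartbeat’s actual implementation is proprietary, and staff themselves admitted to not knowing how it worked. The general A/B mechanism the HPE described – serve headline variants to random slices of traffic, tally responses, then promote the winner everywhere – can nonetheless be sketched roughly as follows. The variant names and simulated click probabilities are invented for illustration:

```python
import random

# Rough, hypothetical sketch of the A/B headline test the HPE described:
# variants are shown to random slices of traffic, clicks are tallied, and
# the variant with the best observed click-through rate is promoted to all
# channels. The "true" click probabilities below are invented to simulate
# reader behaviour; a real system observes clicks rather than sampling them.

def run_headline_test(variants: dict[str, float], impressions: int = 10_000,
                      seed: int = 42) -> str:
    rng = random.Random(seed)
    shown = {v: 0 for v in variants}
    clicked = {v: 0 for v in variants}
    for _ in range(impressions):
        headline = rng.choice(list(variants))    # random slice of traffic
        shown[headline] += 1
        if rng.random() < variants[headline]:    # simulated reader click
            clicked[headline] += 1
    # Promote whichever variant achieved the highest click-through rate
    return max(variants, key=lambda v: clicked[v] / max(shown[v], 1))

# Invented variants: the expletive headline has a higher simulated CTR
variants = {"Plain headline": 0.03, "Expletive headline": 0.06}
print(run_headline_test(variants))
# → Expletive headline
```

The point of the sketch is that the winner is defined purely by click volume: nothing in such a mechanism weighs the journalistic merit of a headline, only its capacity to capture attention.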
Within half an hour, the test result was revealed. The Day Editor’s view was confirmed when the percentages in Chartbeat showed that the headline with expletives was the most ‘engaged’ according to the software – and by a significant margin. As the headline changed across the network, the story started to climb up the Chartbeat Big Board quickly. It soon moved to number one after previously having dropped down the ranks. The Day Editor stood up from his chair, raised his arms in a double fist pump and hissed a satisfied “Yes!”. Using the moment as an opportunity to take a quick toilet break, he strutted off down the hallway with a swing in his step. Victory was his.
Metrics for success
It was about a month before I would get an opportunity to speak to staff involved in producing Newsbomb. It was clear there had been some dissatisfaction from parts of the network with the execution of the work. Despite becoming a story of national impact, despite days of follow-up stories, criminal investigations, state government inquiries and industry promises of change, the sense that there had been ‘failings’ did not dissipate at Origin.
One producer explained to me the “bristling” effect of digital audience analytics on staff engaged in television news programming. For the style of investigative news Newsbomb sought, the producer said, the team worked because it felt that this type of journalism was important, and that they were certainly not in it for money or fame. So, the conflict between analytics and the journalism work was a frustration, and the idea of being a slave to measures had met strong resistance from the investigative team. At the end of the day, it was still about the journalism, the producer said, and the team’s priority had to be the production of the program and the important journalistic work they were hired to undertake – to the highest standards, with the resources they had at their disposal. Digital outputs were a secondary consideration.
The producer also argued that ‘digital’ operated in what they thought was essentially a commercial environment. For them, part of the friction with data was the inconsistency over which goalposts a television team should be striving for. Was a television show still striving for a television audience? Or perhaps they were aiming for audiences on native video via the app, or was it catch-up television? Or should they be outputting programs to YouTube, or creating written content to the digital news websites, or to the program’s website? Were they supposed to generate traffic for their program digitally? Or was it for the rest of the organisation at Origin, the producer asked. These competing metrics were used to make the point that it was hard to know where to focus, where the goalposts were and, ultimately, what success was supposed to look like.
This case highlighted some of the risks news organisations faced when they invested in lengthy acts of journalistic discovery that did not reap metric rewards. It highlighted how a failure to validate the work with metric volumes led to disappointment, confusion, and doubt about notions of news success. In this way, metric confirmation obfuscated the power and value built into the story. What started as an act of journalistic discovery became a scramble for metric confirmation to evidence traffic volumes. As news workers grappled with distribution issues, external organisations seized a cost-effective opportunity to cash in on Newsbomb for traffic. This ready-made news value created easier wins for other organisations harvesting news traffic, as Origin had already taken on the financial and legal risks associated with journalistic discovery without knowing at the outset whether it would reap metric rewards.
Discussion
As the source of the original reporting, the act of journalistic discovery that became Newsbomb created a traffic opportunity for other organisations engaged in acts of metric confirmation. Thus, Origin’s network became a target for traffic-harvesting by rival media organisations, obfuscating other news values generated by Newsbomb amidst a feeding frenzy of traffic competition in a finite temporal economy of attention. This ‘traffic loss,’ then, translated to a sentiment of internal failure at the network, as Origin staff struggled to keep up with the momentum of a story it had spent the money to surface.
The shortcut for other media organisations occurred when journalistic discovery happened at minimal cost to them because Origin had absorbed the risk. With news value built into the story, it then had broad appeal to other news organisations to evolve it.
The greatest beneficiaries of metric confirmation work, however, are those who can view and use metric information in the aggregate. Third-party platforms, engaged in traffic distribution, are best placed to leverage this information without the financial and legal costs associated with journalistic discovery or the labour costs associated with digital distribution.
In this regard, major platforms have a significant market advantage with high levels of market insight and low levels of investment risk, along with maximum levels of financial reward potential (via advertising revenue facilitated by data insights or even the sale of the data itself).
That a news organisation engaged in high-quality, high-value investigative reporting could consider it had failed when an entire national news ecosystem had relied on its journalistic discoveries for traffic success spoke volumes about the validating role of metrics in journalistic role performance. The story was a raging success. By non-metric standards, Newsbomb held strong accountability values at its heart and was “journalism of original reporting, presented in an emotionally compelling way, and asserting itself in the face of the powerful persons and institutions it covers” (Schudson, 2020: p.33). However, the metrics were deemed a failure. The fast flow of choices and metric rewards that stemmed from traffic-measuring technologies had a profound impact on Origin workers’ self-perception of whether they achieved successful production outcomes, even amid social impact and change. In this sense, the overall news impact was evidenced by state and national leadership promises of revision and change, but internal markers of proof (traffic success) were absent.
Concurrently, the additional noise generated by multi-organisational attention and metric competition, traffic syphoning and blocking, amid the invisible algorithmic news surfacing processes of third parties, masked the multiplicity of ways this act of journalistic discovery exemplified impact and benefited the national community through its existence. It also highlighted the pivotal role metrics play in justifying journalism investment. Accountability journalism is no longer justification enough. Metrics, it seems, have become the required form of evidence to justify whether the juice was worth the squeeze. Their use is necessitated by the increasing distribution power of platforms and, as such, aligns internal news production processes with external distribution goals. What is seen as successful distribution (traffic) in one organisation becomes replicated by other media players in the same market landscape, chasing the same metric outcomes.
The Newsbomb case showed how metrics failed to measure news values amidst volumes: news values were built into the story, but the metric story was one of distribution volume failure. This led to internal confusion among workers and doubt about their investment in the story. The numbers circulating back to Origin did not accurately reflect the power that this act of journalistic discovery collectively had across a national community, nor did they capture the alternate metric volumes generated for other news organisations. During the story’s evolution, internal metric feedback loops became a primary focus of news-making attention.
In contrast, The Jen Effect highlighted what media workers considered non-news through a process of organisational acculturation, which was then silenced by contradictory metric outcomes. By speaking to notions of what was not culturally accepted among journalists as news, the workers moved closer to defining the value of their role in that process, and for the communities they served, without being forced to specify what that value was.
The differences and similarities in these two case studies came down to news cadence. On a low-news-cadence day (e.g., The Jen Effect), metric indicators gave signals as to where digital attention lingered, with editorial staff left with a moral quandary over whether to chase content to service this attention (which does not need to be news to meet this end) and pander to search engines, or to invest in relevant local journalism.
Conclusion
The hit-and-miss nature of journalistic investment posits an act of journalistic discovery as high-cost and high-risk, while metric confirmation is cheap, fast and likely to supply expected high-traffic rewards, backed by a vast array of already-invested-in technological prediction tools ready to validate its volumetric worth. This is where the lines of value and volume become blurred. An act of journalistic discovery may or may not inherently contain the magic recipe of metric drivers within it, but it does not have to.
Metric confirmation is the platformisation of news at work and poses two key problems. Firstly, distribution success is increasingly shifted onto the news desk, while distribution power is simultaneously held by third parties. Secondly, metric confirmation is problematic for informational health and normative ideals relating to the democratic function of news in a landscape where platform distribution processes are largely unregulated.
What determines the choice to push news out to the world or to attempt to pull traffic from platforms is often the result of a temporal nexus – a moment in time where a drought in news cadences sees newsrooms chase easy traffic wins in lieu of chasing news to fill the digital glut. A slow news day may see an investment in low-value, high-gain content as opposed to the choice to use that time to make a greater investment in journalistic discovery. Newsmakers face the endless dilemma of whether to pursue quality journalism (value) or the commercial ends underpinned by traffic (volume).
Attention rewards offered by metrics may or may not be attached to news. As The Jen Effect shows, attention could just as easily be attached to any other content flagged online. Measures do not discern between news and non-news, value and volume, discovery or confirmation, for the very reason that Petre’s (2021) research points to: newsroom metrics indicate what is consumed, but they do not indicate why. They are hollow (p.105).
As the adage goes: what is interesting to the public is not necessarily in the public interest, and as Tandoc and Thomas (2015) warn: “If journalism is to help bring about the common good, it must provide the public with more than just what the public wants” as “a journalism that fulfils its communitarian role needs the autonomy to do so, especially with the multiple interests that seek to take advantage of the power of the press” (p.244). Where journalists seek out content that meets both ends of the value chain (journalistic discovery estimated to result in metric confirmation), the risk is that news workers may shy away from more challenging and untested discourses for fear of reprimand if story distribution fails. That has enormous implications for the types and forms of online discussions likely to occur and, subsequently, how they may fracture or emerge in other, less mediated and more problematic ways. When journalists unquestioningly trust predictive analytics systems, not only do they become increasingly dependent on third parties for information and confirmation, but they also gear their time and work toward the outcomes and values determined by those parties. That is very much an issue for journalistic autonomy, independence and integrity, with consequences for overall news balance and fair representation.
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
