Abstract
This article critically explores the ways in which the Web could become a more learning-oriented medium in the age of, but also in spite of, the newly bred algorithmic cultures. The social dimension of algorithms is reported in the literature as a socio-technological entanglement that has a powerful influence on users’ practices and their lived world. Algorithms not only govern what is visible (and, by the same token, what is obscured) and what is valued and noteworthy, but also have the power to enable and assign meaningfulness by managing how information is perceived by users. This incurs a certain knowledge logic which is pervasive in algorithmic culture. This article posits that inquiry into the relation between algorithms and learning needs definitions, as well as the stance of not relying on them too extensively. When asking what an algorithm is, or how to define the process of learning and knowledge acquisition, one must also keep in mind that a definition is mostly blind to the ambiguity and slipperiness of contexts, hiding the gaps that hinder the objective circumscription of a concept. This article proposes to mind these ‘gaps’ by discussing controversies that may (and actually do) arise from contextual or theoretical differences in the interpretation of key concepts such as learning, knowledge and culture. To extend the discussion, I will present alternative material which allows a wider consideration of the concept of learning and emphasises a dimension of learning seldom taken into account: contextual dependence. Since the chief characteristic of data processed by algorithms is their decontextualisation, I will discuss the agonistic relationship that is emerging from learning in the age of algorithmic cultures, so as to explore the possibilities of bridging the gaps and exploiting the valuable resources the Web has to offer to enrich another dimension of learning in our lived world: its contextual relatedness.
Introduction: Algorithms and their conceptual gaps
Since the advent of the public uses of the World Wide Web (hereafter, the Web) in the 1990s, many changes have occurred in the field of information retrieval technologies. Such technologies, now widely called Web search technologies, have seen their applications evolve as the Web went public and began to expand its contents. However, algorithm technologies were developed at a time when information was mostly stored in well-structured form in thematic databases, as in the strategic warfare network ARPANET or the later Usenet, whose applications were restricted to the spheres of engineers and researchers. This is no longer the case. While the Web has extended access to individual users, its contents have grown in conceptual uncertainty. Owing to this decontextualisation (data are no longer structurally or thematically consistent), and in the absence of any built-in indexing system, search engine algorithms have become the only tools that make the rather unruly structure of the Web manageable for human users. Through a witty analogy, Introna and Nissenbaum (2000) describe the Web’s decontextualised condition as being like a library containing all the printed books and papers in the world, without covers and without a catalogue. Still, the Web remains a valuable source of information and can be said to extend the possibilities of learning by exploiting its resources. So, to search and sort information on the Web, we increasingly rely on search algorithms. Algorithms are not merely technological devices; their outputs have impacts and consequences on society and on people’s lives, shaping what and whom they know, what matters to them, and what they encounter and discover. The ability of algorithms to make choices and decisions with little or no human intervention is at the heart of discussions about their potential social power (Beer, 2017).
This implies that we now consider the alleged ‘objectivity’ of algorithms to be more trustworthy than human decisions, or, borrowing Turkle’s (2011) expression, that we expect more from technology and less from each other. In a nutshell, the social power of algorithms seems to be coupled with distrust in human intervention. Algorithmic systems are widely used in fields such as finance, research, education and individual user applications. Because of this diversity of applications, it is difficult to provide a unified view of all the tasks algorithms perform. For the sake of clarity, this article will limit itself to public relevance algorithms (Gillespie, 2014), that is, algorithms used to select what is most relevant from the bulk of digital data and the digital traces of our online activities. The algorithmic assessment of information then represents a procedure with a singularly encoded knowledge logic, which consequently builds on specific presumptions about what knowledge is, what counts as a criterion of ‘relevance’, and how one should identify knowledge’s most relevant components.
The core stance of this article is that algorithms can be (and actually are) tools for learning and knowledge, provided one does not forget the gaps in the conceptual interpretation of learning and knowledge that the very notion of algorithm leaves behind. As Beer (2017) rightly observes, algorithms belong to a wider socio-cultural context that promotes a certain rationality based upon the virtues of calculation, competition, efficiency and objectivity, and they are powerful channels for infusing and circulating these notions and ideals through the social world. To consider such an issue, one must begin with some points of definition to set up the context of the following developments: what is an algorithm, and how should the process of learning and knowledge acquisition be defined? One must also keep in mind, though, that definitions restrain the conceptual breadth of the very thing they define by casting it in an objective mould. A definition hides the gaps and protrusions that hinder the circumscription of a concept. This article posits that inquiry into the relation between algorithms and learning needs definitions, as well as the stance of not relying on them too extensively. It proposes to focus on the above ‘gaps’ by discussing controversies that may (and actually do) arise from contextual or theoretical differences in the interpretation of key concepts such as learning, knowledge and culture. To extend the discussion, I will present alternative material, such as the situated cognition model developed by Brown et al. (1989), which allows a wider consideration of the concept of learning and emphasises a dimension of learning seldom taken into account: contextual dependence. Since the chief characteristic of data processed by algorithms is their decontextualisation, I will discuss the agonistic relationship that is emerging from learning in the age of algorithmic cultures.
In an attempt to define what an algorithm is, and the nature of its influence, Ziewitz (2016) began with the query ‘what is an algorithm?’, and ended by stressing the need to reconsider the question itself. Ziewitz observed that algorithms were rather difficult to understand because of the opacity and inscrutability under which they operate. Opacity of operation tends to be seen as yet another sign of influence and power. One pertinent way to inquire into the social influence of algorithms would be to look beyond the algorithms themselves and explore how the notion, figure or concept of the algorithm is employed as an important feature of their potential social power, and how they come to matter in specific social situations (Beer, 2017; Ziewitz, 2016).
Accordingly, this article shall also aim to (re)consider definitions, keeping in mind Beer’s (2017) distinction between algorithms in themselves and the notion of algorithm, and further inquire into how such a notion of the algorithm is envisioned to promote certain values and forms of calculative objectivity. These values consequently alter how we understand learning and knowledge in an algorithmic context, a point that will be developed in the following section.
Learning, knowledge and calculation
The three concepts of learning, knowledge and calculation each cover a wide range of theories, methods, values and interpretations. Some scholars may gather them as sub-concepts of a united whole; others may separate them or order them in spheres with an altogether different structure. The one thing that may be ascertained is that they are all important dimensions of human endeavour and human formation; one cannot possibly do without any of them. However, when it comes to defining them, there is unfortunately no definition that satisfies the multiplicity of contexts under which they can be understood. In other words, there is no Grand Theory that unifies human knowledge, because knowledge is inescapably bound to be contextualised, in the same way language and society are. Therefore, there will always be gaps in any knowledge, whether common or academic; there will always be a remainder, aspects that are excluded, overlooked or simply ignored, by accident or by design, because they do not fit the current contextual frame of interpretation. There are as many theoretical stances as there are cultures; more to the point, these stances are themselves cultures, with their own worldviews. In this section, I will state the theoretical tradition in which I situate myself, so as to set a clear conceptual field for the later developments.
I shall begin with the concept of learning (to avoid a pretension to define learning itself). I first need to emphasise that I do not limit learning to the process of formal education and schooling. Education is a part of learning, not its whole. I consider learning to be the lifelong process of human formation, which does not necessarily require a coupling with teaching in order to occur. While one can rightly argue that teaching/learning relations underpin the modern educational institution, the fact that they do not constitute the totality of what an individual may learn is often overlooked. That is the reason why this article will not overtly emphasise teaching situations, which would limit the concept of learning to dependence on, and the condition of, prior teaching, the tacit presumption of most contemporary educational research. Of course, this does not mean that I do not acknowledge the value and necessity of teaching; I merely deem it wise not to presume its coupling with learning in the present context and for the sake of this article’s argument. I would rather agree with the social anthropology of learning approach inspired by the situated learning theory of Lave and Wenger (1990), or the situated cognition model developed by Brown et al. (1989), which posit knowledge and learning as a process of social construction. They emphasise that conceptual knowledge cannot be abstracted from the situations in which it is learned (which is too often the case in conventional schooling): it is situated, a product of the context and culture in which it is developed and used. This echoes Polanyi’s (1966 [2009]) tacit/explicit distinction and Ryle’s (1949 [2009]) contrast between knowing how and knowing that. The former argues that knowledge is not always explicit, in the sense that it is not always easy to transfer or explain; it also has an inarticulate component, an unspeakable dimension that is called tacit.
The two dimensions of knowledge acquisition are considered interdependent in making knowledge circulation possible. The latter also emphasises the interdependence of two types of knowledge that cannot be reduced to one another, because knowing how cannot be defined in terms of knowing that, and because knowing that does not by itself lead to being able to use that knowledge (Brown and Duguid, 2000b). Moreover, Cox’s (2004) review of situated learning theories summarises that while the cognitive (‘educational’) model emphasises teaching and considers learning as a mechanistic, cerebral process of transmission and absorption of ideas, where knowledge is merely instrumental to the learning process, the situated model rather focuses on informal learning, interpreting it as being as much about understanding what to do as about how to do it. It is learning in its widest sense, that which is not restricted to learning facts but also includes learning to live, or ‘learning to be’ (Brown and Duguid, 2000b: 128), a dimension of learning that is seldom represented in the knowledge-delivery view of education. Here again, the two forms of learning are not opposed but interdependent. Thus, this article considers knowledge and learning to be first and foremost contextual and cultural concepts, the result of a wide relational and formative process which sometimes, but not necessarily, follows the foreordained rails of a pedagogical curriculum or a developmental model.
The relation with calculation appears when one considers how the above concept of learning relates to learning in an algorithmic context. How does the mediation of algorithms in information retrieval and sorting influence learning and the conception we have of knowledge acquisition? The introductory section of this article discussed the social influence of algorithms insofar as they promote certain values based on calculative rationality. This implies that they will emphasise some kinds of information and discard others, based on specific calculations. Moreover, such a process gains validity and legitimacy through the promise of algorithmic objectivity (Gillespie, 2014). This is the comfort and reassurance given by treating technology as an autonomous and fully objective force that prides itself on its automated mechanical neutrality and is not tainted by the biases of human intervention. It presumes a shift in the rules of legitimate knowledge, whereby the rules of calculative rationality have replaced the self-critical judgement of human reason (Ziewitz, 2016).
The controversy about algorithms and culture: Considering social entanglement
I believe that such a pretension to calculative objectivity, developing in contemporary society as a replacement for long-valued human reason, is a key point for understanding the socio-cultural mechanisms that induced the advent of the social power of algorithms and their specific knowledge logic. This is also a point that can help in exposing the gaps between learning and calculation. I shall consider the following controversial quote from Brown and Duguid (2000b), interpreting it in the context of their ‘situated cognition’ theory, and see how setting it in the context of algorithmic objectivity can alter its meaning. Such an experiment gives an insight into what has been left behind in the frame of the algorithmic knowledge logic and its socio-technological influence. ‘The ends of information, after all, are human ends. The logic of information must ultimately be the logic of humanity. For all information’s independence and extent, it is people, in their communities, organizations, and institutions, who ultimately decide what it all means and why it matters.’ (Brown and Duguid, 2000b: 18 [Emphasis added])
Finally, the choice between the determinism of human reason and that of calculative rationality is a false one. The reason is that, whatever the version, determinism means a no-choice situation. I disagree with both because they shrink the interpretation of socio-technological phenomena, and of the individuals who live and experience them, to a single pattern. Returning briefly to the Brown and Duguid quote, I suggest reading it as follows. Learning and knowledge are neither totally internal nor totally external; they are inter-relational and situated. They are therefore not easy to acquire and transmit, because they need assimilation and commitment. They need to acquire meaning for the learner/knower in order to be inscribed in one’s living context. Hence, information being independent of meaning, such meaning has to be produced by people and their cultures. However, Arendt observes that culture does not fit well in the objective mould of calculative rationality: ‘An object is cultural depending on the duration of its permanence, its durable character is opposed to its functional aspect, that aspect which would make it disappear from phenomenal world through use and wear and tear … Culture finds itself under threat when all objects of the world, produced currently or in the past, are treated solely as functions of the vital social process – as if they had no other reason but the satisfaction of some need – and it does not matter whether these needs in question are elevated or base.’ (Arendt, 1968, cited in Bauman, 2011: 107 [Emphasis added])
Different from the obsession with visibility, but within the same scope, is the obsession with relevance. As discussed above, the notion of algorithm is envisioned to promote objectivity and impartiality. The results of a search query (even though they are different for every user) are deemed to be relevant for that specific user. The top-ranking news, topics, trends or sales are likewise deemed to represent an impartial and objective standpoint. The users, though, are also complicit in this, because a society that obsesses over such top rankings has made those results important, rather than the reverse, namely that the results are obsessed over because they are important (Gillespie, 2014). So the entanglement between algorithms and the behaviour of the users who take them up is expressed here too, as a recursive loop between the algorithm’s calculations and the people who rely on these calculations, or who behave in such a way that the algorithm will show them in their best light. All this is done independently of considerations of relevance or meaning, which seem to be internalised by users as a given. Conversely, in the (fairly possible) situation of users refusing to fit the algorithm into their practices and to make it meaningful (that is, to give it the meaning it is initially deprived of), that algorithm will fail, and with it the social entanglement brought forth by more cooperative or less critical users. Relevance is a fluid concept, because there is no independent metric for evaluating the ‘relevance’ of algorithmic processing output for a given query. At best, what such output approximates is not relevance but popularity and trending. The equation of relevance with popularity (which also roughly reads as the equation of quality with notoriety) is indeed the dominant method of ranking search results in algorithmic systems (Lobet-Maris, 2009; Rieder, 2009).
To consider more thoroughly the several aspects and issues mentioned above concerning culture and relevance, as well as the structural making of the notion of popularity in relation to the former two concepts, I propose to look back to the context of the pre-Web era, which, I believe, provides the socio-cultural basis upon which the present condition has developed and evolved. In the early 1960s, the social historian Daniel Boorstin dedicated a chapter of his book The Image (1961) to the mechanisms of the rise of contemporary fame as a social value and its cultural implications. The chapter is titled ‘From Hero to Celebrity: The Human Pseudo-Event’. In the following, I will summarise Boorstin’s arguments and then discuss their relation to the trending of Web rankings.
A genealogy of trending and fame: Pre- and post-Web
Daniel Boorstin insightfully observed that, in its innermost structure, the legitimacy underpinning the acquisition of the much sought-after social status of ‘fame’ (here understood as a measure of human excellence) was based on some sort of self-legitimating prophecy. He opposes the ancient figure of the hero to the contemporary figure of the celebrity and their respective criteria. Boorstin (1961) defines the hero as a human figure (real or imaginary or both) who has shown greatness in some achievement and is thus admired for his excellence, courage, nobility or exploits. The criteria defining the greatness of the hero precede any measure of excellence: even if those deeds were not valued or admired, the fact of their accomplishment would remain undeniable. A celebrity, on the other hand, is a person whose greatness has been equated with his fame, and fame can be made overnight. Boorstin writes that fame is the kind of thing that can be fabricated at will, and usually deliberately. The criteria defining a well-known man or woman do not require pre-existing deeds of excellence; it suffices for them to be declared famous to become so (and, conversely, it suffices for this declared fame to decline for them to return to their initial anonymity). We can make a celebrity, but we can never make a hero. In a now-almost-forgotten sense, all heroes are self-made (Boorstin, 1961).
Consequently, a celebrity is defined as ‘a person who is known for his well-knownness [sic]’ (Boorstin, 1961: 57). Celebrities are made by people and social forces (show business, the media, etc.) that are independent of their own deeds; the criterion for their intrinsic qualities is rather their lack of qualities. They are themselves but mere products of the twitch of the forces that called them into existence, ‘pseudo-events’ 2, to borrow Boorstin’s expression. If, compared to the hero, their lives are empty of achievement, it is because they are not known for their achievements: their chief claim to fame is their fame itself. The hero was distinguished by his individual excellence or achievements; the celebrity, by his image. In relation to Arendt’s arguments about the value of culture depending on the duration of its permanence, discussed previously, Boorstin also posits the permanence of the hero against the ephemerality of the celebrity. ‘The celebrity (…) is always a contemporary. The hero is made by folklore, sacred texts, and history books, but the celebrity is a creature of gossip, of public opinion, of magazines, newspapers, and the ephemeral images of movie and television screen. The passage of time, which creates and establishes the hero, destroys the celebrity. One is made, the other unmade, by repetition. (…) No one is more forgotten than the last generation’s celebrity.’ (Boorstin, 1961: 63 [Emphasis added])
The controversy about algorithms and learning: Escaping ‘trendy knowledge’
In the light of the arguments of the above section, I shall return to Hallinan and Striphas’ (2016) concept of algorithmic cultures to reconsider and develop its context. They interpreted it as the use of computational processes to sort, classify and hierarchise people, places, objects and ideas, together with the habits of thought, conduct and expression that arise in relation to those processes. Through those processes, what is important, valued and deserving of attention is produced and circulates in/as contemporary culture. However, Boorstin’s analysis of pre-Web society, complemented by a very similar contemporary insight by Rieder, shows that there are two ways to achieve excellence. One is to attain it for a reason that is true, justified and socially recognised, because one cannot deny the factual correctness of the achievement of the deeds that are being valued; the other is to be declared famous (and thus deserving of public attention) by a chief claim to fame whose justification and value criteria remain obscured. Rieder states that algorithmic ranking belongs to the latter. We are provided with news, topics, people and search query results that are trendy rather than relevant. More to the point, those trendy topics have been shown to have a tremendous influence on what we pay attention to, what we know and what there is to know, and even on how learning and knowledge are to be understood (that is, the classification of ideas and the habits of thought, conduct and expression). Such a ‘social entanglement’ is an important factor when studying the social influence of algorithms, but this is not an immutable statement of fate; it is rather a fact to be considered and developed in further research. The social power of algorithms may thus be rephrased as the power of trending (a force which has haunted technological societies for at least a hundred years, if we refer to Boorstin’s account of it).
So the form of knowledge heralded by algorithmic culture can be labelled ‘trendy knowledge’. I would not be so hasty, though, as to call it ‘new’. Trendy knowledge is the wisdom of rankings (here replacing the wisdom of crowds of old) and, more recently, the dictatorship of online visibility. With regard to trending, what is the difference between a best-selling novel, a yellow press headline and a Web ranking? I would say none, because they all rely on trendiness and all have a social influence that gives them the primacy of public attention. I would be inclined to follow Knight’s argument when he writes that we should not assume that these technological changes actually represent new epistemologies, whether positive or negative. Rather, we should seek to understand how informants – including non-human informants – mediate our understanding of the world around us and have always done so (Knight, 2014).
The growing tendency for knowledge and culture to be algorithmically defined and processed also has implications for the way we learn. In discussing the relation between algorithms and learning, it is important not to confine learning to the process of formal education. As I have previously emphasised, the teaching/learning relational dichotomy is deeply rooted at the heart of the educational institution, which implies that it feeds upon pedagogies, curricula and, crudely said, management methods. Long before its institutionalisation, the transgenerational transmission of cultural assets was carried out from the elders to the younger ones, with other social practices in place of curricula. It is obviously not my intent to deny the importance of formal education; my point is rather that learning is not only achieved that way. In overlooking such a fact, we tacitly exclude the possibility of other practices and open yet another conceptual gap in what we precisely intend to circumscribe. As the previous developments have shown, the situated cognition model (Brown et al., 1989) exposes yet another dimension of learning, one that emphasises the active involvement of the learner with the situation and its context, where everything around is a potential source of relation and knowledge, provided the learner is willing. Brown et al. remind us that for this to happen, commitment from the learner is required, for there will be no one appointed to teach what there is to be known, nor even some foreordained knowledge or teaching content to be acquired. That is why learning requires an effort from the learner. I shall call this the cognitive effort, which is, all in all, reducible to the very simple expression ‘the effort of thinking’.
I wish to emphasise, if it still needs saying, that the concept of ‘thinking’ intended here is not thinking in the sense of the Cartesian cogito, but an inter-relational link between the learner and the situation, including other individuals.
Though learning needs commitment, such commitment is nowhere to be found in the present situation, mostly because it has come to be perceived as one of the most fearsome threats. From a million sources, images competing for our attention can enter our minds regardless of quality and regardless of our interest. It is the state of mind of the subject constantly browsing and updating without sense, purpose or commitment (for an extensive discussion of the issue of retreat from commitment in networked relations, see Turkle, 2011). Fisher calls such a condition ‘reflexive impotence’. As a concrete example, he observes that young people experience a world where nothing can be done and retreat into a position of indifference 3 (alternately expressed under the acronym TINA, meaning There Is No Alternative; see Bauman, 2007). They seem to be in a state of hedonic depression, constituted by an inability to do anything else but pursue pleasure (Fisher, 2009, cited in Lovink, 2011), though any choice that is made is likely to be ephemeral, bound to be forgotten and replaced at any moment. This flow of networked relations and links (from Facebook friending to blogs and tweets) is made of ties that symbolise reputation, which can be measured, mapped and ranked.
In the same context, Russell (2011) satirises the idea that education can be reduced to looking up facts with a search engine and spitting them back out, in his SearchReSearch blog post ‘Why knowing search isn’t the same as having an education’. Russell makes an important distinction here: ‘Rapid lookup is a great tool but it is NOT education or learning in any meaningful way. The framework that organizes all these factoids 4 and inter-relationships IS education – it places the bits and pieces into context and lets you understand the structures and the functions of the world’ (Russell, 2011 [Capitals in original])
Minding the gaps: Restoring the Web as a tool for knowledge and learning
Throughout this article, I have presented insights about the gaps that plague a too restrictive (and thus exclusive) interpretation of oft-discussed concepts in the field of algorithm studies. I discussed alternative models of learning and knowledge that fail to be considered, by omission or by design, in the mainstream discourse. This final section summarises the gaps that I deem particularly important in reconsidering learning in the age of algorithmic cultures.
The first gap is that of problem versus solution. There are problems with solutions, problems without solutions, and the gap of solutions without knowing what the problem is. The task of linking things together was previously the exclusive task of human knowledge; but knowledge, as well as the cataloguing schemes made to categorise data in the era of digital information, is hardly the discovery of a true ‘natural order’: its purpose is chosen, not given. In the information sphere, those sorting mechanisms breed cognitive management technologies that are blind to cultural and subjective ambiguity and to the slipperiness of contexts. They feed on dangerous ideologies of cybernetic control that imagine the world to be reducible to a seemingly orderly viewpoint where everything is measurable or calculable (Becker, 2009). We live in a time when people crave only solutions and answers, the easier and quicker the better. The question ‘What is the problem?’ shifts to the unquestioned statement ‘We have a problem, find a solution’. However, issues such as social tensions cannot be resolved merely by declaring them ‘a problem’ (Lovink, 2009). To find a solution to these ‘declared problems’, one has to know first what the problem is, which requires time and the cognitive effort developed above. Unfortunately, our current techno-cultural default is what Lovink (2011) calls ‘temporal intolerance’: the expectation that all information, including any object or experience, be instantaneously available. It breeds reflexive impotence, which defeats any attempt at cognitive effort. As a result, it hinders relation, commitment and the ability to assess and evaluate information, so we find it easier, quicker and socially approved (read: trendy) to rely on an algorithm that will decide on our behalf. However, the blame for this is not to be put on algorithms, but rather on a society mistaking redundancy for affluence and information stacking for knowledge acquisition.
Algorithms themselves have merely been delegated the ungratifying task of sorting out the resulting mess.
The second gap is the crudely binary ‘with or without’ algorithmic tools. Learning in the age of algorithmic cultures does not necessarily mean learning according to the criteria defined by algorithmic culture (which would be like building a digital cage for oneself), nor radically refusing it in the hope of reviving the modern dream of the supremacy of the human cogito. It is rather learning how to use algorithms fruitfully and when to do without them; the proper wording would be with and without them. The mediation between the search engine as a solution provider and the user is obviously not only insufficient, but also inhibitive of the knowledge acquisition process (the ‘cognitive effort’). One cannot build knowledge through mediation with the Web in a situation where the user asks questions and just begs for answers. Lovink (2009) uses the witty analogy of the Web being neither the Delphic oracle nor a vending machine in which you throw a coin and get what you want. Lovink’s strategy includes a reconsideration of media literacy, an important aspect of which is the ability to walk away from the screen. Knowing how to use a search engine also includes knowing when to put it aside, in order to partake in a more context-sensitive relation between the information itself (the device’s output), the user, and what the user will do with this information. It is the necessary connection between decontextualised information and the contextualised learning of the (would-be) knower that will bridge the gap between the two. As Brown and Duguid (2000b) rightly argue, a shift toward knowledge may (or should) be a shift towards people, rather than a concentration on disembodied, information-driven processes that draw attention away from people and, consequently, from the context that surrounds their social activity, the basis of the situated cognition model on which most of the arguments of this article rest.
Information, as a digitally produced decontextualised substance, can become knowledge only when embedded in social relationships and in the larger cultural context that frames it outside the digital network that hosts it. Algorithmic culture is not just about the relation between people and computer screens but also, insofar as it is cultural, about people and knowledge (inside and outside the digital network), and about the relations between people within such a cultural context.
The third gap is that between expectations and the actual social condition. This is an age-old, very powerful cultural force that has taken the different forms of prophecies, utopias and the like through the ages. The modern period had its grand narratives of progress, which were themselves yet another expectation, and a tremendously effective social catalyst. To discuss this third gap, I will borrow from the old wisdom of pre-Web times. The similarities between the structures of Web rankings, celebrities and best-sellers unveil the deeply human (though always mediated) trait of engineering ephemeral fame for equally deeply human purposes. Information technology does not escape this human influence, and comes with its load of expectations and promises, even though many believe these to be built-in components of technological artefacts (consider the example of the sudden rise of popular science magazines in the 1950s, just after the advent of the first data processing devices. Popular science, it is known, is a sort of anticipation of the social uses of technology, often providing an extrapolated image of a glittering future. For an extended discussion on the topic of the promises of information technology, see Argenton, 2017). We can take advice from Boorstin, who warns us that: ‘Never have people been more the masters of their environment. Yet never has a people [sic] felt more deceived and disappointed. For never has a people [sic] expected so much more than the world could offer. (…) By harboring, nourishing, and even enlarging our extravagant expectations, we create the demand for the illusions with which we deceive ourselves. And which we pay others to make to deceive us. (…) We tyrannize and frustrate ourselves by expecting more than the world can give us or than we can make of the world.’ (Boorstin, 1961: 4, 5)
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
