Abstract

The cover artwork, a distorted highway from Clement Valla’s series Postcards from Google Earth, together with the epigraph from Paul Valéry’s “The Conquest of Ubiquity,” perfectly frames the contents of this short but dense book. Softimage, written by Ingrid Hoelzl and Rémi Marie over a theoretical and practical journey between Vienna, Montreal, Oslo, and Hong Kong, is a speculative effort to rethink the status of the image after the digital. Moved by a genuine will to challenge the trappings of the incessantly debated distinction between still and moving images, as well as the teleological interpretation of realist cinema as the logical progression of photography, the authors follow an intuition about the screened image formulated by Lev Manovich two decades earlier: “such an image is no longer the norm, but the exception of a more general, new kind of representation for which we don’t have a term yet” (Manovich, 2001, p. 103). Hoelzl and Marie propose a new term for precisely this new kind of representation—the softimage—a neologism that resonates with concepts like soft city (Raban, 1974), soft cinema (Manovich & Kratky, 2005), and software (Fuller, 2008), and more generally with a broader movement toward the ontological softening of theories about media, infrastructures, and technologies.
Softimage consists of six chapters—for the most part reworked versions of earlier publications—which are framed by a condensed introduction and conclusion, and enriched by plenty of figures and illustrations, composing a volume about the visual that doesn’t shy away from using photos and artworks as an integral part of its arguments. Different forms of images, from the artworks of Nancy Davenport, David Claerbout, and Thomas Ruff to screenshots, photographs, and elaborations produced by the authors themselves, are woven into discussions drawing on the classics of art history and cultural studies as much as on software studies and computing. Media art, consistently employed as a starting point toward speculative propositions regarding the historical change of the image, isn’t necessarily the central concern of the book, which gradually moves to discussions involving file formats, editing software, mobile apps, digital topography, and computer vision. The introduction of the volume provides readers with an excellent roadmap, summarizing the theoretical progression building up toward the formulation of the softimage and highlighting the continuity between chapters that might at first seem to deal with starkly different topics.
Starting from a discussion of the most basic distinction between printed and projected images, Softimage progresses with a rigorous logic and steady tempo, presenting a procession of concepts flowing one into the other and supporting a larger movement toward further speculations about visuality. The first chapter pairs Walter Benjamin with Ken Burns, and moving stills with praxinoscopes, to open up the definition of “photographic image” to different forms of playback, and dismisses the ongoing debates regarding the exclusive materiality or immateriality of images. Once the image is freed from its stillness and fixity, Chapter 2 goes a step further and discusses the expanded photographic space beyond the limits of any chosen frame, identifying a “desire of endlessness” (p. 40) behind image-making practices like montage, collage, animation, and looping. A similar argument animates Chapter 3, this time questioning the temporality of the photographic image through processes like postproduction and screening, and proposing the concept of the “photographic now” (p. 58) to better characterize the ontological continuity granted to the image by digital signals. With the fourth chapter, Hoelzl and Marie move further away from photography and media art into the realm of computing, identifying an epochal break from geometric projection to the algorithmic processing that today governs images through encoding and compression. Chapter 5 takes the next step and discusses how algorithmic images are operationalized in databases and programmed to be accessed from multiple kinds of devices and screens, while operationalizing users in circular operations of data exchange. The sixth and last chapter brings this progression to a conclusion, positing that the image-screen has become the basic moment of network access to the ubiquitous data-space of contemporary cities and mobile media.
Softimage is a pleasant and engaging journey along the multiple vectors outlining major ontological shifts that the image is undergoing as the digital permeates everyday life. Students and general readers unfamiliar with image theory will find in Hoelzl and Marie’s volume—its first half in particular—an approachable introduction to the major thinkers and concepts in the field, often presented with an innovative twist and in stimulating combinations without unnecessary theoretical convolution. Academics and media art practitioners familiar with the topics at hand will be tickled by the speculative directions proposed in each chapter and challenged to incorporate or disprove the propositions of this volume through further research. Softimage is not flawless: the last couple of chapters at times meander into long-winded descriptions of software technicalities and introduce a few too many undertheorized technological buzzwords, straying from the main threads of the book; the conclusion, two pages long, would have benefited from a more thorough recapitulation of the overall movement carried forward throughout the text. Despite these minor shortcomings, the book proposes an original and coherent argument in a surprisingly compact and accessible format: having left behind stale ontological debates about the materiality and temporality of the image, it is now the task of media theorists to move toward expanded operational relationships between images, screens, data, and networks. As Hoelzl and Marie conclude,
What was supposed to be a solid representation of a solid world based on the sound principle of geometric projection (our operational mode for centuries), a hard image as it were, is revealed to be something totally different, ubiquitous, infinitely adaptable and adaptive, and something intrinsically merged with software: a softimage. (p. 132)
Interview With the Authors
Particularly in the first half of Softimage, you make consistent use of contemporary artworks to argue certain points about image theory. The individual choices of artists and artworks are very fitting, but could the arguments they sustain be applied to contemporary visual art and everyday photographic practices at large? In short, do concepts like “photographic now,” “extended photography,” and so on translate to larger contexts in visual culture? Do you see them as emerging in visual art and slowly transitioning to the realm of everyday life, or the other way around?
We use the artworks only to shed light on some fundamental changes in the ontology of the image brought about by digitalization. The first three chapters of Softimage carry out the deconstruction of the medium specificity of photography: First, the photography/film, still/moving divide is a historical and economic one and is not based on any real ontological difference; second, the frame and its outside, which were part of the definition of the photographic image during the analogue era, are no longer relevant in the digital era; third, the temporality of the digital image (which we call the “photographic now”) is no longer one of pastness (re-actualizing a past event) but one of a radical presentness: it is the 24-times-per-second re-actualization of the video signal displayed on screen. The subsequent chapters are an attempt at reconstructing the image on an algorithmic basis.
“Algorithm” is one of the concepts that recur most in the second half of the book, but it is never really defined or operationalized, except in recursive citations which risk falling into a kind of media mysticism and turning a technical term into an embodiment of a “digital evil.” How is this not the case? If you understand algorithms in the broadest sense as sequences of computational steps producing outputs from inputs, then what are their implications for images, besides the fact that all digital media function through computation? If instead you understand them in the narrowest sense of the sorting and curating delegations typical of Web 2.0, is your position as decidedly pessimistic as it appears from the discussion of Google Street View, in which you postulate that algorithms operationalize their users?
In the book, we draw on a very broad definition of algorithm coming from the algebraic methods of calculation invented by the Persian mathematician Al-Khwarizmi, whose Latinized name gave birth to the word “algorithm.” From this point of view, the algorithm is a computational machine that one can feed with data variables. Once the data are provided, the machine carries out its computation, step by step. In recent developments, the computation is cybernetic, meaning that it takes into account user actions. And it is on this basis that we came up with the idea of “reverse operativity,” of the image operating us. In fact, we are not interested in the so-called “digital evil” because we don’t think that the digital is evil in itself, but possibly its uses are. Throughout the book, we seek to understand and to show what invisible transformations are taking place in our relation with the world, under the cover of an apparent visual continuity. This has nothing to do with judging, but everything to do with paying acute attention to what is hidden behind the image.
The algorithm is everywhere, and it is there for purposes of surveillance and control, but I would not formulate this in terms of a “digital evil” but rather in terms of a new mode of evil that dissimulates itself behind terms such as “service,” “security,” “usability,” “efficiency,” and “transparency” (something that Deleuze pointed out in his “Postscript on the Societies of Control” 25 years ago!). So the question “what do images want?” is no longer a rhetorical one (grounded in the analysis of the image as a rhetorical tool) but a factual one: we increasingly live in a world of computer-computer interaction where images are involved in a multitude of processes that are hidden behind their appearance on screen and their so-called “interactivity.”
A related point: you repeatedly quote Lev Manovich, especially in relation to the necessity of emphasizing “software” rather than “digital media” as an object of study. There is a constellation of terms across Softimage that shift around each other and sometimes seem equivalent: algorithms, software, computation, processing, and programming. Could you attempt a short summary of how these terms orbit around the softimage—a concept in itself related to software?
Through a progressive series of case studies, the book traces the dissolution of the image from hard to soft: from print to screen, from still to moving, from geometric to algorithmic, from output to moment of network access. With animation, postproduction, compression, navigation, and wireless access, taken-for-granted concepts like indexicality, reference, and frame, which characterized the photographic paradigm built on the assumed solidity of the image as a stable representation of a stable world (hardimage), give way to what we call the softimage. It is kind of ironic that the publication of this book coincides with the final 2015 release of the famous three-dimensional (3D) animation software Softimage® (whose product support is ending this year). So while Softimage will be dead by the end of the year, the softimage lives on—not only in the multitude of new image software (programmable images) but in the sense of the image having become a program in itself and of the concept of image having become soft, eluding definitions . . .
One last note on the last chapter, which opens up the theorization of the image most fully to contemporary everyday-life settings. You poignantly argue that image-screens have become “an integral part of an entire range of fixed and mobile electronic devices such as video billboards, info screens, smartphones, tablet PCs and laptops scattered and moving through urban space” (p. 117): I am particularly curious about your definition of urban space here. Given the limited relevance granted to large-scale public screens and the emphasis on mobile and personal media, why did you decide to emphasize the urban character of this sort of mediation? When you claim that “Throughout networked mobile devices, we inhabit the network as well as the physical city” (p. 120), who is the “we” and why is it limited to a global urban environment? The same technologies arguably keep functioning in less urbanized and even rural areas, as paradoxically highlighted by the Google Street View mapping of the Rio Negro, so what purpose does the emphasis on the augmented city, rather than augmented spaces or places, serve in your argument?
Our sixth chapter actually defines the city as a mode of connectivity rather than as a physical territory so that the urban/rural opposition no longer holds. We carefully deconstruct Manovich’s concept of “augmented space” and its derivative, the “augmented city,” arguing that there is no such thing as non-augmented space and that the city (here we draw on Kittler) is always already augmented! Building on Adrian Mackenzie, Jean-Luc Nancy, and William James, we then develop our concept of the urban data-space as a relational space of being-together (of physical and digital architecture, infrastructure, citizens, images, screens) where bodies and signals commute and communicate through wireless networks and mobile devices.
In this chapter, we deliberately place ourselves on the side of William James’s radical empiricist philosophy (and its use by Adrian Mackenzie in his book Wirelessness). In this philosophy, the world is described not as matter but as experience. In other words, the urban is not defined by stone or bricks but in terms of relations. Contemporary space is crossed by other dichotomies, such as real/virtual, present/absent, connected/non-connected, which are more relevant than the urban/rural one. I guess everybody knows that Google’s mapping of the Amazonas has no other goal than to create a publicity effect to mask other, monstrous holes in the map—including the majority of Africa, for instance. What Google wants to tell us and to make us believe is that the world is not only totally mapped but totally connected (to Google) and that Google’s database is the world—which is, of course, an exaggeration. Besides, once the effect is obtained, Google immediately suspends its efforts. Are the villages that border the Rio Negro part of the urban space? Let’s say they were part of it . . . for the length of a promotional campaign.
