Abstract
Modern payment cards encompass a bewildering array of consumer technologies, from credit and debit cards to stored-value and loyalty cards. But what unites all of these financial media is their connection to recordkeeping systems. Each swipe sends data hurtling through invisible infrastructures to verify accounts, record purchase details, exchange funds, and update balances. With payment cards, banks and merchants have been able to amass vast archives of transactional data. This information is a valuable asset in itself. It can be used for in-house data analytics programs or sold as marketing intelligence to third parties. This research examines the development of payment cards in the United States from the late 19th century to present, drawing attention to their fundamental relationship to identification, recordkeeping, and data mining. The history of payment cards, I argue, is not just a history of financial innovation and computing; it is also a history of Big Data and consumer surveillance. This history, moreover, provides insight into the growth of transactional data and the datafication of money in the digital economy.
This article is part of the special theme on Big Data and Surveillance. To see a full list of all articles in this special theme, please visit: https://journals.sagepub.com/page/bds/collections/hypecommerciallogics
In 2002, an executive at Canadian Tire, a Toronto-based national retailer, made some surprising discoveries about the store’s credit customers. Close analysis of the store’s card data revealed statistical correlations between how customers used their cards and how they paid their bills. Among other things, the executive learned that customers who purchased bird seed and protective felt pads for the feet of chairs tended to be reliable bill payers. By contrast, those who bought cheap motor oil and chrome skull auto accessories were more likely to miss payments. “If you show us what you buy,” the executive boasted to a reporter, “we can tell you who you are, maybe even better than you know yourself” (Duhigg, 2009). A decade later, the American retailer Target drew national attention for a similar feat: its statisticians mined purchase histories to predict which customers were pregnant and to send them baby-related promotions before some had announced their pregnancies (Duhigg, 2012).
When these stories broke, they illustrated the startling power of Big Data. While Target’s program attracted media attention and sparked privacy concerns, both cases showed how massive data sets could be mined for hidden behavioral clues and marketing insights. More importantly, they revealed how payment itself—point-of-sale (POS) swipes and card-based online sales—could be leveraged into consumer data programs. These retailers were not alone. During the first decade of the 21st century, payment systems were increasingly viewed as data goldmines. Unlike cash transactions, which produced only anonymous receipts, card payments yielded transactional data attached to specific individuals. This surplus data, merchants discovered, was its own reward. It could be used in-house, as with Canadian Tire and Target, or it could be reassembled, parsed, and sold as marketing intelligence to third parties. Payment cards were not just convenient stand-ins for cash; they were data-harvesting devices.
Reflecting on recent developments in payment systems, anthropologist Bill Maurer (2014) posed a provocative question: “Is there money in credit?” Given the enormous analytical power of Big Data, credit’s informational yield has begun to compete with its value as a source of fees and interest. “Credit’s function,” Maurer mused, “may no longer be as money, but as a means to ever more consumer data” (513). Maurer’s observation points to a radical shift in the business logic of payment. Though money has always been linked to memory via recordkeeping, whether inscribed in tally sticks or account books (Maurer and Swartz, 2017), the records themselves have been subordinate to the value of the commodities, bills, or cash they represent. More recently, however, electronic payment systems have automated money’s recordkeeping function and vastly expanded the volume and granularity of transactional data that is recorded. As Rachel O’Dwyer (2019) argues, this is not simply an elaboration of money’s historical recordkeeping function; it is central to the emergent platform economy, which depends upon the corralling of user interactions so that their behaviors can be collected and monetized. “What is different here,” O’Dwyer writes, “is that … the monetary record becomes a proxy for the intimate secrets and desires of its users” (10).
Building on Maurer and O’Dwyer’s insights, this article examines the long and complex evolution of the payment card’s surveillance function in the United States from the late 19th century to present. Modern payment cards encompass a bewildering array of consumer technologies, from credit and debit cards to stored value cards and loyalty cards. But what unites all of these financial media is their connection to recordkeeping systems. The simplicity of paying with plastic belies the complexity of information-processing infrastructures—the pipes and rails, in industry speak—that make it possible (Gießmann, 2018; Maurer, 2012). Each swipe, for a $4 coffee or a $40,000 wristwatch, sends data hurtling through invisible networks to instantly verify accounts, record purchase details, exchange funds, and update balances. Where cash facilitated “secrecy” and “concealment,” as Georg Simmel (1990: 385) once observed, payment cards demand the opposite: continuous exposure and confession. The history of payment cards, I argue, is not just a history of financial innovation and computing; it is also a history of consumer surveillance.
This history, moreover, provides insight into the proliferation of transactional data in the modern surveillance economy. One of the defining characteristics of computerized information processing is the production of transactional data. The necessity of data “capture” is fundamental to the logic and design of modern computing (Agre, 1994). While electronic computers introduced new possibilities for capturing data, they also expanded the number and variety of data points generated during interactions themselves, including metadata (Schneier, 2015). This particular technological augmentation and related project of datafication (van Dijck, 2014) made it possible to compile more detailed records and ultimately to discover new uses for data, especially uses unrelated to the context of its original collection (Nissenbaum, 2010). The development of late 20th-century data mining and Big Data analysis would depend upon access to such accumulations of repurposed data (Kitchin, 2014; Mayer-Schönberger and Cukier, 2013). Transactional data would not only feed Big Data programs; it provided the building blocks of the modern surveillance economy. This account shows how data “capture,” a philosophical metaphor (Agre, 1994), has become indistinguishable from surveillance: the systematic, real-world collection of information for the purpose of social classification, prediction, and control.
The discovery of transactional data and the rise of surveillance-based business models are often traced to contemporary tech giants. This perspective is exemplified by Zuboff’s (2015, 2019) account of “surveillance capitalism,” a concept that neatly captures the totalizing scale and inexorable commercial logic of Big Data aggregators and platforms. For Zuboff (2019), the commodification of untapped transactional data—“behavioral surplus”—began in the early 2000s and can be credited to one company in particular: Google. “The discovery of behavioral surplus markets,” she writes, “marks a critical turning point not only in Google’s biography but also in the history of capitalism” (91). This narrative, however, is misleading. The value of transactional data was recognized much earlier, not by Google, but by other capitalists—namely, credit-granting department stores during the 1920s and credit card companies during the 1970s and 1980s. Both retailers and banks mined their payment records for insight into the buying habits, interests, and future profitability of their customers. The history of payment cards thus reveals the deep roots of surveillance capitalism and efforts to transform data into capital (Sadowski, 2019; West, 2017).
By turning attention back on the payment card, this article examines the fundamental relationship between credit and transactional data, and the historical significance of payment cards as data-capturing technologies both before and after digitalization. Importantly, this account shows how payment systems, with their imperative to record all financial activities, provided a conceptual model for transactional data capture in the modern surveillance economy. Indeed, the inspiration for Google’s data-siphoning programs can be traced to electronic payment systems and their automatic generation of transactional data (Varian, 2010). After reviewing existing surveillance scholarship concerning credit and payment cards, I examine historical linkages between identification and recordkeeping in retail charge systems in the United States during the late 19th and early 20th centuries. I then describe technological changes in American payment card systems between the 1970s and 1990s, when electronic billing and POS systems enabled new forms of transactional data capture and suggested new modes of data monetization. Finally, I connect this history to the development of computerized transactional data, including Google’s pioneering data programs, and long-running privacy concerns surrounding electronic payment systems and their contribution to the intensification of consumer surveillance.
Critical perspectives on payment cards and surveillance
Though the history of American credit surveillance is well documented (Lauer, 2017; Olegario, 2006; Rule, 1973; Sandage, 2005), little attention has been paid to the payment card itself.
By the 1920s, charge cards were commonplace in the United States, but their surveillance function did not attract critical attention until the late 1960s. With the growth of computerized recordkeeping and the prospect of a “cashless society,” the risks associated with automating financial activity became apparent to legal experts and privacy advocates. The crux of the issue involved transactional data. In the near future, one lawmaker warned, “every commercial action will become a matter of record,” down to the purchase of “a newspaper or a package of gum” (US Senate, 1968: 276–277). Several years later, sociologist James Rule (1973) marveled at the “staggering proportions” of BankAmericard’s reach in his pioneering study of mass surveillance systems. “Every transaction is a source of information on the card-holder, not only with respect to the amount charged, but also the date, the nature of the purchase, and the merchant who made the charge” (267). The payment card, whether a credit card or an imagined universal identification card, would be the interface through which the details of one’s spending would flow.
The payment card’s surveillance function received additional scholarly commentary during the 1980s and 1990s. Clarke (1988) observed the integration of electronic payment into systems of “dataveillance”; Gandy (1993) addressed the credit card’s contributions to the “corporate data machine”; and Haggerty and Ericson (2000), following Poster (1990), cited credit and financial transactions as constituent elements of the “data double.” More evocatively, Lyon (1994) suggested that the requirements of late 20th-century existence might best be summarized as “body, soul, and credit card” (3). While these studies highlighted the payment card’s role as a technology of identification and control, subsequent researchers have turned attention elsewhere, to the risk-scoring algorithms that govern access to payment cards and banking services (Cheney-Lippold, 2017; Fourcade and Healy, 2013; Lauer, 2017; Marron, 2009; O’Neil, 2016; Pasquale, 2015; Poon, 2007). As critical attention has shifted to hidden algorithms, however, the payment card itself has slipped from view. Now an old technology, the payment card is overshadowed by more encompassing—and voracious—data-harvesting platforms and internet-connected devices that shape social reality and economic opportunity.
While scholars have long observed the surveillance function of payment cards, none have fully explored their historical and conceptual development as surveillance devices. This omission can be attributed to the difficulty of studying payment systems. Access to corporate archives is limited and published industry accounts offer few details about their operation. This account suffers from the same limitations. In what follows, I reconstruct the history of payment surveillance in the United States from trade publications, news reports, congressional testimony, and a small number of secondary sources, including Stearns’s (2011) indispensable history of Visa. By relying on industry accounts, the present study no doubt over-samples success stories and the aspirations of business executives. What is missing—and what might be revealed by archival sources or ethnographic research—is the on-the-ground experiences of payment card developers, computer programmers, and database marketing practitioners as they struggled to make their information-processing systems functional and profitable. While attuned to the false starts and failures that attended the development of the modern payment card, this account highlights the general trend toward more comprehensive consumer data capture. As this history illustrates, the shift from transactional data as a byproduct of payment to transactional data as a valuable commodity in its own right was gradual and uneven.
Consumer credit and the emergence of payment cards
The history of payment cards begins in the late 19th century with the formalization of consumer credit relationships. Contrary to popular mythology, Americans were never debt-free paragons of economic virtue. In fact, many early Americans ran up bills with local merchants and settled accounts when seasonal harvests were sold and currency flowed. These debts were not just for productive capital; they were also for consumer goods, including food, drink, home goods, and personal items (Calder, 1999; Mann, 2002; Woloson, 2010). Until the 1880s, consumer credit was largely informal and rooted in interpersonal relationships. Local merchants knew their customers directly, or through second-hand opinion, and made credit-granting decisions based on this knowledge. Customers judged to be honest and reliable were granted credit; those judged to be deceitful or imprudent were not. As populations grew and became increasingly mobile during the late 19th century, however, it became more difficult to identify newly arrived and unknown customers or to predict their creditworthiness (Lauer, 2017).
While small local businesses struggled to manage the influx of credit-seeking strangers, a new kind of establishment—department stores—embraced them. During the 1880s and 1890s, many large department stores offered credit to their customers as a way to boost sales volume and to bind shoppers to their stores. Credit accounts, touted as a privilege and mark of distinction, fostered goodwill and repeat business. For retailers, however, credit was a massive administrative burden. In particular, credit programs required time- and labor-intensive recordkeeping. Where small businesses scrambled to keep track of neighborhood debtors, downtown department stores invited crowds of strangers into their showrooms, opened thousands of credit accounts, and developed new recordkeeping systems to identify, evaluate, and monitor them (Lauer, 2017). Initially, this was all done without payment cards. Customers identified themselves at checkout counters, sales slips were produced, and transactions were recorded in the store’s credit and billing offices.
Though payment cards did not appear until the early 1900s, they were imagined earlier. In his 1888 utopian novel Looking Backward, Edward Bellamy described a future America in which cash had been replaced by state-issued “credit cards,” cardboard tickets from which the value of each purchase was punched out, leaving a running record of the holder’s spending.
During the early 20th century, when the first payment tokens and cards were introduced in real department stores, they performed the same function—linking identity to accounts and transactions. Facing growing throngs of unknown customers, store clerks could no longer recognize each and every accountholder. “Some form of identification became necessary,” Lewis Mandell (1990) observed in his groundbreaking history of the credit card industry: “hence, the credit card” (17). Shortly after World War I, mass retailers began to issue identification tokens to their credit customers. The earliest known examples are coin-sized metal fobs imprinted with the store’s name and an account number (Mandell, 1990; Stearns, 2011). These tokens served a double purpose. They identified their bearers as legitimate credit customers and eliminated the need to remember one’s account number. Sales slips could be quickly written up—and charges more accurately linked to individual customers—by referring to the numbers stamped in metal. A similar plan was introduced slightly earlier at Western Union, the nation’s dominant telegraph company. In 1914, the firm began issuing paper cards to its customers, pre-printed with their names, addresses, and a line for their signatures. Regarded as “the first consumer charge card,” Western Union’s cards were used to authenticate the identity of their bearers and to bill telegraph messages to their accounts (Stearns, 2011: 7).
During the late 1920s, a new payment card technology, the Charga-Plate, brought identification and recordkeeping together. The system, developed by the Boston-based Farrington Manufacturing Company, consisted of a metal plate embossed with the customer’s name, address, and account number (Hyman, 2011; Mandell, 1990; Stearns, 2011). The rectangular plates were similar to those used in office addressing machines, which likely inspired the Charga-Plate’s double function as a form of identification and a duplicating device. When customers asked to make credit purchases, their Charga-Plate was inserted into an imprinting machine and, with carbon paper placed between the embossed metal plate and the sales slip, a perfect copy was made. Prior to the Charga-Plate, clerks manually wrote each customer’s name, address, and account number (sometimes transcribed from a metal token) on sales slips. This not only took additional time, but also introduced the possibility of illegibility and error. Charga-Plate imprints eliminated these problems, producing fast, legible, and uniform transaction records.
It is worth emphasizing here that early store credit cards were nothing like later universal credit cards. One major difference was their operation as credit instruments. Early store cards did not permit revolving balances—charges were due, in full, at the end of the month—and they did not accrue interest. Another difference was their exclusivity. Each store issued its own cards, which could only be used at that particular store (or chain of stores). Though cooperative networks did emerge in some cities prior to World War II—in Seattle, for example (Mandell, 1990: 18)—generally speaking, department stores and other mass retailers resisted anything like a universal card. Operating a credit program required huge administrative expenses and, without finance charges to offset these costs, it was often unprofitable. Credit granting boosted sales, but more importantly, it fostered bonds of loyalty and kept customers away from competitors. A customer with a Wanamaker’s card, for example, would shop at Wanamaker’s department store rather than another where they did not have a card. This was the real value of early store credit programs.
The first universal credit cards, issued by third-party intermediaries, emerged shortly after World War II. The Diners Club card, introduced in 1950, could be used by cardholders at a variety of restaurants and businesses in its national network. No interest accrued on unpaid balances. Instead, Diners Club profited from monthly cardholder fees and by collecting discounts on purchases from participating merchants (Mandell, 1990; Swartz, 2014). In 1958, American Express and Carte Blanche (a Hilton venture) introduced their own cards, as did Bank of America and Chase Manhattan (Wolters, 2000). Bank of America’s card—renamed BankAmericard in 1966—would become the Visa network. A second national system, the Interbank Card Association, was formed in 1966 and dubbed Master Charge in 1969. It was rebranded as MasterCard in 1979. In addition to these well-known national programs, a number of local banks also experimented with community charge card services during the early 1950s (Vanatta, 2018). Their abandonment highlighted the difficulty of implementing universal cards even on a city-wide basis.
The development of modern national credit card networks was, to say the least, enormously complicated. Beyond convincing thousands of skeptical merchants to accept their cards and enrolling enough creditworthy consumers to use them, credit card networks still faced massive information-processing challenges (Batiz-Lazo and Del Angel, 2018; Evans and Schmalensee, 2001; Mandell, 1990; Stearns, 2011; Zumello, 2011). After authorizing a customer’s card, which might involve consulting a cardholder directory or calling the card issuer, the merchant recorded the credit transaction on a carbon sales slip. At the end of the day, copies of the sales slips were deposited at the merchant’s bank, which tabulated the sums and settled accounts with the card issuing bank (Stearns, 2011: 30–32). It is important to emphasize the sheer volume of paper that was pushed by merchants and data-processing clerks prior to the development of electronic card readers and information-processing technologies.
Electronic payment and the automation of transactional data
Until the 1970s, credit card companies recorded little or no transactional data about their cardholders. During the 1960s, the value of transactional data was overshadowed by operational difficulties, efforts to build out networks amid fierce competition, and rampant fraud. The industry’s survival hinged on solving these basic problems. Though accounting and billing systems were increasingly automated, credit card customers typically received aggregate monthly statements in envelopes stuffed with copies of their original sales slips. This system, called “country club billing,” offloaded the responsibility for itemizing transactions to sellers and provided cardholders with tangible evidence of their purchases. For credit card companies, country club billing was a crushing administrative burden. To cut costs, many switched to “descriptive billing” systems during the early 1970s. Instead of assembling and return-mailing sales slips, credit card companies sent their customers statements with itemized lists of transactions, including the date, amount, place, and brief description of each. Though descriptive billing would become the industry standard, it was initially met with resistance. Many cardholders enjoyed the convenience of reviewing their original receipts and distrusted the credit card company’s cryptic transcriptions.
The development of descriptive billing—and consumer opposition to it—revealed the paradox of transparency and privacy. On one hand, cardholders wanted full information about their transactions, which appeared on the sales drafts. On the other, many Americans worried that computerized businesses, including banks, retailers, and credit card companies, already knew too much about them. While consumers viewed descriptive billing as inadequately detailed, credit card companies argued that the only remedy—short of reverting to inefficient country club billing—was to enhance their descriptive billing systems by collecting more detailed transactional data. The standoff had legal implications too. According to the Truth in Lending Act (1968), credit card companies were required by federal law to provide transaction information in periodic statements sent to their cardholders. The format and granularity of transactional data was the sticking point.
During the early 1970s, most credit card companies did not have the technical capacity to capture more detailed transactional data. Manual keypunching and computer storage were still expensive. Regulatory pressures, however, seemed to encourage more extensive consumer surveillance. As Dee Hock, founder and president of Visa, pointed out, “the plethora of the disclosure and billing laws” were “forcing the excessive accumulation and retention of data which many fear” (Privacy Protection Study Commission, 1976: 21). During this period of legal uncertainty and technological transition, many credit card companies continued their country club systems while experimenting with descriptive billing. American Express, for example, used a “dual system” to hedge its bets, though, as one of its executives noted, “the direction everybody is looking at is electronic.” By then, American Express was in the practice of saving several months of “transactional history” for each cardholder (Privacy Protection Study Commission, 1976: 83, 119, 121). Even the Federal Reserve Board, the agency responsible for enforcing Truth in Lending regulations, supported the shift to descriptive billing. George Mitchell, the agency’s head, argued for the abandonment of country club billing, which he regarded as outmoded and “ill-suited to electronic technology” (US Senate, 1971: 26).
Still, although payment systems were automated during the 1970s and 1980s, and descriptive billing leveraged the technical efficiencies of computing, the data captured by these early systems was not yet valued as an asset in itself. This began to change during the late 1980s. As computerized information fueled the late 20th-century economy, new efforts were made to cull customer data from the payment process and to mine it for actionable insights. Credit cards, with their connection to recordkeeping, were eyed as a natural conduit. Forecasting the future of the credit card, one industry writer highlighted its function as a data-harvesting interface. “Many have called the time we live in the Information Age, but few have looked to plastic cards as bearers of the kind of data this era thirsts for. Yet that is exactly what they are becoming.” Thanks to new information-processing technologies, payment data was easily captured and “whisk[ed] back to retail or bank computers.” Armed with this data, businesses would be able to “find out more about consumers than their own mothers know – or at least find it out sooner” (Stewart, 1989: 62–63). The embrace of transactional data occurred among three main actors in the burgeoning consumer payment ecosystem: universal credit cards, retailers, and credit card processors.
Universal credit cards
Among these, the leading innovator was American Express. In 1988, the universal credit card company integrated state-of-the-art optical processing technology into its billing program. The transition to electronic processing, begun a decade earlier, dramatically reduced the volume of paper and streamlined information flows. But more importantly, it transformed cardholder transactions into valuable data. For all the promise of automated descriptive billing, American Express initially found a more expedient source of extractable data in the sales drafts themselves, which were digitally scanned. “By tracking charge slips, the company’s computers might identify the cardholder as a frequent traveler to Tokyo, say, or an avid tennis player,” a journalist observed at the time.
Unlike the Visa and Mastercard networks, which stitched together a national network of banks, each with its own proprietary data claims, American Express administered its entire system. Its closed loop structure meant that it controlled all cardholder data flowing through its network. By analyzing its own transactional data, American Express sorted its cardholders into tiered segments, from “value-oriented” at the bottom to “Rodeo Drive Chic” at the top. This kind of sociodemographic clustering, including its euphemistic lifestyle designations, had been pioneered a decade earlier by Claritas, a database marketing firm that cross-referenced census data and postal zip codes (Gandy, 1993; Turow, 1997; Weiss, 1988). American Express sold access to its own segmentation program through joint-marketing arrangements with companies that accepted its cards, including American Airlines, Saks Fifth Avenue, Hertz Rent-A-Car, and Marriott Hotels (Crenshaw, 1992). In the cutthroat credit card business, American Express’s target marketing services also helped to justify its more expensive discount rate for merchants, which was a constant source of complaint. While American Express sold access to cardholder lists to third parties, other credit card issuers mined transactional data for their own in-house marketing programs. “The notion of segmenting your cardholders by who they are and what they are doing is key to moving this business forward,” one executive argued, emphasizing the importance of integrating information about “how often and where specific purchases are made” into credit card marketing databases. “The days of offering a promotion to an entire portfolio are over” (Morrall, 1992).
The payment card’s desirability as a data source was reflected in the rise of cobranded cards, which paired credit card companies with a variety of non-bank businesses. Citibank’s frequent flyer program with American Airlines, introduced in 1987, opened the floodgates to such arrangements. The pursuit of data was a driving force behind such alliances (Konsynski and McFarlan, 1990). While credit cards gained access to new slices of creditworthy prospects, their partners—airlines, hotels, automakers, retailers, telephone companies, and even charitable organizations (Evans and Schmalensee, 2001; Mandell, 1990)—gained access to the credit card’s databases. “For companies seeking to leverage a pre-existing data-base, or to create one from scratch, getting a cobranded card in the wallets of their customers” was a winning strategy, a finance executive explained. Cobranded cards allowed “savvy companies to capture data, track customer purchase patterns, and reward their patronage and loyalty” (Drees, 1994). This was the motive behind General Motors’ cobranded card with Household Bank in 1993. As a banking journalist noted, “The plan to use cardholder information to better target automobile promotions illustrates what many say is the real potential of cobranding: tapping the wealth of information that is available about cardholders” (Kreege, 1994). While this information was used to segment and stratify cardholders in corporate databases, the cards themselves—emblazoned with prestige logos, mascots, and markers of tiered exclusivity—outwardly signaled one’s “transactional identity” when flashed at the register (Swartz, 2020).
Retailers
Cobranded or not, the rapid growth of universal credit cards posed a direct challenge to retailers with their own store cards. Under pressure to accept Visa, Mastercard, and American Express cards, retailers risked losing customer connections and the transactional data that went with them. “The proliferation of cobranded and cut-rate bank cards is … producing a wave of fear among retailers that they could lose a key link to their most loyal customers,” an industry writer confirmed. “The trend is awakening many retailers to a need to mine transaction data from their store cards to create tempting merchandise offers for targeted groups of cardholders, ultimately even individual cardholders” (Lucas, 1995). Mass retailers had long recognized the marketing value of customer information in their credit files, but this data was difficult to extract from paper archives. During the 1920s and 1930s, a number of forward-looking stores experimented with information-processing systems to do this. Using modified addressing machines and punch cards, these “customer control” programs allowed retailers to track the purchasing patterns of individual customers and to develop targeted promotions. Though some of these systems were remarkably effective, their complexity and cost prevented widespread adoption (Lauer, 2017).
During the 1980s, these barriers were removed by magnetic-stripe technology and advances in POS systems. Cards equipped with “mag” stripes were encoded with account data (Svigals, 2012), and POS systems recorded item-by-item purchases by scanning product barcodes. When used together, they automatically linked people to the things they paid for. A single store, in some cases, recorded tens of thousands of transactions each day (Ing and Mitchell, 1994). By connecting this data to store credit cards—or check-cashing cards—customer databases, demographic profiles, and more sophisticated target marketing programs could be built. “If you already have a point-of-sale system, it’s easy to take advantage of a data base management system,” a Citicorp executive told a gathering of retailers (Data in store charge card systems, 1982). With these systems, retailers could easily identify their best customers and push special offers to them. “Stroke the best active customer with VIP cards, fashion shows, advance notice of sales, increased line of credit, ‘mystery’ discounts, and other incentives,” the Citicorp executive urged. “Most of all, build your data base constantly. Hunt your files for interrelationships” (Data in store charge card systems, 1982).
POS systems put retailers in a unique position to collect more granular transactional data than credit card-issuing banks. Where banks were only privy to information about the location of transactions, their totals, and general merchant categories, retailers knew which specific items each customer purchased, including their volume and frequency. Grocery stores, in particular, were early adopters of the technology (Turow, 2017). “Money isn’t the only critical commodity supermarkets are taking in at the checkout counter these days,” a
Citicorp’s POS program, named “Reward America,” turned supermarket checkout scanners into consumer surveillance devices through the use of loyalty cards. Customer identities were connected to their purchases by scanning their membership cards at the register. This was not the first loyalty card system—frequent shopper and rewards programs already existed—but Citicorp’s venture was ambitious in its attempt to harvest transactional data at scale and to build a comprehensive national database. It was a Big Data program before the concept had a name: huge, collected in near real-time, and assembled from a variety of disparate inputs (Laney, 2001). To enroll in the program, customers submitted personal information, which was also funneled into Citicorp’s databases. Citicorp hoped to extend its coverage to 25 million American households (Lane, 1990). Participating supermarkets received valuable sales and marketing information, but they were not the biggest winner. Citicorp not only owned the hardware and software; they owned the transactional data. And not surprisingly, they planned to sell access to this data for marketing. “It might, for example, sell Folger the demographics of its buyers and their names or similar information on buyers of Maxwell House coffee,” the
Credit card processors
While credit card issuers and retailers developed strategies for mining their transactional data, a third group, credit card processors, soon joined them. These firms, mostly unknown to the general public, provide the infrastructure—the telecommunication networks, equipment, and systems—through which payments are communicated between merchants and banks (Maurer, 2012). This complex secondary industry emerged during the late 1960s as credit card volume ballooned and intermediaries sprang up to take on specialized support roles. Their influence grew during the 1990s, when banks outsourced more of their backend operations to cut costs. These behind-the-scenes firms would become the backbone of the digital economy. They would also be in a unique position to view the torrents of transactional data that passed through their systems. For precisely this reason, a mid-1990s McKinsey report chided banks for ceding control of their transactional data. Predicting “rapid growth” in the processing industry, the analysts noted that “transaction data could become more valuable to processors than the transactions themselves” (Bowers and Devine, 1995: 83).
One processor, First Data, emerged as the dominant player during the 1990s. Founded in 1969 as the Mid-American Bankcard Association, a processing organization for Visa-issuing banks, First Data was purchased by American Express in 1980. After American Express sold the firm in 1993, it continued to expand through strategic acquisitions, consolidating its position in the market and becoming an industry juggernaut (Evans and Schmalensee, 2001; Levinsohn, 1998). By 1996, First Data controlled 30% of the card processing market in the United States and authorized a third of all credit card transactions at the register (Frank, 1996). Like other late 20th-century information-handling businesses, First Data sought to leverage its information assets in new products and services. In 1996, the company launched an ambitious data-sharing venture called USA Value Exchange (Usave), which pooled consumer data from participating First Data clients. “We can target every aspect of the consumer’s behavior,” Usave’s president noted, touting the program’s utility for merchants. First Data enrolled more than 40 financial institutions, including Advanta and Wells Fargo, 60 million cardholders, and 600,000 merchant locations (Quittner, 1997).
Together, universal credit cards, retailers, and card processors transformed the payment card into a fully electronic surveillance device during the 1980s and 1990s. Not all of their initiatives were successful, however. Most notably, Citicorp’s Reward America program was shuttered in 1991. Despite huge investments—Citicorp dumped more than $100 million into the venture—and great expectations, the supermarket-based POS system failed to win over retailers who lacked the technical sophistication to use the program and resented Citicorp’s ownership of their customer data. For its part, Citicorp overpromised and was unable to “handle all the data that flowed in from scanners recording tens of thousands of shopping carts full of groceries daily” (Bleakley, 1991: A12). The failure of Reward America illustrated the challenge of running a huge transactional data program across multiple chains. “What we saw as an industry,” one retail executive reflected, “was a whole lot of information captured but nobody knowing quite how to use it” (Cantrell, 1992). This was a temporary setback. The value of marketing and predictive insights, mined from millions of individual purchase histories, spurred new efforts. By then payment cards were no longer inert paper cards or metal plates. They were “ever more sophisticated plastic electronic composites” (Deville, 2013: 10) through which consumer behavior was datafied.
Datafied payment and the threat to privacy
In 2010, Google’s chief economist, Hal Varian, gave a keynote address to the American Economic Association. After surveying the internet’s revolutionary development during the 1990s, he turned to a less celebrated topic. “I start with a point so mundane and obvious, it barely seems worth mentioning. Nowadays, most economic transactions involve a computer. Sometimes the computer takes the form of a smart cash register, sometimes it is part of a sophisticated point of sale system, and sometimes it is a Web site. In each of these cases, the computer creates a record of the transaction” (Varian, 2010: 1). From this banality, Varian elaborated a grand vision of automated data collection, personalization and recommendation systems, data mining and continuous experimentation, digital auctions, and differential pricing. “The record-keeping role was the original innovation for adding the computer to the transaction,” Varian explained (2010: 1). Going forward, this logic could be extended. Following the example of modern payment systems, all “computer mediated interactions” could be turned into data-producing events.
Varian’s vision, of course, is Google’s business model. Zuboff (2015, 2019) has termed this model “surveillance capitalism” and elaborated its mechanisms and threats to the individual and society. At the heart of this model is the expropriation of behavioral surplus, the user content and online activity that Google and other platforms and apps mine to build predictive algorithms and lucrative ad-serving programs. Zuboff locates the origin of surveillance capitalism in Google and, indeed, points to Varian’s published works, including his 2010 speech, as a window into the secretive company’s strategic thinking. What is striking about Varian’s Big Data prophecy, however, is not its sui generis quality. Rather, it is his reference to an already-existing precedent: payment systems. Varian (and by extension, Google) was not the first computer visionary to be inspired by financial recordkeeping. When Vannevar Bush laid out his now famous plan for postwar computing in 1945, he turned to “the prosaic problem of the great department store.” In addition to anticipating hypertext and the World Wide Web, Bush (1945) explained how thousands of charge sales could be quickly recorded and processed by using a punch card system that linked customer identification cards with sales records and billing slips (106).
That both Varian and Bush would look to payment systems while imagining the future of computing is revealing. Payment—and credit payment in particular—is fundamentally concerned with storing and recalling information. In a word, payment is concerned with memory.
A decade after Bush’s seminal article appeared, the computerization of payment was already underway. Machine-readable checks were developed in the mid-1950s (McKenney and Fisher, 1993) and by the 1960s, the arrival of a cashless and checkless society had moved from science fiction to technical reality (Batiz-Lazo et al., 2016). Most industry experts predicted that the future of payment would involve identification cards resembling modern credit cards. Transactions would be processed electronically by inserting the card into a computer terminal connected to banks via telecommunication lines (see Diebold, 1967; Reistad, 1967). The motivation for such a system did not come from consumers, who were reluctant to see their cash and checks dematerialize. It came from banks and merchants desperate to solve their own paperwork crises. Though the benefit of transactional data was not initially the focus of such efforts, a Harvard Business School study foresaw its potential. On-line real-time POS registers, the authors observed, could become “source points” for collecting market research. “Cashiers could key in data on the customer’s age, sex, address, or planned use of a product at the same time as the transaction data is entered” (Anderson et al., 1966: 110). In the near future, they predicted, transactional data that flowed through payment systems would become valuable business intelligence.
The development of electronic payment systems was part of a larger public debate over computerization in the United States. By the mid-1960s, government agencies and many businesses were transitioning to computerized recordkeeping systems, raising concerns about the prospect of a totally documented—and totalitarian—society (Westin and Baker, 1972). As the cost of computer storage fell, organizational memories grew exponentially, to the consternation of many lawmakers and citizens. Beyond the Orwellian nightmare of centralized government dossiers, critics pointed to vast accumulations of personal information already housed in computers belonging to credit bureaus, insurance companies, and banks. During an early congressional hearing on computer privacy, Paul Armer, a leading computer scientist at the RAND Corporation, sketched out the privacy implications of the coming cashless society. In the “extreme case,” he explained, “all transactions go through the system and all the details are recorded (who, what, where, and how) … . Such a system would know where we are and what financial activities we are involved in everytime we so much as buy a candy bar or pass through a toll station” (US Senate, 1968: 328). Even Armer, a computer expert and privacy champion, did not believe that such pervasive, granular forms of financial surveillance would actually come to pass. Yet within 20 years, this “extreme” scenario was taking shape.
During the 1990s, a series of privacy controversies revealed the extent to which credit card companies were peddling their cardholders’ transactional data. Just as Citicorp shut down its Reward America program, the company announced plans to sell marketing information gleaned from its 21 million cardholders (Miller, 1991). Citicorp’s new program soon attracted the attention of regulators. “A consumer who pays with a credit card is entitled to as much privacy as one who pays by cash or check,” New York’s state attorney general argued. “Credit card holders should not unknowingly have their spending patterns and lifestyles analyzed and categorized for the use of merchants fishing for good prospects” (Crenshaw, 1992). American Express, long a purveyor of marketing access to cardholder data, signed a consent decree with New York State and agreed to provide notification and opt-out policies for its 20 million cardholders. These data-sharing scandals came on the heels of others involving credit bureaus, list brokers, and banks. “You don’t have to live like a fugitive to protect your financial privacy – but it might help,” the
Public outcry did little to slow the development of transactional data-mining schemes. Marketers coveted the breadth and granularity of card data, and credit card companies were eager to sell it to them. “While demographics are very useful, they can only assist in approximating customer spending,” a writer for a leading database marketing trade magazine acknowledged. “Card data contain the hard facts” (Angel and Hadary, 1998). Going forward, credit card companies formed new partnerships and farmed their data out to upstart analytics teams. “We have models that can know when you’re going to buy a bunch more stuff,” the head of one such software company announced in 1999. The firm’s model employed state-of-the-art neural networking techniques. More significantly, it was built using a room full of “computer tapes containing data on years of transactions for more than 260 million credit-card accounts” (Bank, 1999). With these kinds of programs, the payment card itself became a crucial input in the emergent surveillance economy. Each transaction fed mysterious algorithms that predicted future behaviors and sorted Americans into ever-finer categories of risk and value.
The past future of payment and consumer surveillance
As plastic credit cards proliferated, they became a symbol of instant gratification in late 20th-century America. Easy to acquire and even easier to max out, they seemed to epitomize the seductions and entrapments of modern consumer capitalism. Worse still, there seemed no way to avoid them. By the 1970s, it was already becoming burdensome to shop or travel without one. Credit cards were demanded as a form of personal identification and collateral in a growing range of transactions, from cashing checks to renting a car and registering at a hotel. “Like it or not we are operating in a credit card economy,” one lawmaker lamented during a congressional hearing in 1977. Citing the more than 500 million cards then in circulation, he added, “In fact, it is virtually impossible to function in our day-to-day lives without credit cards … We are a nation of credit card junkies hooked on popping plastic” (US House, 1977: 35). In effect, credit cards became a credential that verified one’s trustworthiness and, with their connection to financial institutions, held individuals accountable.
Twenty years later, the mainstreaming of internet commerce made them even more indispensable. The success of online selling, after all, was not simply a product of new cost-cutting technologies and favorable tax laws. It also depended on payment infrastructures already in place. “The information technology that makes this all possible includes the once-vilified credit card,” a
By the time Sergey Brin and Larry Page founded Google in 1998, a sprawling digital infrastructure already existed to collect and analyze transactional data. During the closing decades of the 20th century, the payments industry created digital networks for tracking the financial activities of millions of Americans, compiling increasingly granular purchase histories, and mining this data for marketing insights. Before Google turned its search engine into a global thought-siphoning machine, payment cards already exposed the interests and concerns of their users through their transactional records. “If the network can follow the trail of your spending,” technology writer James Gleick (1996) warned in an article on digital payment, “it can become more omniscient than a private detective who follows you around with a camera” (50). Consumer credit granters had long recognized the power of their transactional records for predicting customer interests and future behavior, but this information remained frozen in paper until electronic recordkeeping systems automated its capture. The discovery of data surplus and the origin of surveillance capitalism did not begin with Google, as Zuboff (2019) contends. It began with the consumer credit industry, which emerged in the late 19th century and drew millions of Americans into its documentary regime. Buying now and paying later was the primal scene of modern surveillance capitalism. Google and its Silicon Valley competitors merely applied the recordkeeping principles of accounting and payment to all digital interactions.
In 2011, Google announced a new mobile app, Google Wallet, which would allow users to make purchases with a wave of their phone. The tech giant’s move into payments was not motivated by traditional profit generators in banking—float, interest, and fees. Google had its sights set on something else entirely: transactional data. “Google’s interest here isn’t the payments, it’s the data that underlies the complete chain of commerce,” a Forrester Research analyst told the
Though payment cards did not become ubiquitous in the United States until the late 20th century, their recordkeeping protocols and investigatory infrastructures were established much earlier. At the time of Bellamy’s (1888) utopian novel, credit relationships were nothing new, but their institutionalization was. Credit-granting retailers forced their customers to surrender detailed personal and financial information in exchange for the privilege of paying later. Credit transactions, unlike cash payments, were memorialized in the accounting rooms of early department stores. The payment card became a mobile site of consumer surveillance when it became the interface through which personal transactions were recorded, first in ledgers and later in databases. When analysts at Canadian Tire and Target mined their transactional data in the early 21st century, they were leveraging an entire infrastructure—recordkeeping practices, identification systems, information-sharing arrangements, and payment norms—built over more than a century. While the history of payment cards illuminates the long history of surveillance capitalism, it also illustrates the conceptual and operational origins of transactional data in everyday life. Today’s platform surveillance and compulsory sharing can be seen through the historical lens of credit payment and its demands for disclosure, continuous tracking, and evaluation. Long before the internet and its mobile appendages became de facto surveillance devices, this function was performed by a wallet full of payment cards.
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
