Abstract

From a cultural anthropology perspective, the very near future of pulmonary vascular diseases (PVDs) seems as fantastic as their ancient past: They did not exist before, and they soon will cease to exist – possibly.
Cultures all over the world had legends of golden ages in which people had extreme lifespans. We are told that Devraha Baba of India was 513 years old when he died, that Methuselah of Zion lived 969 years, that Tiresias of Greece lived “more than” 600 years and PengZu of China 800 years, that Abdul al-Habashi of Arabia lived 674 years and 100 days, and so on around the world – all according to such legends. As Norris McWhirter, certainly an authority on extreme claims (he and his brother Ross authored the first Guinness Book of Records in 1955), wrote, “No single subject is more obscured by vanity, deceit, falsehood and deliberate fraud than the extremes of human longevity.”[1] What is interesting, however, is not whether those extreme ages were literal or figurative years, but simply the fact that it was universally believed that people used to live far longer than they do now, which presumably would mean the absence of killer diseases, including PVDs (e.g., idiopathic pulmonary arterial hypertension, persistent pulmonary hypertension of the newborn, pulmonary embolism, chronic thromboembolic pulmonary hypertension and pulmonary hypertension associated with congenital heart defects, to name only a few).
Intriguingly, there were numerous other legends about the arrival of diseases on this planet, such as the Greek one of Pandora: She was given a beautiful sealed jar and told never to open it. As curious as a scientist, she opened it, and all diseases and evils were released into the world.
As might have been true in the ancient past, so might be true in the near future: the absence of PVDs and other killer diseases, enabling extreme human lifespans. There is this significant difference, though: For the past, the authorities were tribal mythmakers and shamans, but for the future the authorities are – not will be, but are – scientists, engineers and physicians.
Sandhiva et al. state that devices enabled by nanotechnology “can be used to probe cellular movements and molecular changes associated with pathological states. Nanodevices like carbon nanotubes to locate and deliver anticancer drugs at the specific tumor site are under research. Nanotechnology promises construction of artificial cells, enzymes and genes. This will help in the replacement therapy of many disorders that are due to deficiency of enzymes, mutation of genes or any repair in the synthesis of proteins.”[2]
All this is quite recent. Only 13 years ago, Robert A. Freitas Jr., now a Senior Research Fellow at the Institute for Molecular Manufacturing in Palo Alto, California, authored the world's first detailed technical design study of a “respirocyte,” an artificial red blood cell that will be able to store and transport 236 times more oxygen than natural red blood cells, thus preventing ischemia, for example.[3] Freitas and others have since also begun work on “microbivores,” artificial white blood cells that will attack pathogens far more effectively than do natural white blood cells.
Clearly, in other words, “nanotechnology is spreading its wings to address the key problems in the field of medicine.”[2] But it has already “flown the coop,” so to speak, into popular culture. On 27 December 2011, a thriller titled 77 Shadow Street was published: Dean Koontz's “Pandora” version of a world in which medical nanodevices have run amok. Before the horrors crescendo in the novel, two characters discuss the new field: “A great many scientists and futurists believed that the day was fast approaching when human biology and technology would merge, when all diseases and genetic maladies would be cured and the human lifespan vastly extended by Biological Micro-Electro-Mechanical Systems (BioMEMS). These tiny machines, as small as or smaller than a human cell, would be injected by the billions into the bloodstream to destroy viruses and bacteria, to eliminate toxins, and to correct DNA errors, as well as to rebuild declining organs from the inside out… . They're predicting nanorobotic-augmented blood by 2025, maybe 2030 at the latest. You know what's going to happen if the lifespan of people goes up like to 300 years or something?”[4]
Of course, the point is that, no, we do not know what would happen – and we should know.
A single example from the insect world makes the point in a particularly graphic way that extending lifespans, in this case that of the common housefly, can have ghastly consequences. “If a pair of houseflies mated and all their descendants lived and bred without any losses to predators, then within a single summer season there would be a million, million, million, million flies. That would be enough flies to cover the whole of Australia 11 m (36 ft) deep in flies.”[5]
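The arithmetic behind that claim is simple compounding. As a minimal sketch – with every parameter (fecundity, generations per summer, packed fly volume, Australia's land area) an assumption chosen only for illustration, not figures from the quoted source – the growth from one mated pair can be checked in a few lines:

```python
# Back-of-the-envelope check of the housefly claim.
# All parameters below are assumptions for illustration only.

EGGS_PER_FEMALE = 500        # assumed lifetime fecundity per female
FEMALE_FRACTION = 0.5        # assume half of offspring are female
GENERATIONS = 10             # assumed generations in one summer (optimistic)
FLY_VOLUME_M3 = 1e-10        # assumed packed volume per fly (~0.1 mm^3)
AUSTRALIA_AREA_M2 = 7.7e12   # Australia's land area, ~7.7 million km^2

# One mated pair: a single female founds the lineage.
females = 1
for _ in range(GENERATIONS):
    females *= EGGS_PER_FEMALE * FEMALE_FRACTION  # daughters per generation

total_flies = females / FEMALE_FRACTION  # add the males back in
depth_m = total_flies * FLY_VOLUME_M3 / AUSTRALIA_AREA_M2

print(f"flies after {GENERATIONS} generations: {total_flies:.1e}")
print(f"depth over Australia: {depth_m:.0f} m")
```

Under these assumed numbers the total comes out on the order of 10^24 flies – “a million, million, million, million” – and the resulting blanket over Australia is tens of metres deep, the same ballpark as the quoted 11 m. The point of the sketch is not the exact depth but how unforgiving unchecked exponential growth is: each extra generation multiplies the total by a factor of hundreds.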
In and of itself, longevity is a truly fascinating and complex topic. Why, for example, does a mayfly rarely live more than 24 h? Why does one ocean creature, the worm-like gastrotrich, live a maximum of 3 days, while another ocean creature, the quahog (a bivalve mollusk), lives up to 410 years? And does the ultimate secret of longevity lie not in flesh and blood but in cellulose and chlorophyll, given the fact that the oldest living thing on Earth at present is a bristlecone pine tree in California, nicknamed “Methuselah,” which today has a verified age of 4,844 years?
But more pressing, it seems, are accurate answers to this simple question: What might be the long-term consequences of tampering with a natural lifespan? In the United States today, you can hardly build a fence without first compiling a detailed Environmental Impact Statement. Why are detailed Social Impact Statements not required for things like new devices that will “merge human biology with technology” so as to alter human lifespans? Perhaps the ancient adage for doctors, Primum non nocere, “First, do no harm,” should be amended after all these years to recognize nanotechnology: “First, do no harm. Second, think about the harm not doing harm might do.”
In his introduction to Young Scientist Journeys, a book for scientists aged 12-20 years, Ghazwan Butrous coined the term “the Great Unknown” as the place where everything that remains to be discovered now exists, including “technological innovations that will make today's cutting-edge marvels seem like blunt Stone Age implements.”[6] Today, those marvels seem to be doing precisely that at breakneck speed, which is why the world's most prolific science writer, Isaac Asimov (more than 500 books written or edited), uttered this warning: “The saddest aspect of life right now is that science gathers knowledge faster than society gathers wisdom.”[7]
The latter, society's wisdom regarding science, is, ultimately, also the responsibility of scientists, which is excellent news. All that is required is for scientists to turn back the clock to circa 1960, or to regain their status in those days. Very briefly, in the past 50-some years, scientists have gone, in the eyes of the nonscientific public, from Santas to suspects: from the bringers of great gifts for everyone to the bearers of bad news for everyone; from the marvels of space flight, Teflon and Tang (the just-add-water orange juice) to the experts who keep saying that this causes cancer, that is bad, this is destroying that, these things cause obesity, we will soon be entering an Ice Age, we are now in the Age of Global Warming, and so on ad nauseam. This transmogrification, fueled by the media's preference for bad news (“If it bleeds, it leads”), has done two unfortunate things: It has turned the general public's total trust in scientists into widespread distrust, and it has given rise to “scientist impersonators,” a.k.a. “pseudo-scientists.” Thus, for example, in the minds of far too many people, a religious leader's or a conspiracy buff's views on any given scientific issue are just as valid as a Nobel Laureate's. That is precisely why the Royal Society in 2002 hosted a forum on the topic: “Do We Trust Today's Scientists?” The resounding answer, then and since then, has been: “No.”
One expert, after studying North America, the UK, New Zealand and Japan, stated: “It has been said that public trust in scientists, and indeed in science, is dwindling. People seemingly rely on beliefs rather than fact and in any case do not trust authoritative information. Instead they rely on information presented to them by groups of individuals with whom they share beliefs or ideologies.”[8] Another expert, after studying the United States, stated: “Nationally representative surveys conducted in 2008 and 2009 found significant declines in Americans' climate change beliefs, risk perceptions and trust in scientists.”[9]
In theory, nothing could be simpler: For scientists to regain their former authority on science issues requires only that all the scientist impersonators lose theirs. Yes, that simple theory would be complex to implement. But scientists generally, and medical experts specifically, thrive on solving complex problems. After all, they just came up with a microscopic robot blood cell that can store and transport 236 times more oxygen than a natural red blood cell.
