Flourishing With Emerging Technologies

Emerging technologies – e.g. autonomous vehicles, gene editing, blockchain, and smart drugs – promise an exciting future. Before this excitement can become a reality, though, important concerns about safety, effectiveness, and equity must first be addressed. For instance, processing Bitcoin transactions is said to already chew up as much electricity as all of Denmark; no cognition-enhancing pharmaceuticals are currently sufficiently safe or effective to be fit for general public use; and to avoid widening the gap between the haves and the have-nots, autonomous vehicles and gene editing technologies would need to be affordable to everyone, not only the wealthy. To these ends, considerable effort and resources are devoted to identifying and ironing out bugs, and to finding ways to reduce the costs of such technologies. The key idea here is that once the bugs are ironed out, these technologies should be made affordable so that everyone can use and benefit from them.

However, what’s often overlooked in the midst of excitement about the promise of emerging technologies is what Tsjalling Swierstra calls “soft impacts”. For instance, if sophisticated AI could extract highly accurate predictions and recommendations from vast quantities of data, might we eventually expect one another (and maybe even ourselves) to comply with AI’s recommendations, and might we thus lose some freedom to make different choices? Or what if we could no longer take the car out for a spin, or hop on a motorbike, because humans were deemed far too dangerous by comparison with autonomous vehicles to be let loose on the roads – is this something that we might come to regret? Once we eradicate all the genetic conditions that we currently fear, might we then move on to changing humans in new ways which we presently find objectionable? And if everyone could afford safe smart drugs that made them more productive and less prone to fatigue, would free market competition eventually lead everyone to use them just to remain competitive, and would we all end up working even longer days? Alas, unintended consequences like these are often harder to imagine, they depend critically not just on the technology itself but also on how people use it, and frequently it is not even clear whether they would be good, bad, or merely different. As a result, voiced concerns about soft impacts tend to be overlooked or ignored, and are sometimes even derided as hysterical “scare-mongering” that trades in unrealistic and unlikely Brave New World and GATTACA dystopias.

However, I will argue that by overlooking, ignoring, and even deriding concerns about potential soft impacts, we effectively relinquish control over how we shall live our lives to the invisible hand of competition fuelled by morally undirected technological progress. Technologies shape the way we interact with one another, how we think of ourselves and others, and even what things we value. Such things are admittedly harder to predict and evaluate than the more obvious “hard impacts” – those which we either explicitly aim to bring about, or can more easily foresee and attempt to avoid – but they matter no less; so if we wish to have a say over them, then we will need to pay significantly more attention to soft impacts than we currently do. To live in a world we have chosen, rather than in whatever world we inadvertently create for ourselves, we need to contemplate the full range of consequences of emerging technologies, not only those that are easy to imagine, predict, and evaluate. To make this task easier, in the final part of this talk I will describe a method for doing precisely that – one which builds on an existing approach in medicine to identifying and safeguarding against the unintended side-effects of medical procedures and technologies.

This is an abstract for a talk scheduled for presentation on October 14, 2019 at the HPS Research Seminar Series 2019 in the Department of Philosophy at the University of Sydney.