Securing Society 5.0

Overcoming the hidden threats in society’s greatest evolutionary leap

PART 1: AN EVOLUTIONARY INFLECTION POINT

Chapter 1: The Age of Convergence

You can’t have change without resistance.

Sir Isaac Newton knew this when he drafted his laws of motion; you knew it when you got out of bed this morning. It is a guiding principle embedded in the movements of the heavens, in biological evolution, and in the rise and fall of empires. At its most basic, history is simply an account of the perpetual tension between these two great forces: the unrelenting drive towards novelty and the equal but opposite urge to keep things as they are. They influence your decisions; they shape your relationships. They trigger revolutions. We all possess both instincts, yet, when convenient, it is popular to disparage people for appearing resistant to change. We call them philistines or reactionaries; they are labelled provincial or parochial in their thinking; even technically neutral terms like conservative become pejorative when used to describe someone accused of obstructing the march of progress. However, of all the ways in which to be a neophobe, it is only opposition to technological development that earns its own title.

Johannes Trithemius stands accused of being a Luddite: an opponent of technological change. The 15th-century German abbot wrote extensively as a lexicographer and historian and is credited with approximately 30 original works over the course of his scholastic career. During his time in charge of the Benedictine abbey of Sponheim, Trithemius grew the abbey library from about fifty texts to more than two thousand, making it one of the largest collections in Europe at the turn of the 16th century. He was passionate about the written word, using his 1492 work De laude scriptorum manualium — In Praise of Scribes — to exhort monks to stay true to the art of copying sacred texts. In it, he shares his zeal for the subject:

“My tongue is already sticking to my dry palate, my breathing grows weak, and my pen is shaking. Yet, my whole being is filled with the desire for, and the love of, writing.”

This was a man dedicated to the safekeeping and distribution of knowledge, regarding it as a sacred duty to be upheld by anyone in the monastic tradition. But he wrote De laude scriptorum manualium shortly after the invention of Gutenberg’s printing press and dedicated portions of his essay to criticizing the machine. His argument against the new technology sweeping Europe was not just that it would put scribes out of work, which there was no doubt it would, but also that the scribes themselves would suffer spiritually as a result. He clearly saw value in the printing press, using it to print De laude scriptorum manualium, but he also believed that monks who were no longer scribes would become idle and lose their intimate contact with the scriptures and, therefore, God. His fear was not of mere redundancy, but for his charges’ souls. Some modern commentators have labelled Trithemius a technophobe, mocking his apparent antipathy for one of the most celebrated inventions in Western civilisation. It is the same ridicule levelled at Sir William Preece, the 19th-century British telegraphy pioneer who did not foresee the telephone becoming popular with the British public because, unlike the United States, Britain had “a superabundance of messengers, errand boys, and things of that kind.” The president of the Michigan Savings Bank in 1903 is remembered with similar derision for telling Horace Rackham, Henry Ford’s lawyer, not to invest in the Ford Motor Company because “the horse is here to stay but the automobile is only a novelty – a fad.” Steve Ballmer may have had a few controversial moments as Microsoft CEO, but none more memorable than his infamous 2007 declaration: “There’s no chance that the iPhone is going to get any significant market share.”

With hindsight on your side it is easy to jeer at these failed prophecies, and there is no shortage of writers happy to do so. But resistance to new technologies is more common, and often more violent, than many of us realise.

Rise of the cyber-physical system

In the early 19th century, artisans in Britain’s textiles industry revolted against the mechanised forms of manufacturing that would come to define the Industrial Revolution. With their livelihoods threatened by the employment of machines run by cheap unskilled labour, the disempowered workers began a campaign of factory sabotage, destroying textile machines in an uprising that lasted five years. Taking their name from the fictitious insurgent Ned Ludd, these were the original Luddites, the first casualties of what we now call “disruption” and the namesakes of all those who fear disenfranchisement at the hands of technology. This is not, however, a phenomenon of the past. Recent history rings with the echoes of Luddite outrage, perhaps most emblematically in the global protests against the ride-sharing app Uber. Since its launch in 2011, Uber has faced hostility from traditional taxi services in almost every country it has entered. From London to Jakarta, Paris to Rome, Delhi to Johannesburg, professional taxi drivers have taken to the streets in opposition to an apparent capture of their industry by a faceless digital competitor. In numerous countries demonstrations have turned violent, with cars attacked and, in some cases, torched. As recently as April 2019, Germany saw the largest taxi protest in the history of the federal republic as government officials proposed a relaxation of restrictions on ride-hailing services.

Part of the fight has been energized by the familiar industrial-era themes of low pay and poor working conditions, but it has also been fuelled by regulatory disparities, such as the unequal requirements facing incumbent taxi operators who need expensive medallions to work legally in some US cities. Universally, the storms of protest have been a reaction to a sudden and extreme inequality of competitiveness. Uber is a poster child of industry disruption, but to taxi owners it must feel more like the antichrist of competitors, using technology to deliver a lower-cost, higher-value experience that can be introduced into a new market with minimal infrastructural outlay and rapid scalability. This part of the fight, however, is probably futile. Like the Luddites before them, the drivers are trying to resist the unstoppable tide of progress itself. Technological advancement is driving a trend towards hyper-connected systems designed to deliver increasing levels of convenience and value at decreasing costs. The casualties of those cost reductions are inevitably people, which is why it is easy to sympathise with individuals and groups left stranded by seemingly unstoppable waves of change. Of course, to taxi drivers, as it did to 19th-century artisans, the wave must feel more like a tsunami, and of course there is more to come.

In its “Future of Jobs Report 2020,” the World Economic Forum (WEF) estimates that 85 million jobs will be displaced by AI and automation by 2025. The WEF, however, urges those afraid of redundancy not to ‘fear AI,’ suggesting in the same report that 97 million new jobs may emerge that ‘are more adapted to the new division of labour between humans, machines and algorithms,’ representing a net employment gain. The broader economic implications of this trend are also predicted to be net positive. PwC’s recent Global Artificial Intelligence Study suggests that, by 2030, AI will be responsible for a 14% ($15.7 trillion) increase in global GDP, with up to a 26% rise in GDP for some local economies. So, analysis seems to suggest that, despite reactive concerns, the overall effect of technology on employment in the coming decades will be beneficial to humans. The challenge, though, will lie in convincing people that this is the case. While tech could create 97 million new jobs, there is no guarantee that the 85 million people who are predicted to lose their jobs will be capable of performing the new roles. At this stage, plans for mass upskilling and reskilling of employees currently in ‘traditional’ occupations are largely unformed and untested. Additionally, public confidence in the fair and equitable distribution of wealth is at an all-time low, tarnishing any suggestion that increases in global GDP will automatically translate into a better life for the individual. Finally, there is the question of timeline. Even if the technology-enabled restructuring of most industries and jobs equals a net positive for the common human, the results are, by most people’s standards, long-term. The Luddite movement, and all those like it across history, was a response to short-term threat. Unless governments, institutions and businesses learn from the Uber case study and map out a well-governed and well-regulated transition to new ways of work, we are likely to see followers of Ned rising in pockets around the globe as the 21st century evolves.

This is understandable. The resistance to change is something we can all empathise with, even if our willingness to acknowledge it varies, and this tendency seems most acute when applied to technological transformation. In his book, Innovation and Its Enemies: Why People Resist New Technologies, Calestous Juma cites numerous historical examples of forceful opposition to innovation, including the mechanisation of farming, mechanical refrigeration, and the use of technology to print the Koran. Juma argues that skepticism in the face of innovation is natural given the profound impact that disruptive technologies have on our social and economic systems.

“Through improvement and marketing, some new technologies move up the performance ladder to eventually become dominant by displacing previous technologies. But they do more than just replace incumbent technologies. They reorder the socioeconomic terrain by coevolving with new institutional arrangements and organisational structures. It is this wider societal transformation that generates tensions between innovation and incumbency.”

Coevolution is a central theme that we shall return to shortly, but Juma makes another important point: modern technology has moved beyond the limits of individual utility and become a definitively social phenomenon. It has been integrated into almost every aspect of our personal lives, but its real impact is at scale, in all the ways that it connects us and optimises collective human endeavour. As we shall see, this amplification of technology’s impact began with the First Industrial Revolution and continues today at a now-exponential rate of assimilation into our physical, even biological, existence. But the technology of today and the technology that the Luddites raged against are not just different in degrees of sophistication; they are different in nature. While both represent an extension of human potential through mechanisation and automation, the technology of today and of the emerging future sees humans increasingly outsourcing the management of machines to machines, with a soon-to-be-realised objective of mechanical autonomy. The distance between the person and the point of final impact is growing.

In the mid-20th century, the evolution of physical systems led to the emergence of cyber systems and, more recently, cyber-physical systems (CPS).

Cyber-physical systems represent a convergence of the tangible and the digital, an unlocking of the potential—and risks—that become available when we interact with physical objects through computational systems. Edward A. Lee correctly argues that “CPS is about the intersection, not the union, of the physical and the cyber,” though this distinction will be tested more and more as CPS and biology continue to merge.

In the mechanical realm, a cyber-physical system could be a computer chip that monitors a valve in a factory, automatically opening or closing it as needed to maintain proper system pressure. In the biological realm, imagine an automated insulin pump that constantly checks a diabetic’s glucose levels and automatically adjusts insulin delivery without the patient even being aware of a potential issue. The benefits of this marriage between cyber and physical are numerous and far-reaching. Integrating computational components expands what physical devices can do beyond the limits implied by human interaction. Even the most basic single- or limited-purpose computational component added to a mechanical or biological system can potentially monitor and adjust more reliably than a person. CPS can also gather more comprehensive data than distraction-prone humans can, without missing critical readings that could affect decisions.

Components integrated at a minute level also surpass human ability in their potential to network with computers that control larger parts of the overall process. This eliminates the indirect step of humans inputting collected data. In fact, cyber-physical systems offer an almost unlimited ability to network components incorporated at the most basic level with computers engaged at every level of the overall process – wherever in the world those computers may be – offering real-time data on every component of the system and allowing humans to intervene only when necessary.

We might, therefore, define cyber-physical systems as any systems in which embedded computers monitor and control physical processes, with feedback loops where physical processes affect computation and vice versa.
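To make that definition concrete, here is a minimal sketch, in Python, of the factory-valve example above. Everything in it – the drifting pressure readings, the thresholds, the valve behaviour – is invented for illustration; a real controller would talk to physical instruments rather than a simulation.

```python
import random

TARGET_PRESSURE = 100.0  # desired system pressure (illustrative units)
TOLERANCE = 5.0          # deviation allowed before the controller acts

def read_pressure_sensor(current):
    """Simulate the physical process: pressure drifts upward over time."""
    return current + random.uniform(0.0, 3.0)

def control_loop(steps=20):
    pressure = TARGET_PRESSURE
    valve_open = False
    for step in range(steps):
        pressure = read_pressure_sensor(pressure)    # physical -> cyber
        if pressure > TARGET_PRESSURE + TOLERANCE:   # computation decides
            valve_open = True
        elif pressure < TARGET_PRESSURE - TOLERANCE:
            valve_open = False
        if valve_open:
            pressure -= 6.0                          # cyber -> physical
        state = "open" if valve_open else "closed"
        print(f"step {step:2d}: pressure={pressure:6.1f}, valve {state}")

control_loop()
```

The feedback loop is the defining feature: the physical process changes what the computation sees, and the computation’s decision changes the physical process.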

The term “cyber-physical system” has roots going back at least as far as 1948, when American mathematician Norbert Wiener coined the neologism cybernetics to describe what he called “the scientific study of control and communication in the animal and the machine”. He took the term from the Greek word κυβερνήτης (kybernḗtēs), referring to a governor, pilot, or helmsperson of a ship, an apt reference to the work he had done during World War II in helping devise a mechanism for the automatic aiming and firing of anti-aircraft artillery. Cyberspace, another term regularly heard when talking about CPS, emerged later from the imagination of cyberpunk science fiction writer William Gibson, first appearing in his short story Burning Chrome but receiving more exposure in his later novel, Neuromancer. The term “cyber-physical systems” was first used in 2006 by Dr. Helen Gill of the National Science Foundation, who defined CPS as “physical, biological, and engineered systems whose operations are integrated, monitored, and/or controlled by a computational core. Components are networked at every scale. Computing is deeply embedded into every physical component, possibly even into materials. The computational core is an embedded system, usually demands real-time response, and is most often distributed.”

CPS is a broad, umbrella term that encompasses many other, more familiar terms for technologies that connect our physical world with the cyber world. It encompasses Industrial Control Systems (ICS) such as Supervisory Control and Data Acquisition (SCADA) systems, Distributed Control Systems (DCS) and Programmable Logic Controllers (PLC). It also encompasses the Internet of Things (IoT), Industry 4.0 and the Industrial Internet of Things (IIoT), the Internet of Everything and ‘Smart’-everything, and the Fog (similar to the Cloud, but also incorporating the physical objects connected to it). CPS is the broadest of these terms because it focuses on the fundamental intersection of computation, communications and physical processes without suggesting any particular implementation or application.

Also relevant to the focus of this book, with its implications for the future security of a techno-human integrated society, is the definition used by the Cyber-Physical Systems Security Institute (CPSSI) and its member companies: “Cyber-physical systems are physical or biological systems with an embedded computational core in which a cyberattack could adversely affect physical space, potentially impacting well-being, lives or the environment.” This definition recognizes the cyber-kinetic threat of cyberattacks which, as we will see as this book unfolds, is a crucial consideration as we seek to evolve our societies and nations to a higher level of tech-enabled wellbeing.

Cyber-connected objects have become ubiquitous. We already take their existence for granted, even though this was the stuff of science fiction only a few years ago. We’re all familiar with the ‘sexy’ examples of smart connectivity: cars that park themselves and warn you of other vehicles in close proximity as you drive on the highway; homes that change atmospheric conditions, lighting and music to your preferences as soon as they recognize your voice; apps that let you monitor home and vehicle security from the other side of the world; and refrigerators that let you know when you need to stock up on milk.

However, the real impact stretches far beyond lifestyle accessories. Distribution of essential services like power and water is made more efficient by smartification. Sensors detect imminent failures in civil and industrial systems before they happen and dispatch repair personnel to pre-empt more costly interventions. Traffic control systems monitor traffic patterns and adjust traffic light timing to optimize traffic flow.

Cyber-physical systems make this all possible, sensing and influencing the physical world. In commercial terms, this shows up as factories with ‘intelligent’ machines that optimize maintenance and production cycles, or large-scale farming operations that use connected devices to maintain an optimal balance of soil moisture and nutrients.

In civil and social terms this translates into smart cities and improved healthcare. Much of the advanced diagnostic equipment found in today’s hospitals is connected via CPS, but the integration is far more personal than that. Cyber-enabled devices are now implanted in human bodies: pacemakers, heart monitors, defibrillators and insulin pumps enable doctors to remotely monitor patients’ conditions and make adjustments as necessary. People have become cyber-physical systems too.

The starling and the machine

In 2013, George F. Young and colleagues completed a fascinating study into the science behind starling murmurations. These breathtaking displays of thousands – sometimes hundreds of thousands – of birds in a single flock swooping and diving around each other look from a distance like a single organism organically shapeshifting before the viewer’s eyes.

Young and his fellow researchers reference the starlings’ remarkable ability to “maintain cohesion as a group in highly uncertain environments and with limited, noisy information.” The team discovered that the birds’ secret lay in paying attention to a fixed number of their neighbors – seven to be exact – regardless of the size of the flock. It is these multiple connections, like neurons bridged by synapses, that allow the whole to become greater than the sum of its parts.

The murmuration is an elegant example of emergence: the tendency of an entity to develop higher-order properties that are not present in any of its constituent parts. By combining their energies, the individual parts give birth to something that transcends their potential while including their value.
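The seven-neighbour rule is simple enough to sketch in code. The following toy simulation – an illustration only, not the model used by Young and his colleagues – has each bird repeatedly adopt the average heading of its seven nearest neighbours. No bird knows the state of the whole flock, yet a shared direction emerges.

```python
import math
import random

K = 7  # each bird attends to its seven nearest neighbours, however large the flock

def make_flock(n):
    """Birds at random positions with random initial headings (radians)."""
    return [{"x": random.uniform(0, 100), "y": random.uniform(0, 100),
             "heading": random.uniform(0, 2 * math.pi)} for _ in range(n)]

def nearest_neighbours(bird, flock):
    others = [b for b in flock if b is not bird]
    others.sort(key=lambda b: (b["x"] - bird["x"]) ** 2 + (b["y"] - bird["y"]) ** 2)
    return others[:K]

def step(flock):
    """Each bird turns towards the mean heading of its K nearest neighbours."""
    for bird in flock:
        nbrs = nearest_neighbours(bird, flock)
        # average headings as vectors to avoid angle wrap-around problems
        sx = sum(math.cos(b["heading"]) for b in nbrs)
        sy = sum(math.sin(b["heading"]) for b in nbrs)
        bird["heading"] = math.atan2(sy, sx)

flock = make_flock(200)
for _ in range(50):
    step(flock)
# after a few dozen steps the headings typically converge on a shared direction
```

Doubling the flock changes nothing in the rule each bird follows; the cohesion of the whole lives in the connections, not in any individual.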

Cyber-physical systems are another example of emergence. By wirelessly connecting a sensor to computational power on a network, one becomes able to manipulate the physical world without physical action. It’s akin to a thought moving an arm. Like the starling murmuration, the cyber-physical system resembles an organism, with many distributed nodes moving in unison to realize a higher potential. Practically speaking, this innate design approach has many benefits, as suggested by Haque et al.:

(1) Network Integration. CPS allow physical systems to be operated, managed and updated digitally, which also means remotely. Through wireless networks and cloud computing, CPS sensors can collect data which is processed in the cloud, leading to wirelessly distributed actions that have real-world effects. Networking leverages distributed hardware and virtualised services to enable real-time interventions.

(2) Interaction between Human and System. As AI becomes more sophisticated, a debate is intensifying over the potential for machines to make better decisions than humans. AI is already able to predict some forms of disease better than human doctors, but for now humans appear more effective at making nuanced judgements, especially in situations that have ethical or social implications. Cyber-physical systems generate mammoth amounts of data processed at high speed to give humans higher quality information that leads to higher quality decision-making. It is a collaborative relationship that should deliver better results for the individual (think: wearables and wellbeing apps feeding back personal biometric data) and the collective (think: biometric data from sensors measuring driver fatigue).

(3) Rapid iterative improvement. CPS allow all network components, from software to hardware, to be measured, improved and updated, often remotely and in real-time.

(4) Better System Performance. With wireless communication between sensors and cyber infrastructure, systems are able to use feedback more efficiently to improve system performance. Larger and larger portions of these processes are managed autonomously. System maintenance is also improved, increasingly based on predictive analytics.

(5) Autonomy. CPS measure the physical world, but the processing of that data takes place in the cyber realm. Functional decisions based on that data are run by control systems resident in the cloud, functions that increasingly use AI to learn, adapt and predict. As 5G technology becomes more accessible and the network capabilities required for low-latency, high-volume data processing become available, autonomy in cyber-physical systems will become a widespread reality, significantly reducing the need for operational human intervention. For the first time, much-hyped possibilities like networks of driverless cars will become feasible at scale.

(6) Scalability. The combination of networking infrastructure, software management and remote sensors means CPS are inherently scalable across geographies or population sizes. The constant self-learning potential of CPS networks also promises increased efficiencies over time, leading to greater gains from further scaling.

(7) Flexibility. Present systems based on CPS provide much more flexibility compared to the earlier research efforts in wireless sensor networks (WSN) or cloud computing alone.

(8) Optimization. Wireless sensors and cloud architecture do not only encourage better system performance, they also permit optimisation of applications, hardware and services running on the network.

(9) Faster Response Time. Virtualized networks and cloud infrastructure lend CPS greater adaptability. The network location of computing functions can be optimised to drive faster response times and greater system performance. Edge computing is an example of this, with virtual functions disconnected from the core and placed closer to the CPS sensor or device producing the data.
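The response-time advantage of edge placement is, at bottom, latency arithmetic. The sketch below uses illustrative assumptions rather than measurements: light travels through fibre at roughly two-thirds of its vacuum speed, so distance alone puts a floor under round-trip time.

```python
SPEED_IN_FIBRE_KM_PER_MS = 200  # ~2/3 the speed of light; a rule-of-thumb figure

def round_trip_ms(distance_km, processing_ms):
    """Round-trip propagation delay plus the time spent computing a response."""
    return 2 * distance_km / SPEED_IN_FIBRE_KM_PER_MS + processing_ms

# Hypothetical placements: a distant cloud region vs. an edge node in the same city
cloud = round_trip_ms(distance_km=2000, processing_ms=5)  # -> 25.0 ms
edge = round_trip_ms(distance_km=20, processing_ms=5)     # -> 5.2 ms
print(f"cloud: {cloud:.1f} ms, edge: {edge:.1f} ms")
```

For a cyber-physical system that must react within a few milliseconds – a vehicle braking system, say – that difference determines whether the cyber half of the loop can keep pace with the physical half.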

With advantages like these it is easy to see why so many are excited at the prospects of a cyber-physical future. Perhaps this is what Andrew Ure felt when he wrote:

“I conceive that this title, in its strictest sense, involves the idea of a vast automaton, composed of various mechanical and intellectual organs, acting in uninterrupted concert for the production of a common object, all of them being subordinated to a self-regulated moving force.”

The title that Ure refers to is “the factory” but not, as it may sound, the autonomous factory of the 21st century. Writing in 1835, at the height of the Industrial Revolution, Ure was exalting the first systems of mechanised manufacturing. He saw them as “an inevitable step in the social progression of the world.” Though those working in the mills of the time would probably have disagreed with Ure’s utopian view of reality, hindsight suggests that he was right. For good or bad, the Industrial Revolution irrevocably changed the way humans live. Now cyber-physical systems promise to do the same, ushering in a Fourth Industrial Revolution. MIT professor of history Bruce Mazlish called the first Industrial Revolution “a quantum jump in our story,” a time that forever shifted our relationship with machines, even “the human sense of self.”

Many trace our time of cyber-physical integration back to that era of mechanical revolution, but in truth the origins are far older. We are standing on a co-evolutionary arc that begins the moment an early human first used a rock or a branch to aid survival and seems to run forward to a time when the human and the tool are indivisible.

Chapter 2: How we got here

If you’re ever planning a visit to the Swiss city of Neuchâtel, consider timing it to coincide with the first Sunday of the month. On that day, twelve times a year, the dolls in the Musée d’Art et d’Histoire come alive. The musician plays her instrument, the draughtsman draws pictures, including a portrait of Louis XV, and the writer scribes custom messages, pausing only to ink his quill.

These are the famous automata designed and built between 1768 and 1774 by Prussian watchmaker and engineer Pierre Jaquet-Droz. Still fully functional after more than 250 years, these mechanical figurines are considered the ancestors of the modern computer and the oldest “living” examples of a mechanical tradition that dates back to ancient China and continues to the present day. The writer, the most sophisticated of the three automata, consists of 6,000 parts and can be “programmed” through a series of tabs and cams to write original scripts up to 40 characters long. Though far more rudimentary, this design predates by half a century Charles Babbage’s Difference Engine, widely regarded as the first attempt at a mechanical computer. There is a historical thread here that we could follow all the way to the laptop on your desk or the smartwatch on your wrist, but that is not the intention of sharing this story. Yes, the automata of Jaquet-Droz and his fellow builders across the centuries are curious elements of the history of computing, but they also reflect something more fundamental: how technology defines humanity.

It was Benjamin Franklin who famously said, “man is a tool-making animal,” and this ability has long been regarded as one of the key differentiators between humans and the rest of the animal kingdom. More recent research in ethology has shown that humans are not unique in using and making tools; animals from dolphins to chimpanzees to crows are known to adapt objects from their environment to assist them in hunting, feeding and protection. However, no other species is even remotely capable of producing something as sophisticated as Jaquet-Droz’s automata. For Mazlish, this is a more accurate distinction between humans and animals: not the ability to make tools but the ability to make machines. It is for this reason that the Industrial Revolution marks such a profound shift in social evolution. During this era humankind accessed the broader potential of mechanisation and automation beyond the curiosity and entertainment that automata provided. We were able to use machines to amplify the output of labour, a trend that continues to this day, though at vastly greater scale.

The importance of this shift cannot be overstated, nor can the importance of the shifts in which we are currently engaged. The authors argue that no era before has been defined by such a global sense of change and development. The future has never been less clear, yet, at the same time, we’ve never had a more sophisticated ability to understand and interpret evolution in real-time. As Nietzsche points out, “The future influences the present just as much as the past,” which is partly why we have written this book: to join the congregation of voices adding constructively to our understanding of the emerging future. This book looks forward to the society that is to come and the unprecedented extent to which it will rely on technological integration, but in order to fully appreciate the gravity of this assimilation we first need to look backwards.

The case for co-evolution

It is popular to see the timeline of human history as marked by inflection points of major technological advancement. The plough, the printing press, the telegraph, the steam engine, electricity, the telephone, the computer, the internet; each of these breakthroughs is associated with tectonic shifts in how people have lived and worked. However, it would be simplistic to suggest that technological evolution was so linear or so binary. The evolution of human systems and the evolution of technology have been far more interactive than a technologically deterministic view suggests. Rather than technology developing autonomously according to some internal logic and precipitating social change, humanity and its machines appear to have evolved through a far more intimate relationship, one that is fraught with all the usual complications that relationships imply. As Kevin Kelly, futurist and founding executive editor of Wired magazine, says:

“The reason why we have this tension with technology is because we are both the masters and the slaves. We created humanity and we are the created. We make the tools and the tools shape us.”

The tension that Kelly describes is not just personal. To reiterate Calestous Juma’s observation from the previous chapter, “It is this wider societal transformation that generates tensions between innovation and incumbency.” The ambivalence that humans seem unable to shake, regardless of how assimilated we become with our technology, is connected to machines’ apparently unlimited potential for widespread impact. These machines, these tools we have invented, are at once a part of us and at the same time separate from us. They appear to evolve along a path parallel to our own, but intuitively we know it is a different process of development. We are organic creatures evolving through a genetic history with generations between the turn of every page.

Technology, on the other hand, does not evolve this way. Technologies emerge from previous technologies in a more scattered, but ultimately more efficient, fashion, drawing on advances in one, or multiple, domains to drive innovation in another. When Johannes Trithemius spoke out against Gutenberg’s printing press it was not a radically new creation he was targeting; it was not even the first time printing had been performed using a system of individual letter or character blocks. What made Gutenberg’s press so profoundly impactful was the way in which it harnessed existing components to create something more complex and more effective. He created a superior metal alloy more suitable to printing, combined it with ink and paper, and brought these components together in an adapted version of the wine presses traditional to Western Europe. In turn, each of these components has its own history of different elements that have been added together to trigger an “evolutionary” step forward. Economist W. Brian Arthur calls this “evolution by combination, or more succinctly, combinatorial evolution.”

Technologies inherit parts from the technologies that preceded them, so putting such parts together—combining them—must have a great deal to do with how technologies come into being. This makes the abrupt appearance of radically novel technologies suddenly seem much less abrupt. Technologies somehow must come into being as fresh combinations of what already exists.

The pace of combinatorial evolution is necessarily determined by the number and diversity of components available to be combined. The more we have to work with, the more creative possibilities are available to us and, unlike humans, we don’t have to wait a lifetime for every “genetic” iteration of a new technology. So it is that, as human production and improvement of technological components has advanced in every imaginable area of our existence, the number of available components has grown. Increasing the number of components you’re working with vastly increases the number of potential combinations, which is why the rate of technological development is not just increasing, it’s accelerating. By now Moore’s Law is so well known that most of us have become desensitised to its dramatic proposition of exponential growth, but British author and tech philosopher Tom Chatfield “recaptures its shock” by flipping the view:

From the perspective of technology, humans have been getting exponentially slower every year for the last half-century. In the realm of software, there is more and more time available for adaptation and improvement – while, outside it, every human second takes longer and longer to creep past. We – evolved creatures of flesh and blood – are out of joint with our times in the most fundamental of senses.
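The arithmetic behind that acceleration is easy to sketch. If we treat every non-empty subset of a base of n components as a candidate combination – a deliberate oversimplification, since most combinations are useless – the possibility space is 2^n − 1, and it explodes as components accumulate:

```python
def candidate_combinations(n):
    """Non-empty subsets of n distinct components: 2**n - 1."""
    return 2 ** n - 1

for n in (5, 10, 20, 40):
    print(f"{n:3d} components -> {candidate_combinations(n):,} possible combinations")
#   5 components -> 31
#  10 components -> 1,023
#  20 components -> 1,048,575
#  40 components -> 1,099,511,627,775
```

Doubling the number of components does not double the possibilities; it roughly squares them, which is one way to see why combinatorial evolution keeps gathering speed.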

As this narrative plays out, one view is that it will naturally lead to the Singularity, a point in the future at which exponential evolution allows technology to transcend the need for human intervention. Machines will be able to self-design and self-improve at a rate beyond human capability, liberating them from our control. It’s a controversial notion met with equal amounts of fear, disdain and enthusiasm. Fear is usually accompanied by descriptions of dystopian futures in which we are biological slaves to machines, while detractors see the concept of the Singularity as mythological in nature, useful for its metaphoric value but not much more. Best known among the enthusiasts is inventor and futurist Ray Kurzweil, who believes that, by 2045, people will be neurologically integrated with computer technology, creating a cyber-physical human with the brain processing power of a machine.

Will humans be left behind as technological advancement outpaces biological evolution? Samuel Butler thought they might. In 1863, Butler published an article entitled “Darwin among the Machines” in which he proposed the idea that machines undergo their own evolutionary process and could eventually rise to dominate humans. The notion was developed in Butler’s 1872 fantasy novel, Erewhon, one of the first to explore the possibility of artificial intelligence and the associated idea that machines could be dangerous as well as useful. What sets Butler’s commentary apart is its prescience. Though he was writing almost 150 years ago, his concerns were remarkably consistent with apprehensions voiced today. The 2015 Open Letter on Artificial Intelligence, signed by the likes of Elon Musk, Steve Wozniak, Stephen Hawking and leading figures in computer sciences and AI, echoes Butler’s view that AI has the potential to do harm as well as good. The letter acknowledges artificial intelligence’s great potential but also recognises its potential pitfalls, stating the need “to focus research not only on making AI more capable, but also on maximizing the societal benefit of AI.” Similar concerns have prompted a campaign to stop the global robotic arms race to develop fully autonomous weapons that will replace humans with “killer robots.”

We take this time to review humankind’s co-evolution with technology in order to understand our species’ physical, psychological, even emotional relationships with machines across the centuries. Our aim is not to philosophise about the meaning of technology; it is only to highlight the diversity of attitudes that humans have had, and continue to have, towards the machines that make their lives easier and more productive. These are important considerations as we look forward to a cyber-physical society in which judgements need to be made about where to draw the boundary lines between human convenience and human safety. Even more crucially, we need to actively engage with the tension between trust and security, the latter being a historically underexplored theme. While cybersecurity has for some time been interpreted in terms of financial and privacy threats to businesses and institutions, it is increasingly understood to be about far more. Cybersecurity has become cyber-physical security, a discipline directed at ensuring the frontiers of physical life are guarded against incursions through the cyber realm.

In my consultancy work over the past 30 years I have watched conversations around security evolve in response to the major threats of the day and the emerging future, but for some time I have been advocating an even farther-reaching perspective, one that takes into account the evolution of society itself. Cybersecurity has traditionally been a protective investment directed at reactively or proactively shielding corporate or private assets from capture and harm. However, as we move closer to a societal structure in which the cyber and the physical are inseparable, cybersecurity will move beyond being protective to being facilitative. The unimaginable efficiencies and rewards of a techno-human society can only be unleashed if the structures, systems and moment-to-moment actions of that society are secure. In such a context, cybersecurity becomes an enabler of novelty and growth, not just a protector. However, taking this view requires acceptance of even broader challenges. Maintaining sophisticated levels of cybersecurity requires constant change and adaptation to account for the increasing complexity of cyber and cyber-physical systems. But, as we begin to consider the security needs of networked cyber-physical systems that incorporate individuals, homes, companies, cities, nations and the planet, we realise that understanding the tech is not sufficient; we need to appreciate the growing complexity of society itself.

Technology and the stages of social evolution

In Arthur C. Clarke’s 1956 novel, The City and the Stars, the venerated sci-fi author presents two alternative visions of post-apocalyptic society. The first is Diaspar, a technologically advanced city presided over and managed by the Central Computer. The city is maintained by machines directed by the Central Computer’s advanced artificial intelligence. Humans are stored as data in the Central Computer’s memory banks and, on rotation, given physical form when their consciousness is downloaded into a body created for them by the Central Computer. They are effectively immortal.

The various elements of this idea are not unfamiliar to us. So far in this book we have already touched on the not-too-distant reality of smart cities managed and maintained by machines and artificial intelligence synchronised through a massive Internet of Everything. We have also alluded to the transhumanist belief that humanity will soon possess the technology required for whole brain emulation and the ability to upload the human mind to the cloud. And, while we have not yet seen robotics even closely approximating the sophisticated anthropomorphic android bodies imagined in Diaspar, even conservative estimates put us only a few hundred years away from Westworld-like robots to house our consciousness. In fact, the only unlikely aspect of this vision is the timeline – The City and the Stars is set one billion years in the future.

Clarke’s counterpoint to Diaspar is Lys, where humans have chosen to retain mortality, living fairly simple subsistence lives in a physical oasis with machines used only to provide physical labor. This idea, too, is familiar to us, though for different reasons. Even though conceptions of a post-apocalyptic world often see humans returning to the basics, living off the land in pre-modern clusters of civilisation, the reason Lys is easy to comprehend is because humanity has already lived through social structures with many of the same qualities. Large parts of the world still do.

Despite emerging from the creative mind of a science-fiction writer, Diaspar and Lys can be seen as two legitimate, though hypothetical, nodes on a spectrum of social development that dates back tens of thousands of years. It is important that we have some understanding of this spectrum if we are to learn lessons from the past or, more pertinent to this book, make intelligent decisions about the future integration of technology and society.

In sociological terms, a society is a group of people who live in a definable territory and share the same culture. Though this definition is broad enough to capture the essence of what we’re talking about when we discuss society, it clearly doesn’t reflect the ways in which societies differ from one another. Nor does it offer a way to interpret how society has changed in the millennia since we roamed the savannah or how we can expect it to change in the future.

American macrosociologist Gerhard Lenski regarded technology as the main source of societal change, through a process he called socio-cultural evolution. In Lenski’s view, a society’s level of technological development is the primary determinant of whether that society survives and how it changes or evolves. In evaluating technology, Lenski focused on the information available to a society and how the society collectively employs that information. He posited that societies advance by accessing new information and technology, and he defined the different types of society by the technology they used and the social organizations that the technology helped create and sustain.

According to Lenski, there are four primary stages or types of society:

1. Hunting and gathering societies

Estimated to have started about 1.8 million years ago, hunting and gathering is believed to account for about 90% of human history, with some hunter-gatherer tribes still in existence today. In these societies, people make use of extremely basic tools to help them hunt animals – traditionally a male responsibility – and gather wild plants for food – usually the function of females and children. Hunter-gatherers are often nomadic, following migrating animals and seasonal changes in flora, so they seldom build permanent settlements. Due to technological limitations and unpredictable access to scarce resources, these societies tend to be small, incapable of supporting a group of more than 25 to 40 people effectively. With the survival of the tribe dependent on the constant search for food, every person has a part to play and resources are shared equally. As a result, these societies have very low inequality. They also have an extremely short-term focus, living life from day to day. From Homo erectus to Homo sapiens, humans lived in hunting and gathering societies up until about 12,000 years ago, when the domestication of plants and animals led to new kinds of society: horticultural and pastoral.

2. Horticultural and pastoral societies

The shift from hunting and gathering to horticultural societies took place with the development of rudimentary tools that allowed small-scale farming. These included axes for clearing or chopping wood, sticks and wooden or metal spades for digging, and the foot-plough for preparation of small areas of ground for planting. For the first time in human history we see settlements, with groups building permanent or semi-permanent dwellings next to land where they could cultivate plants for food. In horticultural societies we also see the first examples of material surplus – the accumulation of more resources than are needed to feed the population – and the resultant growth in society size. Material surplus also leads to the first instances of specialisation – as the quest for food loses some of its universal urgency, some members of society are freed to pursue alternative activities.

Emerging around 10,000 years ago, pastoral societies rely on the domestication and herding of animals for their survival. Like horticultural societies, they can be semi-settled but they are more inclined to nomadic patterns than strictly horticultural groups as they move from place to place to keep their herds fed. Mobility keeps pastoral societies smaller than horticultural ones and gives pastoralists more adaptability relative to their environment – rather than shaping their environment to suit them, pastoralists respond to the existing environmental conditions. This interactive relationship with nature and the need to know the location of the best grazing lands means information sharing is critical to such groups.

3. Agricultural societies

Starting around 5,000 years ago, agricultural, or agrarian, societies are permanent settlements with much larger populations than previous social systems. These population sizes are enabled by considerable material surpluses achieved through the employment of advanced technology and techniques. Irrigation systems, fertiliser, the animal-drawn plough – developments like these led to massive leaps in food yields and production. As a result, the specialisation of social roles that began in horticultural societies blooms, and we see new occupations solidified in fields such as politics, manufacturing, military and religion. Segmentation breeds stratification and we see social inequality build around differences in access to resources, especially land. Feudalism is common in agrarian societies, but so are more formalised social systems. The family unit becomes less important as society centralises around certain functions like education or religion.

4. Industrial societies

As we’ve already discussed, the Industrial Revolution, starting in the mid-to-late 18th century, saw the most significant social transformation in human history, largely thanks to technology. Combining mechanical systems with advanced forms of energy like steam power allowed for the production of goods at an unprecedented rate and at radical scale. This was the advent of the centralised workplace – the factory – which led to the formalisation of education systems as parents could no longer take responsibility for daily child rearing. The impact of new technologies was also felt in food production. New machines like the tractor and the combine harvester produced huge surpluses that could support even larger populations with even more specialization of social roles. Economic interdependency became more complex, as did political and administrative systems. In order for this more complex society to run effectively, there was a greater need to assert centralized control over everything from the production of goods, to transportation, to agricultural production. Mass production and large-scale agriculture saw the fundamental shift from a subsistence-based economy to a capital-based economy, spawning greater social inequality. It’s easy to see how Karl Marx, born in the middle of the Industrial Revolution, may have been influenced by these social realities. His theory of historical materialism puts a strong emphasis on material conditions as the driving forces of human history and, like Lenski, identifies technology and the modes of production as crucial to social change. Industrial societies see greater fragmentation of the family unit in favour of institutions like public education and healthcare. We also see, for the first time, rampant urbanisation as people flock to towns and cities to find work in factories.

Emerging after Lenski’s time, the more advanced societies of today are post-industrial and postmodern. Post-industrial societies began to take shape in the 1960s, when we saw a shift away from an economy based on raw materials and manufacturing to an economy based on information services and technology. The advent of the computer made possible a new system based primarily on the processing and control of information. In the move from industrial to post-industrial, individual economies shifted from a production focus to more of a service focus, with a concomitant decline in manufacturing. Of course, these societies still require manufactured goods, and therefore post-industrial societies are only possible if they are supported by industrial societies. In a sense, this has led to an externalisation of the inequalities that build up around specialisation in modern societies. Despite the promises of globalisation and the theoretical potential for all societies to prosper as they deliver specialised products and services to the global market, experience has shown us that inequalities between different countries, societies or regional economies are genuine concerns, providing the fuel for a worldwide rise in populism and geopolitical conflict.

These trends are accelerating in postmodern societies driven by the demand for media and consumer goods. With the ubiquity of smartphones and mobile technology, the Industrial Age fragmentation of social institutions like education, religion and family continues to the point of atomisation. Postmodern society is what we find in most of the world’s developed nations, offering clues about the challenges and opportunities we are likely to face as technology assumes an increasingly powerful role in every aspect of our lives. The growing cyber-physical nature of postmodern society can be seen as an outcome of the creative drive towards higher orders of convenience, efficiency and productivity. In that sense, the postmodern society in which many of us live is the product of the co-evolutionary arc we have shared with the technologies we’ve created dating back to the earliest hunter-gatherers. However, this society is also a source of need, a construct that is by its very nature generating new problems. These are complex, highly-integrated challenges that require multi-disciplinary approaches to decipher, let alone solve.

Alan Watkins and Ken Wilber describe these as ‘wicked problems,’ difficult to define and essentially unsolvable in the usual mathematical sense. Wicked problems, they say, are multi-dimensional, have multiple causes, multiple stakeholders, multiple symptoms and multiple solutions. They are by definition complex and difficult to process.

Crucially, these problems are created or exacerbated by people, so it is perhaps darkly ironic that we turn to technology to solve them. Our sophisticated machines have given us the power and capacity to achieve the unimaginable, but also to do immense harm, threatening not only the wellbeing of the biosphere, but our very own survival too. Technology, of course, is agnostic. Computers, smartphones, the internal combustion engine – these creations have no consciousness, no mens rea and, therefore, cannot be held responsible for any negative impacts of their application. Humans, however, have all the faculties of awareness and, therefore, culpability, despite how regularly we are shown to avoid both. As the following section proposes, our species has proved capable of producing challenges of unfathomable difficulty. We may, however, also prove capable of developing the novel thinking and technological maturity required to overcome them.

As we shall see, a healthy future for humanity will require the navigation of some wicked problems, and technology will play an integral part. Those who see the human race as a species in social freefall and technology as the main corruptor can be forgiven their sentiment. It only takes a few minutes of trawling through social media to find behaviour that represents the worst of humankind. But technology is not the enemy and will instead play a crucial role as we tackle the big trials of our times. The key is rather to find ways to integrate technology safely.