The Nuclear Option

Stuart Sharrock
March 10, 2015


I used to be a nuclear physicist; still am presumably. I don’t know how you can stop being a nuclear physicist. Certainly you can stop being a practising nuclear physicist, as I did nearly half a century ago, but you can’t escape from the approach to problems and the outlook on life that a scientific training implants. Nuclear physics in particular instils an analytical methodology tempered by the realisation that the world of quantum mechanics is counterintuitive – you shouldn’t assume anything in a probabilistic world where entities can be in multiple places simultaneously.

But for me nuclear physics brought more. I was not only fortunate enough to be taught by some of the leading pioneers in the field but also grew up within an academic community of physicists who had been involved with the Manhattan Project, the research and development project that produced the first atom bombs during World War II. That exposed me to the consequences that can result when the relationship between the scientific community and the world of national and international politics comes under stress.

Many scientists involved with the Manhattan Project expressed their fears and uncertainties about the effects of atomic warfare long before the United States dropped the first bomb on Hiroshima. They even argued that control of nuclear energy should be taken out of the hands of the state, and made strenuous attempts to prevent atomic warfare from ever taking place. Not only did they fail, but many were subsequently persecuted for their stance.

The experience was traumatic. Many of the scientists felt betrayed, some had nervous breakdowns, a few turned to espionage, and a number committed suicide. They had made monumental advances in understanding and created phenomenal capabilities but had lost all control over the use that could be made of their discoveries.

One positive consequence of that experience was that the research community resolved to create an international laboratory to bring together scientists from different nations to collaborate on further research in nuclear physics. The result was CERN, a European laboratory that brought together research groups from East and West – at the height of the Cold War!

When scientists understand the potential threats posed by their discoveries and policy makers do not, there is a dangerous disconnect – joint ventures such as CERN go some way towards countering such dangers. They foster mutual understanding between scientists from different nations. But there is still a need for dialogue between scientists and policy makers.

Net neutrality

Another dangerous disconnect is apparent today. The principle that internet service providers and governments should treat all data on the internet equally – the heart of the so-called net neutrality debate – has been nothing if not controversial. An astonishing 3.7 million comments on the issue have been filed with the FCC. But do politicians and policy makers really understand these network issues? The arguments presented in the net neutrality debate indicate clearly that they do not. And this is hardly surprising: members of Congress with a background in or knowledge of science are exceedingly rare animals – some observers claim the species is already extinct. Engineers, who have the most to say about how to manage networks, long ago left the debate, leaving networks to be designed by lawyers.

The internet was designed to be anonymous. Communications engineers understand the threats to privacy and security posed by a network built deliberately without an identity layer. Politicians and policy makers are unconcerned by such technical detail; they have a different perspective and a very different agenda. As a result, their relationship with the communications community is becoming increasingly strained.

Does this matter? I think it does. The need for an open interchange of ideas between the communications industry and policy makers has perhaps never been greater. Recent disclosures about the activities of government security agencies and massively resourced cybersecurity attacks illustrate the urgent need for such a dialogue.

Is the communications industry facing its own Manhattan Project moment? Perhaps not now but it could be just around the corner.

The IoT

The World Wide Web was invented at CERN. It launched the internet from being an academic research network into a global public resource. The subsequent impact of the internet has already been phenomenal but is now set to reach unprecedented levels as the Internet of Things adds new dimensions of interconnectivity.

Citizens in the UK have been told not to worry. The very vastness of the data collections concerned, we are assured, provides a comforting level of protection. No-one, say the politicians, could possibly ever have the time to trawl through all that lot.

That's not reassuring at all. People will not do the trawling; machines will. And the machines will analyse the data using increasingly complex algorithms designed by other machines deploying increasingly sophisticated artificial intelligence. What rules such systems will obey, and what actions they will trigger, should surely be a matter for public debate.

And the emphasis should be on debate. The potential benefits of the IoT are so enormous that people waving warning flags can be accused of scaremongering, of raising the spectre of an Orwellian society in an attempt to stifle the adoption of IoT solutions. This is not what we want. What we are highlighting is the need to educate people about responsible risk assessment. If we fail to remain alert to the dangers of unethical behaviour then we sink to the level of certain paranoid psychopaths working in the investment banking industry – and just look at what happened there.

The ability to observe, capture and store almost every interaction between humans and devices has significant consequences for privacy and security. The ability and intent to extract correlations by mining vast quantities of data has significant implications for society and justice. But correlations do not demonstrate causality and a propensity to commit an offence is not a crime. Or is it? The internet lacks an identity layer; the IoT lacks an ethical layer.
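To make the correlation-versus-causation point concrete, here is a minimal illustrative sketch in Python (the data streams and their names are invented for this example, not drawn from any real IoT deployment): two quantities that merely share a common trend over time will show a near-perfect statistical correlation when mined together, even though neither has any causal influence on the other.

```python
import numpy as np

rng = np.random.default_rng(42)

# Two data streams with no causal link between them: each is driven only
# by a shared upward trend over time (t), plus independent noise.
t = np.arange(1_000)
stream_a = 0.5 * t + rng.normal(0, 20, size=t.size)   # hypothetical sensor readings
stream_b = 1.2 * t + rng.normal(0, 50, size=t.size)   # hypothetical, unrelated counts

# Mining the two streams together reports a near-perfect correlation,
# even though neither stream causes the other.
r = np.corrcoef(stream_a, stream_b)[0, 1]
print(f"Pearson correlation: {r:.3f}")   # close to 1.0, despite zero causal connection
```

A data-mining system that treated such a correlation as evidence of a causal link – let alone of a propensity – would be drawing exactly the kind of conclusion that ought to be subject to public scrutiny.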

I would argue that these issues deserve serious consideration. It would be irresponsible to enter the realms of complex algorithm-driven business models devised by increasingly sophisticated artificial intelligence and delivered by robots without being clear about what society regards as acceptable and what it does not. Remember: machines have no ethics.

Stuart Sharrock has been working as an analyst and consultant in the telecommunications industry for the past three decades. He holds a BSc in Natural Philosophy from Edinburgh University and a PhD in Nuclear Physics from University College London, and has conducted research in nuclear physics in the Soviet Union, Switzerland, the UK and the USA. He lectured in physics at UCL before entering the commercial world as the Physical Sciences Editor of Nature, the world's leading peer-reviewed scientific journal, and subsequently worked as a publisher of scientific books and journals for a number of major publishing houses before becoming an independent analyst and consultant. Stuart is a member of the IEEE and the Managing Editor of the IEEE IoT eNewsletter.