London is the AI capital of Europe, says a report commissioned by the Mayor of London, Sadiq Khan. According to another report, this one from the Select Committee on Artificial Intelligence issued by the House of Lords, the UK is, ‘in a strong position to be among the world leaders’ in its development.
If being smart about the implications of this critical technology, having the right skills and ecosystem in place and enjoying a thriving entrepreneurial culture were enough, then the UK would seem to be on a good course to profit massively from this new wave of technological change. But even the most fervent cheerleaders for the UK’s AI industry acknowledge that it lacks something vital to come anywhere near that goal. Scale.
At the recent FutureFest conference in London there were conflicting narratives on offer about the potential role to be played in the future of AI by the UK and Europe; some fairly optimistic, some less so, and some downright scary. Just for a change the scariness of these more extreme scenarios did not involve visions of AI taking all our jobs, or robot overlords running amok and subjugating humankind. Instead, they were about more immediate and practical concerns; things we hear about on the news every day such as data privacy and the global balance of power. And somehow this immediacy made them all the more unsettling.
Blobs of brilliance
Positive notes were struck in a panel session featuring Tabitha Goldstaub (co-founder of CognitionX and Chair of the UK Government’s AI Council), former Deputy Prime Minister Nick Clegg and Geoff Mulgan of NESTA. While acknowledging that Europe doesn’t have the scale and infrastructure of the US or China, Nick Clegg lauded the ‘little blobs of brilliance’ happening in Europe: ‘the challenge is: how do we take those blobs of brilliance and bring them all together?’
‘We don’t need to be the best,’ said Tabitha Goldstaub; ‘just good at what we do – and then start partnering’.
Regulation and assurance were seen as particular capabilities associated with Europe that could give an edge in AI. ‘The lesson of financial services’, said Geoff Mulgan, ‘is that good regulation is good for business’. The rise of London as a globally pre-eminent financial centre has arguably been based on its high standards of regulation (which might come as a surprise to some, but these things are relative, after all).
The Nokia story shows, again, how success in creating a regulatory standard (in this case GSM) can lead to global dominance of an emerging market, with tiny Finland beating the US to the punch in mobile phones. GDPR, the recent data protection legislation that is cluttering all our screens with pop-up privacy notices just now, was seen as an example of how Europe can lead the world in creating appropriate standards for the use of personal data, something on which the future of AI vitally depends.
The argument here as I read it is that AI raises so many ethical issues that it can only be made to function effectively and sustainably within a well-regulated environment that allows the owners of the personal data which feeds it (us) to maintain trust.
As I sipped my flat white in the basement café of trendy Tobacco Dock, venue for the conference, that seemed to make sense. But what if your opponent in trade or global politics doesn’t see things the same way you do? What if they don’t put so much store by the regulations and standards that emanate from a loose (and rapidly loosening) agglomeration of European nation states?
For Evgeny Morozov, the Belarusian-born writer and researcher named by Politico as one of the 28 most influential Europeans of 2018, the ethics debate is merely ‘symbolic’. For him, it’s meaningless to discuss such things without situating them in the context of geopolitical realities. And seen through that lens, AI is less about ethics and more about size.
Dominance in AI is only partly about having the best skills and knowledge in place; it’s also about big data, and access to the biggest datasets. And who has those? Well, Facebook for a start. ‘Facebook has more knowledge about citizens than government’ says Morozov. And as was made embarrassingly clear by Mark Zuckerberg’s appearances in front of the US Congress and the European Parliament this year, Facebook also knows a lot more about what can be done with this data than governments.
Seven of the 10 most powerful companies in the world are tech companies, according to Morozov, data-rich businesses which are all heavily invested in AI. And while we are all very familiar with press stories about the dominance of the so-called FANG companies (Facebook, Amazon, Netflix, Google), this is not solely a story about US companies. The world’s most powerful companies continue to be US-based, but they have powerful challengers such as China’s Alibaba and Japan’s SoftBank. Neither is it solely about companies: state interests play an increasing role as well.
We could see more concentration of power in an AI-dominated market, Geoff Mulgan suggested in his session. But where will that power be concentrated? Not solely with governments it seems: Washington has geopolitical reasons for not cracking down too hard on Facebook, Morozov said. And not solely with Western democracies either.
The internet created a new business infrastructure, and the last few decades have been very good for the US, which has largely dominated that infrastructure. However, AI will become the next infrastructure, and it is by no means certain, Morozov implied, that the US will continue to control it. Russia has data that other countries lack, he said. China offers an example of how a state-driven model can produce power comparable to that of the US tech giants. China, according to Morozov, has masses of data, fewer restrictions on using it, and is investing heavily in AI, buying up the best of Europe’s AI companies, driven in part by a desire to extricate itself from its current dependence on the dollar.
His point about China having fewer restrictions on its use of data is an important one. Looked at in this context, legislation like GDPR becomes less of an advantage and more of a hang-up. A competitive disadvantage, even. And the Chinese have a further thing going for them, according to Nick Clegg, in that they can plan for the long term, while Western democracies are limited by the cadence of four- or five-year electoral terms and (increasingly, in the Trump era) the 24/7 news cycle.
Morozov thinks Europe is too disorganised to offer much competition to these large-scale players in AI, lacking ‘coordination at industry level’. Nevertheless, ‘at the scale this thing is happening,’ his advice for the UK is ‘to stick with Europe or forget about being anything other than a customer. The money involved is huge.’ This point drew appreciative nods from an audience one would assume to be majority Remain voters. But no-one was cheering.
Evgeny Morozov’s view will seem too cynical to some. Indeed, for his warnings about the ‘folly of technological solutionism’ he has been called a neo-Luddite and even a troll. However, his glass-half-empty view of the geopolitics around AI seemed a useful corrective to the enthusiasm and optimism voiced by many of the other speakers in presentations which, when I later examined my notes, turned out to have been heavily laced with caveats, equivocations and rather unrealistic bright hopes about what a forward-thinking government might do to help.
This troubling session seemed to me to raise a lot of questions for those in the People function, for many of whom trust and control are already top-of-mind issues. The difficult problems around personal data that organisations face are only likely to get crunchier, it seems, as AI progresses.
For those who want to know more, we discuss these issues in our latest edition of The Curve Magazine, with a cover article on Blockchain, which many feel offers an alternative and timely solution to a few of the problems of reliable certification and data privacy …