
Apr 10, 2017 | 6 min read


DeepMind, the Startup Laboratory

Tom Morisse

Research Manager, FABERNOVEL

In just a few years since its inception in 2010, DeepMind has unquestionably become the most famous research outfit of the AI landscape. Its rise has been accelerated by its acquisition by Google in January 2014, for $650m. And of course, the victory of its AlphaGo program against one of the very best professional go players, in March 2016, brought DeepMind to the attention of the general public.

More than a consistent provider of headlines, in our view DeepMind epitomizes the current wave of AI research in 3 major ways, all intimately related to its hybrid nature as both startup and research center: it conducts research in an innovative fashion, it explores a wide range of applications, and its achievements give rise to a good deal of existential questioning.

 

1/ DeepMind points to a new (hybrid) way to conduct complex research projects

The best way to describe DeepMind is to call it a startup laboratory. A place with a lofty goal: to “solve intelligence” and “use it to make the world a better place”. In the minds of its founders, DeepMind is the best of both worlds, halfway between academic institutions and Silicon Valley. According to co-founder Mustafa Suleyman, DeepMind is even imbued with the “values of the public sector”.

DeepMind is thus described by CEO and fellow co-founder Demis Hassabis as the “Apollo program of AI”. The company gathers 250 scientists, attracted by a combination of factors: high compensation, the opportunity to tackle a diverse range of really hard problems, and collaboration with top-notch peers from various backgrounds (from neuroscientists to physicists).

DeepMind’s office in London

What is interesting is that Google has left DeepMind’s independence intact since the acquisition. As its annual accounts show, its leeway is substantial: expenses of £38m in 2014 and £54m in 2015, with no revenue whatsoever.

But DeepMind is also hybrid in the scientific approach its researchers pursue regarding AI. This approach is profoundly idiosyncratic, and Demis Hassabis is its exemplar. At 40, he is a former child chess prodigy who went on to found a video game studio and then earned a PhD in computational neuroscience at UCL. This background has shaped the work of DeepMind, centered on what the company calls “deep reinforcement learning”: the combination of deep learning (where the inspiration from neuroscience comes in) and reinforcement learning (hence their long-standing use of video game environments to train their models).

DeepMind Lab, a 3D game platform to train AI models
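To make this recipe more concrete, here is a minimal, illustrative sketch of deep Q-learning, the technique behind DeepMind’s early Atari results: a neural network estimates the value of each possible action, past experience is stored in a replay buffer, and the network is trained on temporal-difference targets. Everything below (the toy ChainEnv environment, the use of PyTorch, the network size, the hyperparameters) is our own assumption for illustration; refinements such as a separate target network are omitted for brevity, and this is not DeepMind’s code.

import random
from collections import deque

import torch
import torch.nn as nn


class ChainEnv:
    """Toy stand-in for a game: walk right along a 10-state chain to reach a reward."""
    def __init__(self, length=10):
        self.length = length
        self.pos = 0

    def reset(self):
        self.pos = 0
        return self._obs()

    def step(self, action):  # action 0 = move left, 1 = move right
        self.pos = max(0, self.pos - 1) if action == 0 else min(self.length - 1, self.pos + 1)
        done = self.pos == self.length - 1
        reward = 1.0 if done else -0.01  # small penalty per step, reward at the goal
        return self._obs(), reward, done

    def _obs(self):
        x = torch.zeros(self.length)
        x[self.pos] = 1.0  # one-hot encoding of the current position
        return x


# Q-network: maps an observation to one value estimate per action.
q_net = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)  # experience replay buffer
gamma, epsilon = 0.99, 0.1     # discount factor, exploration rate

env = ChainEnv()
for episode in range(200):
    obs, done = env.reset(), False
    while not done:
        # Epsilon-greedy: explore at random sometimes, otherwise follow the Q-network.
        if random.random() < epsilon:
            action = random.randrange(2)
        else:
            with torch.no_grad():
                action = q_net(obs).argmax().item()
        next_obs, reward, done = env.step(action)
        replay.append((obs, action, reward, next_obs, done))
        obs = next_obs

        if len(replay) >= 64:
            # Sample a random minibatch of past transitions and update the network.
            batch = random.sample(replay, 64)
            o, a, r, o2, d = zip(*batch)
            o, o2 = torch.stack(o), torch.stack(o2)
            a, r = torch.tensor(a), torch.tensor(r)
            d = torch.tensor(d, dtype=torch.float32)
            with torch.no_grad():
                # Temporal-difference target: r + gamma * max_a' Q(s', a'), cut off at episode end.
                target = r + gamma * (1 - d) * q_net(o2).max(dim=1).values
            pred = q_net(o).gather(1, a.unsqueeze(1)).squeeze(1)
            loss = nn.functional.mse_loss(pred, target)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

Replace the toy environment with an Atari emulator and the small network with a convolutional network reading raw pixels, and you have the essence of the “deep reinforcement learning” agents described in DeepMind’s Atari papers.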

In the end, DeepMind aims to develop “general-purpose learning systems”: models that, unlike numerous predecessors or even competitors, could easily be applied from one problem area to the next. And this opens up a raft of possible use cases.

 

2/ DeepMind showcases the breadth of potential applications for AI

The hybrid nature of DeepMind explains why it is interested in advancing research, but not (only) for theory’s sake. As Mustafa Suleyman explained in a recent interview, to them “making the world a better place” means tackling pressing problems such as climate change or feeding the world’s population. To be clear, they have not offered solutions yet – at least publicly – but that is their endgame. Suleyman’s title, “Head of Applied AI”, is a testament to this drive for tangible results rather than just academic glory.

From lip-reading to speech synthesis to image generation, from Atari games to go, DeepMind’s numerous research projects cover many of the sub-problems that AI has traditionally dealt with.

In March 2016, AlphaGo challenged Korean go master Lee Sedol – and won

So far, the company has addressed two application areas:

  • DeepMind Health is a 5-year partnership with the UK National Health Service (NHS) to analyze patient records so as to diagnose illnesses earlier.
  • DeepMind for Google is the collaboration with various Google teams such as Google Play or Google Ads. In 2016, its headline result was a 40% reduction in the cooling bill of Google’s data centers.

The Streams app, developed by DeepMind, helps nurses and doctors get relevant real-time information about their patients

What is so appealing about DeepMind is that you cannot know where they will strike next, especially with the Google firepower now behind them – that is to say, potential access to large-scale datasets and computing infrastructure.

 

3/ DeepMind embodies all our AI anxieties

With each of its successes, DeepMind contributes to pushing the boundaries of what artificial intelligence can do… which is often interpreted as further eroding what looked like humanity’s preserve. After all, the millennia-old game of go was supposed to be so complex that machines would not be able to challenge us for a few more decades.

In the too-often-heard view of intelligence as a fixed quantity, AlphaGo’s victories inevitably raise the question of our own relevance. Hence the gloomy vision of our future expressed by Gu Li, one of the go masters defeated by a new version of AlphaGo in January 2017: “I can’t help but ask, one day many years later, when you find your previous awareness, cognition and choices are all wrong, will you keep going along the wrong path or reject yourself?”

Moreover, because of both its tremendous ambition and the breadth of its research projects, DeepMind fuels fears of a powerful AI gone bad. If AI really is a “meta-solution” (Hassabis’ words), then it could become a meta-problem too. Indeed, Elon Musk invested in DeepMind partly to keep an eye on AI development – and we can assume that what he saw there was one of the reasons that led him to co-found OpenAI, a non-profit whose mission is to build safe AI and share its benefits as widely as possible.

Through its foray into health care, DeepMind has also stoked fears about privacy, since recent AI models are trained on and applied to massive volumes of data. For instance, the NHS collaboration gives the company access to data on 1.6 million patients, and the project has thus drawn a good deal of criticism from privacy advocates. (This is the dilemma with health care: it is a large sector ripe for improvement, but also one of the most sensitive domains.)

It is noteworthy that DeepMind’s leaders worry about the consequences of their projects, and consider trust in and control of artificial intelligence to be major concerns. The problem is that their efforts to assuage fears are sometimes counterproductive. The most representative – and, we have to admit, slightly hilarious – case is the ethics board created after the acquisition by Google… whose members’ identities the company refuses to reveal.

DeepMind still has plenty of room for improvement in terms of transparency. Until its acquisition by Google in 2014, it kept publicly available information to a minimum – a bare-bones landing page. Even today, DeepMind does not publish a comprehensive list of the researchers who make up its team; all you can do is connect the dots by looking at the authors of released papers.

As DeepMind keeps improving its models and tackling ever more sensitive issues, it will have no choice but to open up. DeepMind can make a major contribution to the maturation of AI, both as a technical endeavor and as a societal debate – a global project in which transparency should be considered a necessary part, not a hindrance.

 


Takeaways

A fascinating aggregate of influences, practices, ambitions and concerns, DeepMind should inspire – in both positive and negative ways – any organization looking to structure a research effort around artificial intelligence:

  1. It’s a team effort! You are not recruiting a series of individuals, but a team of peers, who are looking to tackle tough challenges by collaborating with the best – think about how to reach this kind of “network effect” which attracts the best scientists in a virtuous circle.
  2. Offer a singular research culture. PhDs appreciate environments that combine the best of diverse research “ecosystems”: the ambition and atmosphere of startups, the general-interest, long-term focus of universities, and the impact and outreach of blue-chip companies.
  3. Encourage the research team to cross the application gap. Contemporary AI models can move more easily from basic research to prototyping phases. Take advantage of this to simultaneously (i) set even more ambitious challenges for your AI team and (ii) accelerate the testing and incorporation of research results into your services or processes.
  4. Transparency is non-negotiable. Be ready to give open access to your current research areas, and to openly publish key results as well as the comprehensive roster of your research team. Create an ethics board whose composition and findings will be made public.
