
Feb 24, 2017 | 4 min read

Tech

AI for Dummies

Tom Morisse

Research Manager


FABERNOVEL

Straight to the point: we should first give an overall definition of what artificial intelligence is, right? The problem is, no single, clear-cut definition exists among the community of AI researchers! (Not least because understanding and defining intelligence itself is still an ongoing endeavor.)

3 ways to define artificial intelligence

Indeed, there are several ways to consider what AI is. The first – and the most common – is to look at the sought-after outcome of AI research: roughly speaking, either the “creation and study of machines that behave in a way that denotes intelligence (note: whatever ‘behave’ may mean)” or “the creation and study of machines that think (note: whatever ‘think’ may mean)”.

The second way to define AI is by looking at the components – or sub-problems – it aims to solve. The ones you’ll most often hear of are:

  • Reasoning and problem-solving
  • Knowledge representation
  • Planning
  • Learning
  • Natural language processing
  • Perception – vision, speech recognition…
  • Robotics – the ability to move and act in the physical world

We can’t resist adding a more cultural – or aspirational – way to define artificial intelligence, suggested in 1998 by Astro Teller (now CEO of X, Alphabet’s “moonshot factory”): “AI is the science of how to get machines to do the things they do in the movies.”

Indeed, this definition is not far from the concepts of Artificial General Intelligence (or strong AI, or full AI) and Artificial Super Intelligence (or superintelligence), examples of which abound in works of science fiction. They designate generalist systems that would respectively match or exceed the capabilities of humans – that is, that would combine all the components we just listed.

One of the most popular sports among AI commentators today is thus trying to guess when Skynet will take over. If you notice wide variation among predictions about Artificial General Intelligence and Artificial Super Intelligence, it’s perfectly normal: it is really hard to say whether such forecasts are under- or overestimated, or whether such levels of machine intelligence are even achievable.

 

2 main approaches to AI

Since the beginnings of artificial intelligence in the 1950s, two approaches have been pursued:

In the first approach, you program rules, solving a problem through a tree of steps – the pioneers of artificial intelligence, many of them logicians, were fond of this method. It culminated in the 1980s with the rise of expert systems: programs intended to encapsulate a knowledge base and a decision engine drawn from specialists in narrow fields, for instance to help organic chemists identify unknown molecules.
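
To give a feel for this first approach, here is a minimal, made-up sketch of an expert system: a handful of hand-written if-then rules and a naive inference loop that applies them until no new conclusion appears. (Real expert systems of the 1980s were of course far larger.)

```python
# Toy "expert system": hand-written if-then rules plus a naive
# forward-chaining engine. Rules and facts are invented for illustration.

RULES = [
    ({"has_feathers", "lays_eggs"}, "is_bird"),
    ({"is_bird", "cannot_fly"}, "is_flightless_bird"),
]

def infer(facts):
    """Apply every rule whose conditions hold until nothing new is derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"has_feathers", "lays_eggs", "cannot_fly"}))
# adds 'is_bird' and 'is_flightless_bird' to the initial facts
```

Every rule has to be elicited from a human expert and written by hand – which is precisely the limitation described next.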

The problem is that with such systems, you have to start from scratch when developing a new model – handwritten, specific rules are by nature very difficult or utterly impossible to generalize from one problem to the next, say from speech recognition to medical diagnosis.

In the second approach, you program a general model, but it’s the computer that adjusts the model’s parameters using the data you provide it with. It’s the most popular approach these days.
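
A minimal sketch of this second approach, assuming a deliberately simple general model (a line, y = w·x + b) and a made-up dataset: the programmer fixes the form of the model, and the computer adjusts the parameters w and b from the data by gradient descent.

```python
# The model's form (y = w*x + b) is fixed by the programmer; its
# parameters w and b are adjusted by the computer from the data.
# Dataset and hyperparameters are made up for the sketch.

data = [(1.0, 3.1), (2.0, 4.9), (3.0, 7.2), (4.0, 8.8)]  # (x, y) pairs

w, b = 0.0, 0.0  # initial parameter values
lr = 0.01        # learning rate

for _ in range(5000):  # gradient descent on the mean squared error
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= lr * grad_w
    b -= lr * grad_b

print(f"learned parameters: w={w:.2f}, b={b:.2f}")  # roughly w≈1.94, b≈1.15
```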

Some of its models are really close to statistical methods, but the most famous ones are inspired by neuroscience: they are called artificial neural networks (or ANNs). Such ANNs have a common general recipe:

  • Layers of simple units (“neurons”) connected by weighted links, each layer transforming the output of the previous one.
  • A training phase, during which the computer progressively adjusts the weights so that the network’s outputs get closer to the expected ones on the data provided.

If you’ve heard about the current deep learning frenzy, it’s because this type of ANN, made up of a large number of layers – hence “deep” – has yielded significant results in tasks such as identifying objects in images.
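
To make this recipe concrete, here is a minimal sketch – a toy example we made up for illustration, not a production network – of a tiny two-layer network trained on the XOR problem with plain numpy. The layer size, learning rate and number of iterations are arbitrary choices:

```python
import numpy as np

# Toy illustration of the ANN recipe: layers of weighted sums passed
# through a nonlinearity, with weights nudged to reduce the error.
# Task (XOR), layer sizes and learning rate are made up for the sketch.

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)   # hidden layer
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)   # output layer

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

for _ in range(5000):
    # Forward pass: compute the activations layer by layer.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: propagate the error and adjust the weights.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2).ravel())  # should approach [0, 1, 1, 0]
```

Deep learning stacks many more such layers (with various refinements), but the loop – forward pass, error, weight update – stays the same.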

 

In addition, you’ll probably encounter the 3 main categories used to classify machine learning models, based on how they learn:

  • Supervised learning: you feed your model with labeled data – e.g. a stereotypical cat image comes with an explicit “cat” tag attached.
  • Unsupervised learning: you feed your model with unlabeled data, and let it recognize patterns on its own. Since data is usually not labeled – think about all the photos accumulated in your smartphone – and the labeling process takes time, the unsupervised learning approach is harder and less developed than the supervised one, but it also looks more promising.
  • Reinforcement learning: at the end of each iteration of your model, you simply give it a “grade”. Let’s take the example of DeepMind, which trained a model to play old Atari games: there, the grade was the score displayed by the game, and the model progressively learned to maximize it. The reinforcement learning approach is probably the least developed of the 3, but the recent accomplishments of DeepMind’s algorithms have shed new light on this effort (see the sketch after this list).
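
To make the “grade” idea concrete, here is a minimal sketch of reinforcement learning – tabular Q-learning on a made-up 5-cell corridor, where the only reward is 1 for reaching the rightmost cell. (Nothing here comes from DeepMind’s actual Atari setup; environment and hyperparameters are invented for the illustration.)

```python
import random

# Toy reinforcement learning: tabular Q-learning on a made-up 5-cell
# corridor. The agent starts at cell 0 and only ever receives a "grade":
# reward 1 for reaching cell 4, 0 otherwise. All numbers are invented.

N, ACTIONS = 5, [-1, +1]  # 5 cells; move left or right
Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}

def greedy(s):
    """Best-valued action in state s, breaking ties at random."""
    best = max(Q[(s, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(s, a)] == best])

for episode in range(500):
    s = 0
    while s != N - 1:
        # Epsilon-greedy: mostly exploit, sometimes explore.
        a = random.choice(ACTIONS) if random.random() < 0.1 else greedy(s)
        s_next = min(max(s + a, 0), N - 1)
        reward = 1.0 if s_next == N - 1 else 0.0
        # Q-learning update: move the estimate toward reward + best future.
        best_next = max(Q[(s_next, b)] for b in ACTIONS)
        Q[(s, a)] += 0.1 * (reward + 0.9 * best_next - Q[(s, a)])
        s = s_next

print([greedy(s) for s in range(N - 1)])  # learned policy: [1, 1, 1, 1]
```

DeepMind’s Atari agents follow the same loop, except that the table of values is replaced by a deep neural network reading raw pixels.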

 

Artificial intelligence is not a tree… it’s a bush!

So, when you combine the problems tackled by AI research, its various “schools of thought”, these schools’ own branches, and their different goals and sources of inspiration… you understand why attempts at a well-organized classification of the field are always flawed. Take a look at this one – do you see the problem?

Putting “machine learning” and “speech” at the same level is inaccurate, because you can use machine learning models to solve speech issues – they’re not parallel branches, but rather different ways to classify AI that can get entangled.

Hence, the difficulty – and beauty – of the artificial intelligence field is that it’s certainly not an orderly tree – it’s a bush. One branch grows faster than another and is in the limelight, then it’s another one’s turn, and so on… Some of those branches have crossed, others have not, some have been cut and new ones will appear.
Our essential piece of advice is thus: never forget the big picture, or you’ll get lost!

 

Interested in receiving a new episode of FABERNOVEL’s first Season on Artificial Intelligence every week?

Subscribe