It’s certainly true that if machines can interpret words as we humans do, they become capable of many smart tasks, some of which save precious time for organizations, developers and users alike. Importantly, this ability also helps reduce client-server coupling. Read on to find out more.
The term “semantics” refers to the meaning of words. More specifically, it is the branch of linguistics that studies “what is signified”, i.e. the things we talk about and what we mean when we talk about them. Keep this word in mind, because it appears throughout this article.
Semantics are already a part of daily life
Whether you are a developer or an internet user, you use semantics all the time. If you are searching for a place or musician on Google, for example, the “knowledge graph card” on the right-hand side (ringed in blue in the screenshot below) uses this technology.
Rich snippets are another example we regularly come across on Google.
Facebook’s Open Graph Protocol relies on semantics too: it lets websites generate rich previews on social media and instant messaging services.
How do machines and semantics work together?
In this series, when we talk about semantics, we are referring to machine-interpretable semantics.
Let’s take the phrase “John Lennon had a Mercedes brand car”.
One word might have many meanings, so how do we differentiate between them? For instance, “brand” has eight meanings according to merriam-webster.com. How do we identify which meaning applies to the previous phrase? The most explicit way is to write “[…] the brand (in its fourth meaning on merriam-webster.com) was […]”.
This is more or less what machines do. To communicate “(in its fourth meaning on merriam-webster.com)” to a computer, we tell it that the definition of the word “brand” is available at a very precise URI – in our case, https://www.merriam-webster.com/def/brand#meaning_4. URIs are very useful because they offer a unique way to address a resource on the internet, which guarantees that each URI points to exactly one definition. So by simply linking the words we use to their definitions, we enable machines to reason about their meaning.
What about synonyms?
If we were to replace the word “car” with “automobile” in our example phrase, machines without semantics would be unable to understand that the two words mean the same thing. With semantics, if both words are linked to the same definition (or if a thesaurus is available), machines gain that capacity. And this is exactly what is done today.
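As a sketch, here is how a program might check synonymy once words are annotated with definition URIs. The URIs below are illustrative, not real endpoints:

```python
# Each term is annotated with the URI of its definition, so two
# different words pointing to the same URI are known synonyms.
# All URIs here are made up for illustration.
DEFINITIONS = {
    "car": "https://example.org/def/motor-vehicle",
    "automobile": "https://example.org/def/motor-vehicle",
    "brand": "https://www.merriam-webster.com/def/brand#meaning_4",
}

def are_synonyms(word_a: str, word_b: str) -> bool:
    """Two words mean the same thing if they share a definition URI."""
    uri_a = DEFINITIONS.get(word_a)
    uri_b = DEFINITIONS.get(word_b)
    return uri_a is not None and uri_a == uri_b

print(are_synonyms("car", "automobile"))  # True
print(are_synonyms("car", "brand"))       # False
```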
Existing technology lets us go even further. If we say “Nicholas’ mom is Christine A” and “Brigitte’s mom is Christine A”, it can be deduced that Nicholas and Brigitte are brother and sister without this being explicitly mentioned.
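This deduction can be sketched in a few lines, using simple (subject, predicate, object) facts; the names and the "hasMother" predicate are illustrative:

```python
# Facts expressed as (subject, predicate, object) triples, the basic
# shape of semantic data. Names and predicate are illustrative.
facts = [
    ("Nicholas", "hasMother", "Christine A"),
    ("Brigitte", "hasMother", "Christine A"),
]

def siblings(facts):
    """Infer sibling pairs: two people who share a mother are siblings,
    even though no fact says so explicitly."""
    children_by_mother = {}
    for child, predicate, mother in facts:
        if predicate == "hasMother":
            children_by_mother.setdefault(mother, []).append(child)
    pairs = []
    for children in children_by_mother.values():
        for i, a in enumerate(children):
            for b in children[i + 1:]:
                pairs.append((a, b))
    return pairs

print(siblings(facts))  # [('Nicholas', 'Brigitte')]
```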
If the technicians out there would like an example of what we mean, here is one in JSON-LD format. If you aren’t a technician, don’t worry, we won’t need it for the next bit!
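For instance, the phrase “John Lennon had a Mercedes brand car” could be expressed in JSON-LD along these lines, using schema.org vocabulary (one possible modelling among many):

```json
{
  "@context": {
    "Person": "http://schema.org/Person",
    "Car": "http://schema.org/Car",
    "name": "http://schema.org/name",
    "owns": "http://schema.org/owns",
    "brand": "http://schema.org/brand"
  },
  "@type": "Person",
  "name": "John Lennon",
  "owns": {
    "@type": "Car",
    "brand": "Mercedes"
  }
}
```

The `@context` block is what links each keyword to its definition URI, exactly as described above.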
If you would like to know more about the theory behind these concepts, check out this article.
What do semantics enable us to do? How do they cut back on coupling?
What if keywords didn’t matter anymore?
If we asked an API what brand of car John Lennon had so we could show this information to users, is the keyword the API uses for its response important? What is the difference between “car”, “automobile”, “auto” or “ride”? After all, all these words mean the same thing.
Without semantics, we can’t ignore the keyword: our code has to test for every synonym an API might use. With semantics, we can simply look up the concept by its definition URI. This eliminates the need for coupling with a finite list of keywords – and, by extension, with specific APIs.
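To make the contrast concrete, here is a sketch; the response payloads and the definition URI are illustrative:

```python
# Without semantics: we must enumerate every keyword an API might use.
def get_vehicle_without_semantics(response: dict):
    for keyword in ("car", "automobile", "auto", "ride"):
        if keyword in response:
            return response[keyword]
    return None

# With semantics: each value carries its definition URI, so we look
# the concept up by URI, whatever keyword the API happened to choose.
VEHICLE_URI = "https://example.org/def/motor-vehicle"  # illustrative

def get_vehicle_with_semantics(response: dict):
    for value in response.values():
        if isinstance(value, dict) and value.get("@id") == VEHICLE_URI:
            return value["value"]
    return None

api_a = {"automobile": {"@id": VEHICLE_URI, "value": "Mercedes"}}
api_b = {"ride": {"@id": VEHICLE_URI, "value": "Mercedes"}}
print(get_vehicle_with_semantics(api_a))  # Mercedes
print(get_vehicle_with_semantics(api_b))  # Mercedes
```

Note that the semantic version never mentions “automobile” or “ride”: swapping one API for the other requires no code change.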
Visual components coupled with an information list rather than a fixed data model
To illustrate this point, let’s take a look at what happens at the moment. In this example, the component has been coded to work with the <code>Person</code> type shown at the top. If data is sent to it using the structure shown at the bottom, it won’t work. Yet both examples contain the same information.
The same problem would arise if the structure were identical but the keywords different (we looked into this earlier).
Semantics offer a solution to this problem. Instead of programming the component so it works with just one data model (structure + keywords + information type), it can automatically find the information it needs in the dataset provided.
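As a sketch, such a component could walk whatever dataset it receives and pick out the values annotated with the URIs it needs, regardless of structure (the URI and payloads are illustrative):

```python
NAME_URI = "http://schema.org/name"  # the concept the component needs

def find_by_uri(data, uri):
    """Recursively search nested dicts/lists for a value annotated
    with the given definition URI."""
    if isinstance(data, dict):
        if data.get("@id") == uri:
            return data.get("value")
        for v in data.values():
            found = find_by_uri(v, uri)
            if found is not None:
                return found
    elif isinstance(data, list):
        for item in data:
            found = find_by_uri(item, uri)
            if found is not None:
                return found
    return None

# Two structurally different payloads carrying the same information:
flat = {"fullName": {"@id": NAME_URI, "value": "John Lennon"}}
nested = {"person": {"details": [{"@id": NAME_URI, "value": "John Lennon"}]}}
print(find_by_uri(flat, NAME_URI))    # John Lennon
print(find_by_uri(nested, NAME_URI))  # John Lennon
```

The component works with both payloads because it depends on the meaning of the data, not on its shape.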
So, here, we have looked at coupling with a fixed data model.
A better UX for visual components libraries
Semantics give us a uniform type system for any and every programming language. A type system allows us to specify what type of data we are dealing with. That type might be a person or a car, for example. The main advantage of this system is that, because it is adaptable, it doesn’t insist on any one structure.
You can use it to create smarter visual component libraries. These libraries can display the most relevant component for a data type, for instance, and highlight the most relevant information depending on whether you want to display a person, event or other data type.
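A minimal sketch of such a library: components are registered against semantic types rather than fixed data models (the component names are hypothetical):

```python
# Registry mapping semantic types (schema.org URIs) to the component
# best suited to display them. Component names are hypothetical.
COMPONENTS = {
    "http://schema.org/Person": "PersonCard",
    "http://schema.org/Event": "EventCard",
}

def pick_component(data: dict) -> str:
    """Return the most relevant component for the data's semantic type,
    falling back to a generic one for unknown types."""
    return COMPONENTS.get(data.get("@type"), "GenericCard")

print(pick_component({"@type": "http://schema.org/Person"}))    # PersonCard
print(pick_component({"@type": "http://example.org/Unknown"}))  # GenericCard
```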
This example shows us a solution for reducing the amount of coupling between data sent by APIs and data expected by component libraries. Again, we are talking about coupling with a fixed data structure.
Interchangeable, automatically selected and composed APIs
Thanks to semantics, smart programs are now able to understand that two APIs offer the same services, such as telling us a city’s weather forecast. All it takes is a shared definition, or synonyms, as we discussed in the first part of this article.
As a result, it is possible to swap one API for another automatically, without changing any code. Let’s take a look at two cases where this is particularly useful. First, say an API you’re using stops responding: to keep offering its functions to your users, the program replaces it with another. Second, imagine the API you’re using is slowing down while another offers a better average response time: again, one can automatically be replaced with the other to improve the user experience.
How do we automatically find these APIs? We need to start by making descriptions of APIs’ services available – you could do this in an online repository, for example. All that needs to happen next is for a program to query the repository using a list of desired functions and non-functional criteria (such as response time).
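A sketch of such a lookup, against a hypothetical in-memory repository of API descriptions (all names, URIs and figures are illustrative):

```python
# A hypothetical repository of API descriptions: each entry states the
# function offered (as a definition URI) and non-functional metrics.
REPOSITORY = [
    {"name": "weather-api-a",
     "offers": "https://example.org/def/weather-forecast",
     "avg_response_ms": 420},
    {"name": "weather-api-b",
     "offers": "https://example.org/def/weather-forecast",
     "avg_response_ms": 95},
    {"name": "geocoding-api",
     "offers": "https://example.org/def/geocoding",
     "avg_response_ms": 60},
]

def find_api(function_uri: str, max_response_ms: int):
    """Pick the fastest API offering the desired function within
    the given response-time budget."""
    candidates = [api for api in REPOSITORY
                  if api["offers"] == function_uri
                  and api["avg_response_ms"] <= max_response_ms]
    return min(candidates, key=lambda api: api["avg_response_ms"],
               default=None)

best = find_api("https://example.org/def/weather-forecast", 500)
print(best["name"])  # weather-api-b
```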
This ability to figure out what an API does (and, by extension, what an API needs in order to give a response) has another use.
Let’s say we want to listen to John Lennon’s ten most famous songs. At home, we have a connected speaker with an embedded API, and our smart system spots it. The speaker’s API only lets us play music, so we need a different API to find the ten songs and their audio files. Thanks to semantics, our intelligent agent recognizes that this API is missing, finds it, and automatically composes it with the API of our connected speaker.
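The agent’s reasoning can be sketched as simple type matching over API descriptions: the speaker consumes a type nothing yet produces, so the agent looks for a producer (all names and URIs are illustrative):

```python
# Hypothetical API descriptions stating what each one consumes and
# produces, expressed as definition URIs.
APIS = [
    {"name": "speaker",
     "consumes": "https://example.org/def/audio-file",
     "produces": None},
    {"name": "song-search",
     "consumes": "https://example.org/def/artist",
     "produces": "https://example.org/def/audio-file"},
]

def find_producer(needed_type: str):
    """Find an API whose output type matches what another API consumes,
    so the two can be composed automatically."""
    for api in APIS:
        if api["produces"] == needed_type:
            return api
    return None

speaker = APIS[0]
missing = find_producer(speaker["consumes"])
print(missing["name"])  # song-search
```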
These examples show how semantics can reduce software’s coupling to a set of specific, hard-coded APIs.
Very powerful, precise searches give us exactly the type of information we want
One great example comes from DBPedia, an academic research project that extracts information from Wikipedia and annotates it semantically. We can ask it for a list of “soccer players who are born in a country with more than 10 million inhabitants, who played as goalkeeper for a club that has a stadium with more than 30,000 seats and the club country is different from the birth country”. You can see the results in the table below. If you do the same search on Google, you will see that the results are noticeably less precise.
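A query of this kind is written in SPARQL against DBpedia’s public endpoint. The sketch below uses predicates from the DBpedia ontology; the exact property names may differ from the live dataset:

```sparql
SELECT DISTINCT ?player ?birthCountry ?club ?clubCountry WHERE {
  ?player a dbo:SoccerPlayer ;
          dbo:position dbr:Goalkeeper_\(association_football\) ;
          dbo:birthPlace ?birthCountry ;
          dbo:team ?club .
  ?birthCountry a dbo:Country ;
                dbo:populationTotal ?population .
  ?club dbo:ground ?stadium ;
        dbo:country ?clubCountry .
  ?stadium dbo:seatingCapacity ?capacity .
  FILTER (?population > 10000000)
  FILTER (?capacity > 30000)
  FILTER (?birthCountry != ?clubCountry)
}
```

Every condition in the plain-English request maps to one triple pattern or filter, which is what makes the search so precise.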
You will notice that all these results are links. As such, you can navigate towards a detailed page about each player, country or team.
Smarter bots and voice assistants
We have seen that it is possible to move away from keyword and data structure coupling, that APIs can be automatically selected, embedded, replaced and composed, and that extremely precise queries are now feasible.
Thanks to these new tools, we can resolve one of the main problems with well-known assistants such as Siri, Google Assistant and Cortana: they can only communicate with the limited number of APIs they were specifically programmed for. We could, for example, break Siri out of its restriction to Uber and Lyft – in France, Marcel offers the same service in the Greater Paris region. Today, Google Assistant cannot book a masseur to come to your home; with semantics, it could do so as soon as an API exists to enable it. The same goes for chatbots on instant messaging platforms.
If we permit it, these assistants could even retain information we give them, such as our first names, surnames and addresses. All of this is possible without any assistant being specifically programmed for these tasks, and without duplicating the information.
In this article, we introduced you to machine-interpretable semantics, their uses and the ways in which they can cut down on technical coupling. You have seen how this technology can save developers time, as long as they have the right tools.
In our next article, we introduce you to technology that will enable you to enrich APIs with semantics, as well as a few resources that will give you more insight into this topic.
If you aren’t a technician, feel free to skip that article. Starting next week, we will tell you about what you can do when you combine both hypermedia and semantics to create genuinely smart assistants, more flexible architecture and lots of tools that will help developers with their day-to-day tasks.
Cartoon illustrations: semantic-ui-react
Code screenshots: carbon.now.sh