Every enterprise wants to expose an API for developers and merchants to consume these days. An API, as we know it, is meant for M2M (machine-to-machine) communication.
So why are people still needed, acting as API "mechanical Turks"? What is the human's role in machine-to-machine communication? People play a critical role in API discovery and understanding. Once we find a service, we have to understand WHAT exactly we can do with it and HOW to do it. Once we have the answers, we can program the API client. The client will then work autonomously until the API (or our needs) changes. Of course, this assumes the API documentation is always in sync with the implementation.
But if we start to think about it, APIs as we know them aren't the endgame. They won't stay with us in their current form forever. So let's take a look at what might happen to APIs in the future.
If the human role is expensive, slow and error-prone, how can we eliminate it? What would it take to create fully autonomous APIs?
First, we need a way to develop and share domain vocabularies. The next step is to start sharing understanding at runtime. Then an API discovery system can register a new API together with its vocabulary.
The workflow in an autonomous system without human intervention could work as follows:
1. A machine exposes its interface together with a profile describing the interface and its vocabulary. The service registers itself with an API discovery service.
2. Later, another program queries the API discovery service using terms from a vocabulary. If found, the API discovery service returns the matching service back to the program.
3. The program (now an API client) has already been trained to work with the requested vocabulary. It can now use the API to perform the actions needed.
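This registration-and-discovery workflow can be sketched in a few lines. The names below (`DiscoveryService`, `register`, `find`) are illustrative assumptions, not a real product's API; a real registry would of course be a networked service, not an in-memory list.

```python
# A minimal, in-memory sketch of the registration/discovery workflow above.
class DiscoveryService:
    def __init__(self):
        self._registry = []  # registered services and their profiles

    def register(self, endpoint, profile):
        """A service announces itself with a profile describing its vocabulary."""
        self._registry.append({"endpoint": endpoint, "profile": profile})

    def find(self, *terms):
        """A client queries by vocabulary terms; return services covering them all."""
        wanted = set(terms)
        return [
            entry for entry in self._registry
            if wanted <= set(entry["profile"]["vocabulary"])
        ]

# A weather service registers itself together with its vocabulary...
discovery = DiscoveryService()
discovery.register(
    "https://api.example.com/weather",
    {"vocabulary": ["WeatherForecast", "temperature", "windSpeed"]},
)

# ...and a client later discovers it by the terms it was trained on.
matches = discovery.find("WeatherForecast", "temperature")
```

The client never names a concrete provider; it asks only for the vocabulary it understands.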
The client is programmed declaratively for a particular task, not tightly coupled to a distinct service interface at a particular point in time.
Such an approach not only makes clients resilient to API changes but also brings reusability across multiple APIs.
For example, you would no longer develop a weather application for a particular service. Instead, you would build a generic client that knows how to display a weather forecast. This application could use various services such as AccuWeather, Weather Underground, or any country-specific weather forecast provider, as long as the service uses (a portion of) the same vocabulary.
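A sketch of what "programming for a vocabulary" could look like: the display logic depends only on vocabulary terms, while a per-service profile maps those terms to each provider's field names. The profiles and payload shapes below are hypothetical; in practice the mapping would come from the service's machine-readable profile.

```python
# A client programmed for a vocabulary rather than one service's data structure.
def display_forecast(response, profile):
    """Render a forecast using vocabulary terms, not hard-coded field names."""
    temp = response[profile["temperature"]]
    summary = response[profile["description"]]
    return f"{summary}, {temp} degrees"

# Two providers with different payload shapes but the same underlying vocabulary
# (field names here are invented for illustration):
provider_a_profile = {"temperature": "Temp", "description": "Phrase"}
provider_b_profile = {"temperature": "temp_c", "description": "weather"}

a = display_forecast({"Temp": 21, "Phrase": "Sunny"}, provider_a_profile)
b = display_forecast({"temp_c": 18, "weather": "Cloudy"}, provider_b_profile)
```

One generic client, any number of interchangeable providers.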
To wrap it up, the building blocks of Autonomous APIs are:
1. Vocabulary Registry
2. Understanding shared at runtime
3. API discovery service
4. Programming for vocabulary not data structure
So where are we with all of this at the start of 2017? The good news is that we have the building blocks, and they are gaining traction.
We are starting to share understanding at runtime. HATEOAS controls are shared using one of the hypermedia formats. JSON-LD adoption is increasing across the API industry, and search providers such as Google, Microsoft, Yahoo, and Yandex are backing the Schema.org vocabulary.
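To make this concrete, here is a minimal JSON-LD payload using Schema.org terms, shown in Python for brevity. The `@context` ties plain keys like `name` to well-defined vocabulary terms, so any client that understands Schema.org can interpret the payload without out-of-band documentation. The person data is, of course, just sample content.

```python
import json

# A minimal JSON-LD document using the Schema.org vocabulary.
payload = json.dumps({
    "@context": "https://schema.org",
    "@type": "Person",
    "name": "Ada Lovelace",
    "jobTitle": "Analyst",
})

# A consuming client can dispatch on the shared vocabulary, not the endpoint:
doc = json.loads(payload)
is_person = doc.get("@type") == "Person"
```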
Formats such as ALPS give us the power to carry semantic information about both data and affordances. At the same time, a GraphQL schema can be introspected at runtime to discover the surface of a GraphQL API.
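GraphQL's runtime introspection can be sketched as follows: the client sends the standard `__schema` introspection query and reads the field names back. The response below is a hand-written, truncated sample rather than output from a real service, so no network call is made.

```python
import json

# The standard GraphQL introspection query (abbreviated) that a client can
# send at runtime to discover an API's surface.
INTROSPECTION_QUERY = """
{
  __schema {
    queryType {
      fields { name }
    }
  }
}
"""

# Illustrative sample of the JSON a GraphQL endpoint would return:
sample_response = json.loads("""
{
  "data": {
    "__schema": {
      "queryType": {
        "fields": [{"name": "forecast"}, {"name": "currentConditions"}]
      }
    }
  }
}
""")

# Extract the names of the top-level query fields the API exposes.
fields = [
    f["name"]
    for f in sample_response["data"]["__schema"]["queryType"]["fields"]
]
```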
Finally, dedicated API catalogs are starting to appear with HitchHQ and Rapid API leading the movement.