What are Microservices?
Before we can implement microservices in a cloud environment, we must understand what they are. A microservice is a small, independently built and deployed service, written in any programming language, that runs as a self-contained process and collaborates with other microservices in a distributed environment to deliver one granular piece of functionality.
There are many uses for microservices. For example, when you purchase a product from an e-commerce website, multiple microservices work together, including:
- Login – allows existing users to log in
- Search – supports various user searches and displays matching products
- Product Details – retrieves technical product details
- Review & Rating – gathers reviews and feedback information
- Cart – adds and removes products from the cart
- Billing & Shipping – manages billing and shipping information
- Payment Gateway – handles product payments
- Review & Submit Order – lets the user review details and complete the current order
- Order Fulfillment – fulfills the request and coordinates with other services
- Shipping & Delivery – takes care of shipping and delivering the products
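The list above can be sketched in code. In this minimal, illustrative sketch (the function names and the tiny in-memory catalog are assumptions, not any real e-commerce API), each "service" is an independent callable with its own responsibility, and they compose into one purchase flow while remaining independently replaceable:

```python
# Each "service" is an independent unit with a single granular responsibility.
# Names (search_service, cart_service, billing_service) are illustrative.

def search_service(catalog, query):
    """Search: returns products whose name contains the query."""
    return [p for p in catalog if query.lower() in p["name"].lower()]

def cart_service(cart, product):
    """Cart: adds a product and returns the updated cart."""
    return cart + [product]

def billing_service(cart):
    """Billing: totals the prices of everything in the cart."""
    return sum(p["price"] for p in cart)

# The services compose into one purchase flow, yet each could be
# deployed, scaled, and replaced independently of the others.
catalog = [{"name": "USB Cable", "price": 5.0}, {"name": "Monitor", "price": 150.0}]
results = search_service(catalog, "usb")
cart = cart_service([], results[0])
total = billing_service(cart)
```

In a real deployment each function would be a separate process reached over the network, but the division of responsibility is the same.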
Let’s discuss how to make microservices compatible with a cloud environment, and what qualifications and principles they must meet to run on a cloud platform with high availability and scalability.
If you plan to deploy your microservice to run in a cloud, it’s crucial that you choose a cloud-compatible language and runtime to develop your service. For example, runtimes such as Binary, Go, HWC, Java, .NET Core, NGINX, Node.js, PHP, Python, Ruby, and Static file are more standardized and are supported by the top cloud providers.
Continuous Integration & Delivery
This is a development practice to build, test, and release code changes automatically. The three steps to this process are:
- Continuous integration – developers commit code changes to the code repository, where automated code audits and validations run unit tests and verify that the code is valid.
- Continuous delivery – promotes the validated build from the development environment into higher environments such as QA, STAGE, and UAT, providing the facility to build, deploy, and test code automatically in all non-production environments.
- Continuous deployment – automated deployment of the service into production, after a manual approval step.
Continuous Integration and Delivery enables faster response and faster service delivery to the cloud for both planned and unplanned releases, by automating all of the configure, build, test, and deploy steps.
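The three stages can be sketched as a simple pipeline. This is an illustrative sketch, not a real CI tool's API: the function names are assumptions, and the environments (QA, STAGE, UAT) mirror the text above.

```python
# Hedged sketch of CI/CD as three pipeline stages.

def continuous_integration(change):
    # On every commit: run automated audits and unit tests.
    assert change["tests_pass"], "unit tests failed"
    return {"artifact": change["commit"], "validated": True}

def continuous_delivery(build, environments=("QA", "STAGE", "UAT")):
    # Promote the validated build through all non-production environments.
    return {env: build["artifact"] for env in environments}

def continuous_deployment(build, approved):
    # Production deployment is automated but gated behind manual approval.
    return "deployed" if approved else "awaiting approval"

build = continuous_integration({"commit": "abc123", "tests_pass": True})
staged = continuous_delivery(build)
status = continuous_deployment(build, approved=True)
```

A real pipeline would run these stages in a CI server on every push, but the promotion-and-gate structure is the same.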
Externalized Configuration
Microservices use configuration values such as usernames, passwords, external servers, hosts, ports, URLs, keys, and tokens. When you set up a cloud-compatible service, it is always recommended to externalize this configuration information instead of serving it from the packaged code. That way, changing a few parameters does not force you to rebuild or redevelop the service just because of a configuration change. When the service starts, it connects to a central configuration server, loads the configuration once, and serves requests based on it. If the configuration changes, the changes are picked up automatically and the service restarts.
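A minimal way to externalize configuration is to read it from the environment rather than hard-coding it. This sketch assumes illustrative variable names (`DB_HOST`, `DB_PORT`, `API_TOKEN`); a central configuration server works on the same principle, just with a network fetch instead of `os.environ`.

```python
import os

# Configuration lives outside the packaged code; changing a value
# requires only a restart, never a rebuild.

def load_config():
    return {
        "db_host": os.environ.get("DB_HOST", "localhost"),
        "db_port": int(os.environ.get("DB_PORT", "5432")),
        "api_token": os.environ.get("API_TOKEN", ""),
    }

# Simulate the platform injecting a value at deploy time.
os.environ["DB_HOST"] = "db.internal.example"
config = load_config()
```

The defaults make the service runnable locally, while the cloud platform overrides them per environment.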
Service Registry and Discovery
Microservices deployed in the cloud change location and configuration constantly due to auto-scaling, failures, and frequent deployments. There is no guarantee that a given service will keep running on the same server or at the same network location, so a fixed address such as a static DNS entry cannot be relied on. For this reason, and to support dynamic routing of service calls, microservices need Service Registry and Discovery features.
There are three main concepts here: Service Registry, Server-Side Discovery, and Client-Side Discovery. The Service Registry is a highly available database of every service’s endpoint URLs and related network information. It exposes a management API that services use to register and de-register themselves, and a query API that clients use to discover which services are running.
Server-Side Discovery means the client calls an intermediate router service, which in turn queries the service registry and forwards the request to the appropriate service. Client-Side Discovery means the client itself queries the service registry through the query API and sends the request directly to the proper service.
Netflix Eureka, Apache ZooKeeper, and Consul are some well-known Service Discovery implementations in the market.
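The registry concepts above can be sketched in a few lines. This is an in-memory stand-in, not Eureka's or Consul's actual API: real implementations add heartbeats, replication, and persistence, and the class and method names here are assumptions.

```python
import random

# In-memory sketch of a service registry: register/deregister form the
# management API, lookup forms the query API.

class ServiceRegistry:
    def __init__(self):
        self._services = {}          # service name -> set of endpoint URLs

    def register(self, name, endpoint):
        self._services.setdefault(name, set()).add(endpoint)

    def deregister(self, name, endpoint):
        self._services.get(name, set()).discard(endpoint)

    def lookup(self, name):
        return sorted(self._services.get(name, set()))

# Client-side discovery: the client queries the registry directly and
# picks one live instance (here, at random) to call.
registry = ServiceRegistry()
registry.register("cart", "http://10.0.0.1:8080")
registry.register("cart", "http://10.0.0.2:8080")
endpoint = random.choice(registry.lookup("cart"))
```

Server-side discovery would move the `lookup` and `choice` steps into an intermediate router so clients never talk to the registry directly.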
Circuit Breaker
In a cloud environment with remotely distributed services, a request to a microservice can sometimes fail due to a technical error such as lost connectivity, a temporarily unavailable dependency, or a timeout. These errors are often corrected automatically later, so in this context a highly available cloud microservice should take an appropriate action when the situation occurs. This is the circuit breaker pattern, in which a service moves between Open, Closed, and Half-Open states when accepting client calls.
Each microservice must implement logic that determines how it should respond to continuous errors: for a configured period, it returns the error response immediately without executing the calls to the failing dependency. After some time, the service lets calls through again to verify whether the issue is resolved. Once the dependency is determined to be functioning as before, the service continues as normal.
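The state machine just described can be sketched as follows. This is a minimal illustration under assumed thresholds (the class name, failure threshold, and reset timeout are all illustrative); production services normally use a hardened library rather than hand-rolling this.

```python
import time

# Minimal circuit breaker with the three states from the text:
# Closed (calls pass through), Open (calls fail fast), and
# Half-Open (a trial call probes whether the dependency recovered).

class CircuitBreaker:
    def __init__(self, failure_threshold=3, reset_timeout=5.0):
        self.failure_threshold = failure_threshold
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.state = "closed"
        self.opened_at = 0.0

    def call(self, func):
        if self.state == "open":
            if time.monotonic() - self.opened_at >= self.reset_timeout:
                self.state = "half-open"      # allow one trial call through
            else:
                raise RuntimeError("circuit open: failing fast")
        try:
            result = func()
        except Exception:
            self.failures += 1
            if self.state == "half-open" or self.failures >= self.failure_threshold:
                self.state = "open"           # stop hammering the dependency
                self.opened_at = time.monotonic()
            raise
        self.failures = 0
        self.state = "closed"                 # success restores normal operation
        return result

breaker = CircuitBreaker(failure_threshold=2, reset_timeout=0.1)

def flaky():
    raise ConnectionError("dependency down")

for _ in range(2):
    try:
        breaker.call(flaky)
    except ConnectionError:
        pass
state_after_failures = breaker.state       # circuit has tripped open
time.sleep(0.15)                           # wait out the reset timeout
result = breaker.call(lambda: "ok")        # half-open probe succeeds
state_after_recovery = breaker.state       # back to closed
```

While open, the breaker answers callers instantly with an error instead of tying up threads on a dependency that is known to be down.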
API Gateway and Routing
Many service instances run in the cloud to support a single piece of functionality. For end clients, it is difficult to manage which service to call, especially when one endpoint is under heavier load. To route client requests more effectively, a proxy/routing/gateway service contains pre-mapped logic that inspects the URL or content of each request and automatically redirects it to the appropriate service.
This includes filter, pre-execute, execute, and post-execute functionality. For example, the gateway can call a dedicated service when it is notified of a priority vendor request, and once the call execution is done, it can send metric information to an internal monitoring service.
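A gateway with a pre-mapped route table and pre/post hooks can be sketched like this. The route table, prefixes, and the metrics list are illustrative stand-ins; a real gateway (e.g., a reverse proxy) does the same matching against full routing rules.

```python
# Sketch of a gateway: pre-execute picks the backend from a route table,
# execute forwards the request, post-execute reports a metric.

class Gateway:
    def __init__(self):
        self.routes = {}             # URL prefix -> handler for that service
        self.metrics = []            # stand-in for an internal monitoring service

    def add_route(self, prefix, handler):
        self.routes[prefix] = handler

    def handle(self, path):
        for prefix, handler in self.routes.items():
            if path.startswith(prefix):          # pre-execute: match the route
                response = handler(path)         # execute: forward to the service
                self.metrics.append(path)        # post-execute: emit a metric
                return response
        return "404 no matching service"

gateway = Gateway()
gateway.add_route("/cart", lambda p: "cart service handled " + p)
gateway.add_route("/search", lambda p: "search service handled " + p)
result = gateway.handle("/cart/items")
```

Clients only ever know the gateway's address; backends can move, scale, or be replaced behind it.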
Security
It’s critical that all microservices are secure in terms of network access, access control, authorization, data in transit, data at rest, and so on. The OAuth2 security pattern is a widely accepted standard for securing microservices; the resource server, the authorization server, the resource owner, and the client each play a distinct role. It is also a recommended best practice that microservices be implemented with defense in depth when securing assets: if one layer of security fails, the breach can still be caught at another level.
It is also recommended that developers leverage robust, industry-standard libraries, such as encryption and other cryptographic libraries, to secure data at rest and in transit, instead of writing new ones. Input verification and validation, together with output encoding, are always recommended to avoid attacks like buffer overflows and SQL injection. Security libraries used in microservices development should be kept up to date, ideally updating automatically without developer intervention.
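One concrete instance of input handling with a standard library facility is the parameterized query, which prevents SQL injection. This sketch uses Python's built-in `sqlite3`; the table and values are illustrative.

```python
import sqlite3

# Parameterized queries bind user input as data, never as SQL text,
# which is the standard-library defense against SQL injection.

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

def find_user(name):
    # The ? placeholder binds the value; it is never parsed as SQL.
    rows = conn.execute("SELECT name FROM users WHERE name = ?", (name,))
    return rows.fetchall()

safe = find_user("alice")
attack = find_user("alice' OR '1'='1")   # treated as a literal string, matches nothing
```

Building the same query by string concatenation would have let the second call return every row.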
Design for Failure
There is no guarantee that cloud servers and hardware will always be available as in a traditional static environment. Auto-scaling events, outages, restarts, and upgrades are part of the nature of cloud infrastructure. In this context, microservices should be designed to run as more than one node or service instance (horizontal scaling), so that an outage causes no data loss and no situation where data is received but never processed. Business continuity is then not interrupted during an outage, because the other replicated horizontal instances pick up the processing where it left off. This is how microservices should be designed to run in the cloud.
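The "received but never processed" guarantee is usually achieved by acknowledging work only after it completes, so another instance can reclaim it. This in-memory sketch stands in for a durable queue; the class and method names are assumptions, not any real broker's API.

```python
# Work items stay owned by the queue until a consumer acknowledges them.
# If an instance dies mid-task, its item is requeued for another node.

class AckQueue:
    def __init__(self):
        self.pending = []            # not yet handed out
        self.in_flight = {}          # handed out, awaiting acknowledgement

    def put(self, item):
        self.pending.append(item)

    def take(self, consumer_id):
        item = self.pending.pop(0)
        self.in_flight[consumer_id] = item
        return item

    def ack(self, consumer_id):
        # Only a successful processing run removes the item for good.
        del self.in_flight[consumer_id]

    def requeue_dead(self, consumer_id):
        # A failed instance's unacknowledged item goes back for another node.
        self.pending.insert(0, self.in_flight.pop(consumer_id))

queue = AckQueue()
queue.put("order-42")
queue.take("node-a")          # node-a crashes before acknowledging
queue.requeue_dead("node-a")
item = queue.take("node-b")   # a replicated instance resumes the work
queue.ack("node-b")
```

Real systems add visibility timeouts to detect dead consumers automatically, but the ack-after-processing principle is the same.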
Containerization
Microservices should leverage containerization, an operating-system-level virtualization approach used to deploy and run distributed services without launching an entire OS per service. Multiple containerized microservices run on a single host and share the same OS kernel, and each service can be started, stopped, and restarted simply by managing its container process. This approach is growing rapidly and underpins microservices architecture in the cloud; technologies like Docker and Kubernetes are commonly used across cloud platforms.
There are various other factors, such as the local file system, ports, configuration properties, test data, resource or configuration files, and environment-based profiles, that should not be hard-coded as they would be in a local or static environment. A developer should keep these independent of the service so that the distributed cloud environment can supply them dynamically; they should not be tightly coupled with the microservice. Put another way, a change to any of these should not require the service to be rebuilt and redeployed. Cloud microservices should be developed with this dynamic nature in mind.
Cloud microservices with the above qualifications can be used to build highly scalable applications in a distributed cloud environment. They help teams respond faster, with rapid development and high-quality service delivery, while at the same time supporting business continuity.