This article is about stream processing and cloud nativeness. The title has probably already given you an idea of the topic, so without much sugar coating, let's get into the subject.
Cloud nativeness is one of the prominent terms, or buzzwords, you hear in the tech industry these days. Many organizations are revisiting their enterprise architectures and systems to move them to the cloud. For most organizations, moving to the cloud is considered the next milestone in their digital transformation effort.
OK then, what is meant by cloud nativeness?
Cloud-native is an approach to building and running applications that exploits the advantages of the cloud computing delivery model.
A lot is happening around cloud native applications, infrastructure, and the broader ecosystem, with the support and moderation of the CNCF.
As per Gartner, cloud native infrastructure is growing rapidly.
Also, check out the promising predictions Forrester made regarding cloud computing and infrastructure for 2020.
Currently, a lot of legacy or server-based (monolithic) services are being re-architected as microservices as a step toward the cloud, and many organizations have succeeded in that effort. At a high level, these are the advantages you gain by moving to the cloud:
Cost effectiveness - Consider on-premise server environments: servers often sit idle (or underutilized) because traffic varies over time, and you end up spending resources and money unnecessarily. With cloud native applications (and their container-based architecture), unused resources are eliminated.
Reliability - Due to the containerized nature of cloud native applications, they are self-sufficient and work uninterrupted. Also, since the application is built as a separate microservice for each independent business function, a failure in one container does not affect other business functions.
Scalability - Each business function is built as an independent, loosely coupled microservice. Because of that, each microservice can be scaled independently as separate containers. Cloud platforms such as Kubernetes and OpenShift also support achieving this natively without much hassle.
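As an illustration of this independent scaling, Kubernetes can attach a HorizontalPodAutoscaler to a single microservice's deployment so it scales on its own metrics, untouched by the rest of the system. The sketch below is hypothetical: the deployment name `order-service` and the 70% CPU target are illustrative values, not part of any specific system discussed here.

```yaml
# Hypothetical HPA: scales only the order-service microservice,
# independently of every other service in the system.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: order-service-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: order-service          # illustrative deployment name
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70 # add replicas when average CPU exceeds 70%
```

Each microservice gets its own autoscaler like this, which is what makes per-function scaling practical in a containerized deployment.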
Now you can see that cloud native applications provide some unique advantages over on-premise applications/services. Moving traditional, stateless monolithic applications to the cloud is straightforward because cloud infrastructure fits them well. Also, as you are aware, Kubernetes plays an important role in achieving cloud nativeness, and various technologies are gathering around Kubernetes to build cloud native applications.
What about Stream Processing?
Stream processing applications are the part of an enterprise system responsible for event-driven integrations and real-time analytics. With the cloud native evolution, those applications also need to move to the cloud to gain the advantages of the transformation; it is not beneficial to run stateless services in the cloud while keeping stream processing applications on-premise. However, moving stream processing applications, which are stateful, to the cloud is not as easy as moving stateless services. To play in the cloud native space, a stream processor must have characteristics such as being lightweight, loosely coupled, and adhering to an agile DevOps process. Most traditional stream processors, though, are heavy and depend on bulky monolithic technologies, which makes them harder to move to the cloud. Below are some of the areas blocking existing stream processors from moving to the cloud:
- The majority of existing stream processors are heavy and built on top of bulky monolithic technologies
- Node-to-node traffic is high
- They are based on a master-slave architecture
- Streaming applications are stateful
- Nodes cannot be scaled independently
As listed above, some design and architectural factors of existing stream processors are not cloud-friendly and hinder the move to the cloud. Stream processors therefore need an architecture that supports cloud deployment. The stateful nature of streaming applications is unavoidable in the stream processing context, so it is important to invest the time and energy to meet that requirement on cloud frameworks such as Kubernetes.
Siddhi Stream Processor is designed to build stream processing applications and run them in the cloud. It is built on top of the Siddhi stream processing library, which is lightweight: it can boot up within a few seconds and process more than 100K events per second. Due to the micro nature of the Siddhi architecture, it can be deployed as containers. Siddhi provides native support for deploying a streaming application in Kubernetes via the Siddhi K8s operator. Siddhi Kubernetes deployment patterns are built on top of features provided by Kubernetes and other cloud native technologies such as NATS and Prometheus.
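To give a flavor of what such a streaming application looks like, here is a minimal SiddhiQL sketch. The app, stream, and attribute names are hypothetical, and the 50-degree threshold is an arbitrary illustrative value; the point is simply how a Siddhi app declares streams and queries.

```
@App:name('TemperatureAlertApp')

-- Hypothetical input stream of device temperature readings.
define stream TemperatureStream (deviceId string, temp double);

-- Log matched events so the filter's output can be observed.
@sink(type = 'log')
define stream AlertStream (deviceId string, temp double);

-- Forward only readings above 50 degrees into AlertStream.
from TemperatureStream[temp > 50.0]
select deviceId, temp
insert into AlertStream;
```

Because an app like this is a small, self-contained script, it packages naturally into a container image, which is what enables the Kubernetes deployment patterns described here.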
Siddhi deployment patterns are not based on a master-slave architecture, and no traffic flows between the nodes; thus container-based deployment becomes easy, and Siddhi can be deployed in Kubernetes natively. In Kubernetes, Siddhi supports running stateful stream processing applications by involving a messaging system such as NATS and persisting state snapshots to a volume mount.
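With the Siddhi K8s operator, a streaming application is deployed by applying a `SiddhiProcess` custom resource. The rough sketch below is illustrative rather than authoritative: the resource name and embedded app are hypothetical, and the exact field names (`apps`, `messagingSystem`) should be checked against the operator documentation for the version you use.

```yaml
# Hypothetical SiddhiProcess resource; verify field names against
# the Siddhi operator docs for your operator version.
apiVersion: siddhi.io/v1alpha2
kind: SiddhiProcess
metadata:
  name: temperature-alert-app    # illustrative name
spec:
  apps:
    - script: |
        @App:name('TemperatureAlertApp')
        define stream TemperatureStream (deviceId string, temp double);

        @sink(type = 'log')
        define stream AlertStream (deviceId string, temp double);

        from TemperatureStream[temp > 50.0]
        select deviceId, temp
        insert into AlertStream;
  # Use NATS as the messaging layer so the app can run as a
  # stateful, distributed deployment.
  messagingSystem:
    type: nats
```

Applying this with `kubectl apply -f` hands the app to the operator, which creates the underlying Kubernetes deployments, wiring in NATS and persistence as configured.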
Let's discuss the Siddhi Cloud Native Stream Processor, and how it achieves scalability, reliability, and the other benefits of cloud infrastructure, in more detail in the next article. You can refer to the Siddhi documentation here to get a better understanding of this.