Friday, 1 October 2021

Infrastructure as Code (IaC)

Infrastructure as Code is the practice where developers and DevOps teams manage and provision the technology stack for an application through software, typically declarative definition files, rather than configuring hardware or operating systems manually.

With IaC we can build stable environments much faster, scale them on demand, and enforce consistency through code.

There is no single standard syntax for declarative IaC; different platforms support different, and often multiple, file formats such as YAML, JSON and XML.

IaC is supported by a number of popular third-party tools such as Terraform, Ansible and Puppet. In this post we will use Terraform, with AWS as the cloud provider. Terraform supports all major cloud providers, including GCP and Azure.

First we need to install Terraform. I am using a Mac, so I can use Homebrew to install it; if you are on Windows or Linux you can download it from the Terraform website.
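On a Mac it is a single command (the hashicorp/tap formula works as well):

 brew install terraform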


Now, you might be working with different clients who use different versions of Terraform. For such cases there is a handy little tool called tfswitch which you can install, and it is really helpful.

 brew install warrensbox/tap/tfswitch
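
Once installed, run tfswitch from your project directory and pick the Terraform version you need from the interactive list:

 tfswitch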


Now let's use code to create the infrastructure, but before that I need to configure the AWS CLI and secure the access keys; you can keep them in environment variables or in a Vault provider as well.
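For example, the access keys can be exported as environment variables before running Terraform (the values below are placeholders):

 export AWS_ACCESS_KEY_ID="<your-access-key-id>"
 export AWS_SECRET_ACCESS_KEY="<your-secret-access-key>"

With the credentials in place, first I would like to create a VPC. It is very simple, just a couple of lines of code, and we will see how easy it is.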

provider "aws" {
region = "eu-west-1"
}

resource "aws_vpc" "myvpc" {
cidr_block = "10.0.0.0/16"
}


In the above code snippet, aws is the provider and the resource I want to create is an aws_vpc with the given cidr_block. To run this we need to execute two commands, "terraform init" and "terraform apply".
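
From the directory that contains the .tf file:

 terraform init     # downloads the AWS provider plugin and initialises the working directory
 terraform apply    # shows the execution plan and, after confirmation, creates the VPC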



Let's check the AWS console and see if we can find this VPC ID there.


Next, let's take another example and create a free-tier EC2 instance with a web security group enabled for both ingress (inbound) and egress (outbound) traffic.
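Here is a minimal sketch of what such a configuration could look like, building on the VPC above (the subnet CIDR, the Amazon Linux 2 AMI lookup and the HTTP-only ingress rule are assumptions you can adjust to your setup):

# Subnet inside the VPC created earlier (assumed CIDR)
resource "aws_subnet" "mysubnet" {
  vpc_id     = aws_vpc.myvpc.id
  cidr_block = "10.0.1.0/24"
}

# Look up the latest Amazon Linux 2 AMI (any free-tier eligible AMI would do)
data "aws_ami" "amazon_linux" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["amzn2-ami-hvm-*-x86_64-gp2"]
  }
}

# Security group allowing HTTP in and all traffic out
resource "aws_security_group" "web_sg" {
  name   = "web-sg"
  vpc_id = aws_vpc.myvpc.id

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

# Free-tier t2.micro instance with the security group attached
resource "aws_instance" "web" {
  ami                    = data.aws_ami.amazon_linux.id
  instance_type          = "t2.micro"
  subnet_id              = aws_subnet.mysubnet.id
  vpc_security_group_ids = [aws_security_group.web_sg.id]
}

Referencing aws_vpc.myvpc.id ties the subnet and the security group to the VPC we created earlier, so another "terraform apply" simply builds on top of it.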


And with that, our t2.micro EC2 instance is ready, with security group rules for both inbound and outbound traffic.



We saw a couple of examples of IaC and how it can help us manage and provision infrastructure with simple code while enforcing consistency. Everything that is required to manage infrastructure can be done this way.

IMPORTANT NOTE :- Use the "terraform destroy" command to delete the resources once you are done with them.

Happy coding and keep sharing!! 


Wednesday, 29 September 2021

Event Driven Architecture using Apache Kafka

A quick recap of what we discussed in the previous post about EDA, and in this post we will go into more detail. EDA is a pattern that uses events to communicate between decoupled components or services; these events are published to an event broker platform and then sent on to the consuming applications.

Event-Driven Architecture comprises three components.

  • Producers:- the apps or services that publish events to the event broker platform.
  • Routers:- route the events to their respective consuming applications.
  • Consumers:- the apps or services that consume events from a particular topic on the event router.
When designing an event-driven system we can implement one of two models.

  • Pub/Sub (Publish/Subscribe):- events are published to a topic and sent to one or more subscribers; once received, an event cannot be backtracked or reread, and new subscribers do not see past events.
  • Event Streaming:- events are written to a log and ordered within a partition. A client app can read from any part of the stream and replay the events.

We are going to use Apache Kafka, an open-source, distributed event streaming platform, to implement the event-driven architecture.

Using Apache Kafka we can have multiple apps or services writing events to a Kafka cluster while, at the same time, multiple consumer apps subscribe to or stream events from Kafka. A Kafka cluster is a collection of brokers, which could be actual physical servers or a single rack; if you are using Kafka in the cloud or as a managed service (PaaS), you don't have to be concerned about that.

Here, ZooKeeper is responsible for cluster and failure management, and it decides which of the replicated brokers becomes the new leader.

A broker can store multiple topics that producers write events to, where a topic is a collection of related events or messages. When we produce an event we need to specify the topic to which we want to write, or publish, it.
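
To make that concrete, here is a minimal sketch of a producer using the plain Kafka Java client (the broker address, the "orders" topic and the payload are placeholders; the actual Spring Boot producer comes in the next post):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class OrderEventProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // placeholder broker address
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        // try-with-resources closes (and flushes) the producer when we are done
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // The topic ("orders" here) is specified on every record we publish
            producer.send(new ProducerRecord<>("orders", "order-123", "{\"status\":\"CREATED\"}"));
        }
    }
}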


In the next post, we will build a Spring Boot API that produces events, and we will see the end-to-end flow of producers, routers and consumers using it. Until then,

Happy coding and keep sharing!!
 

Monday, 27 September 2021

An Overview :- What is Event-Driven Architecture?

Before we jump into EDA, let's understand the standard guidelines for system design, generally known as the Reactive Manifesto, which is a set of community-driven guidelines intended to give a cohesive approach to designing systems.

So, the core of the Reactive Manifesto is to make systems message-driven, more specifically driven by "async" messaging.



We want to make the system async-messaging driven, scalable and resilient, and this helps us build distributed systems, for example on Kubernetes (K8s). Scalable means our hardware should expand as the workload expands; by resilient we mean we don't want any single point of failure, and if one does occur we should be able to handle it elegantly.

Based on the above three foundations, we should be able to build a system that is responsive.   

Now that we have our core set up, let's understand what an event is. In simple words, an event is a statement of fact about something that happened in the past. Let's talk about an example of a retail application.




So, in this application we have a checkout service, and that service wants to talk to other services such as "Inventory", "Shipping" and "Contact".

In the messaging model, if Inventory wants to know what Checkout is doing, Checkout will send a message directly to Inventory to let it know a checkout has happened, and likewise directly to the Shipping and Contact services. These services can also message Checkout directly ("conversational messaging"). Up to this point, the message is sitting on a host machine.

When we design the event-driven application, our event producers might be a web app, a mobile app, etc.; this means event logs are produced by all the producing applications.

Event logs can be used to trigger an action. In an IoT use case, when a device turns on and sends an event, it spins up a pod on the infrastructure; that pod is a Function as a Service (FaaS) sitting on top of serverless infrastructure, and it is torn down when our function finishes.

With event logs we can also optimise and customise data persistence. For example, our Inventory service can consume the stream sent by the web application, modify its local data and produce a new stream onto the event backbone; this new stream then gives the most correct inventory data to any other application in the system.

The other important thing that happens here is that we can save all our data, raw or transformed, in a data lake, which helps data-heavy applications like AI.

Another thing that EDA enables is stream processing, which can be built on top of the Apache Kafka Streams API.

Benefits of Event-Driven Architecture
  • Asynchronous
  • Scalable and Failure Independent
  • Auditing and Point-in-time recovery 

Till now we have seen an overview and the benefits of EDA, which sits on top of the Reactive Manifesto ideas for system design.

In the next post, we will learn more about this in detail with some demo examples. Until then,

Happy coding and keep sharing!!