By - admin

Auto-containerization of a large monolithic application with the help of machine learning

Continuation… In search of a cloud compute strategy:

In the previous two articles, we covered the many advantages of containerization: cleaner boundaries, separation of concerns, clear team ownership, and resource allocation. In my experience, a pod with 2 cores and 8 GB of memory is a common unit of containerization in a microservices architecture. Remember that microservices architecture patterns apply to the entire application, not just to the design of an individual microservice. I intentionally chose not to dive deeper into the "how-to" of Kubernetes for now, as there is already a ton of content on the internet.
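As an illustration of that unit of containerization, the 2-core / 8 GB pod mentioned above can be expressed as Kubernetes resource requests and limits. This is a minimal sketch; the pod name and image are placeholders, not from any real deployment:

```yaml
# Hypothetical pod spec illustrating the 2-core / 8 GB unit discussed above.
apiVersion: v1
kind: Pod
metadata:
  name: analytics-service            # placeholder name
spec:
  containers:
    - name: app
      image: registry.example.com/analytics:latest   # placeholder image
      resources:
        requests:          # what the scheduler reserves for the pod
          cpu: "2"
          memory: "8Gi"
        limits:            # hard ceiling enforced at runtime
          cpu: "2"
          memory: "8Gi"
```

Setting requests equal to limits gives the pod a predictable, isolated slice of the node, which is exactly the resource-isolation property discussed below.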

This series concerns itself with microservices patterns and machine learning, in systems that often consist of many different applications running in a single or hybrid cloud environment and/or on one or more on-prem servers.

For example, suppose you have an intranet application made up of a front end (on any of the popular MEAN-stack frameworks) displaying operational analytics to analysts, a bulky middleware tier running on legacy software, and a back end. End-user request latency is the most critical factor, so the user-facing application should have sufficient resources to be highly responsive. That is why I emphasized resource isolation before thinking about decomposing a monolithic architecture. Keeping the end user in mind before splitting a single-node application into multiple containers has many advantages, the most significant of which is the end-user contract! It is important to dovetail your end users along with you on your path of digital transformation.

The architecture of microservices evolved into systematic design patterns with the rise of distributed computing. Single-node patterns such as Sidecar, Ambassador, and Adapter are very helpful in non-cloud-native application environments. Serving patterns such as replicated load balancers, sharded services, scatter/gather, and lambda or event-driven processing techniques are very useful in a microservices architecture if you are designing from scratch. Batch computational patterns such as work queues and event-driven and coordinated batch processing are widely used in enterprises on the path of massive digitization. However, the selection of a pattern is specific to an enterprise and depends entirely on the end-user contract.

An appropriate API front-end design enables an effective contract-first design approach. Once an API is designed and built, changes are prohibitively expensive and resource intensive. This is where API design tools and techniques such as Swagger, RAML, and OpenAPI come in handy. Think about the API before building it so that you have a proper end-user agreement and the specs are clear. Another beauty of API design is that the design and consumption work can run in parallel. Now let us switch gears a little bit.
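To make the contract-first idea concrete, here is a minimal OpenAPI 3 sketch of what such an agreed-up-front contract might look like. The service title, path, and fields are hypothetical, invented purely for illustration:

```yaml
# Hypothetical OpenAPI 3 contract for an operational-analytics endpoint.
openapi: "3.0.3"
info:
  title: Operational Analytics API   # placeholder title
  version: "1.0.0"
paths:
  /reports/{reportId}:
    get:
      summary: Fetch a single operational report
      parameters:
        - name: reportId
          in: path
          required: true
          schema:
            type: string
      responses:
        "200":
          description: The requested report
          content:
            application/json:
              schema:
                type: object
                properties:
                  reportId: { type: string }
                  generatedAt: { type: string, format: date-time }
```

With a contract like this agreed up front, front-end consumers can build against mocked responses while the back-end team implements the service, which is what lets design and consumption proceed in parallel.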

The fun of unsupervised learning…

…is that in most cases we do not have the kind of input-output mapping that supervised learning relies on. Unsupervised learning often involves performing loosely specified tasks by understanding the information carried in the data itself. Every data science and ML course out there cites one well-known example: identifying the domain of a technical whitepaper or patent, or classifying news topics, where we have many texts or documents without any labels attached to them. All we have to do is extract the thematic topics hidden inside the texts; this methodology is called topic modelling. Other examples are recommendation engines on Netflix or Amazon Prime, coupon recommendations for online shoppers, and click-through prediction with tree-based models or logistic regression. Those who have worked in this field know well how these ML techniques work: we expose all the necessary data and attributes to an algorithm and iterate over multiple folds until reasonable performance and accuracy are obtained. If time permits, I will definitely dig deeper in a future article into unsupervised learning techniques such as clustering and non-negative matrix factorization.
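The core idea of topic modelling, grouping unlabeled documents by theme, can be sketched in a few lines. Real systems would use a library such as scikit-learn (e.g. NMF or LDA); this pure-stdlib toy, with invented documents and a hand-picked similarity threshold, only shows how structure emerges without any labels:

```python
# Toy unsupervised topic grouping: cluster unlabeled documents by
# TF-IDF cosine similarity. Threshold and documents are illustrative.
import math
from collections import Counter

def tfidf_vectors(docs):
    """Build a TF-IDF vector (dict term -> weight) per document."""
    tokenized = [doc.lower().split() for doc in docs]
    df = Counter(term for toks in tokenized for term in set(toks))
    n = len(docs)
    vectors = []
    for toks in tokenized:
        tf = Counter(toks)
        vectors.append({t: (c / len(toks)) * math.log((1 + n) / (1 + df[t]))
                        for t, c in tf.items()})
    return vectors

def cosine(a, b):
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def group_by_topic(docs, threshold=0.2):
    """Greedily assign each document to the first group it resembles."""
    vecs = tfidf_vectors(docs)
    groups = []  # list of lists of document indices
    for i, v in enumerate(vecs):
        for g in groups:
            if cosine(v, vecs[g[0]]) >= threshold:
                g.append(i)
                break
        else:
            groups.append([i])
    return groups
```

Given a mixed pile of infrastructure-themed and recommendation-themed snippets, the function separates them into two groups with no labels ever provided, which is the essence of the topic-modelling workflow described above.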

For now, let us concern ourselves with the auto-containerization of a large monolithic application into microservices and discuss the ML techniques involved. Auto-containerization can yield better results than manually designing microservices out of a monolithic application. The thought process here is based on the concept of flight-cockpit analysis combined with machine learning techniques. Suppose, for example, an enterprise has a large monolithic application spanning both extranets and intranets with thousands of users; manually decomposing it into an autonomous, self-reliant microservices architecture can be a daunting task. This is where a novel machine learning approach can be less expensive and can yield a faster turnaround time without impacting customer response time.
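One plausible ML step in such an approach is clustering: represent each module of the monolith as a feature vector and let an algorithm group modules that behave alike into candidate containers. The sketch below is a hedged illustration, not the article's actual method; the module names, the two features (calls per second, average memory in GB), and the plain k-means with deterministic initialization are all assumptions made for the example:

```python
# Hedged sketch: cluster monolith modules by simple runtime features so
# that similar modules become candidates for the same container.
import math

def kmeans(points, k, iters=20):
    """Plain k-means with deterministic initialization (first k points)."""
    centers = [list(p) for p in points[:k]]
    assign = [0] * len(points)
    for _ in range(iters):
        # Assign each point to its nearest center.
        for i, p in enumerate(points):
            assign[i] = min(range(k),
                            key=lambda c: math.dist(p, centers[c]))
        # Recompute each center as the mean of its members.
        for c in range(k):
            members = [points[i] for i in range(len(points)) if assign[i] == c]
            if members:
                centers[c] = [sum(col) / len(members) for col in zip(*members)]
    return assign

# Invented feature vectors: (calls per second, avg memory in GB).
modules = {
    "login":     (900.0, 0.5),
    "sessions":  (850.0, 0.6),
    "reporting": (20.0, 6.0),
    "exports":   (15.0, 7.0),
}
labels = kmeans(list(modules.values()), k=2)
```

Here the two chatty, lightweight modules land in one cluster and the two heavy batch-style modules in another. In practice, the feature set would include call-graph coupling, data-access patterns, and latency profiles rather than these toy numbers, and the resulting clusters would only be candidate container boundaries for architects to review.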
