Reasons to stick with a monolithic architecture for web applications

The worldwide microservices hype is gradually fading away. More and more developers prefer to evaluate the efficiency of moving to microservices first. We have already seen reverse migrations of rather large applications back to a monolithic architecture.

There are even more examples of extremely successful applications that are still built and operated as monoliths.


When moving from one big application to a number of smaller ones, you are going to face several fresh challenges. Let us briefly name them.

Installation time: forget about being up and running with the first version in an hour

Launching a database with a certain basic structure and a background worker process has always been easy and simple. A README on GitHub was enough: a couple of hours later everything works and you can start on your new project. Adding and launching the code for the initial environment used to be done on the first day.
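For the sake of contrast, here is a minimal sketch of what that first day can look like for a monolith (Flask and SQLite chosen purely for illustration): one file, one process, one database.

```python
# app.py - a hypothetical single-file monolith: one process, one database.
import sqlite3
from flask import Flask

app = Flask(__name__)

def init_db():
    # Create the basic schema on first launch.
    with sqlite3.connect("app.db") as conn:
        conn.execute(
            "CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT)"
        )

@app.route("/users/<int:user_id>")
def get_user(user_id):
    with sqlite3.connect("app.db") as conn:
        row = conn.execute(
            "SELECT name FROM users WHERE id = ?", (user_id,)
        ).fetchone()
    return {"id": user_id, "name": row[0]} if row else ({"error": "not found"}, 404)

if __name__ == "__main__":
    init_db()
    app.run()  # one command, and the whole application is up
```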

With a microservices architecture, the time needed for the initial setup and roll-out becomes immeasurably longer and may kill all of the developer's starting enthusiasm and will. There is Docker, of course, with beautiful orchestrators that let you utilize any cluster effectively. But to a rookie programmer or a young development team it all looks substantially more complicated. One way or another, to use, say, Kubernetes effectively, the team has to have a suitably skilled professional, who costs a fortune.

Complicated debugging and error fixing

In a monolithic architecture, errors are easy to track and debug. Debugging methods have matured over decades.

Now you have several service instances talking to each other via messages, putting them on a message bus in a truly asynchronous manner, where the result of processing one message may affect the behaviour of other processes. If an error pops up somewhere in the middle of processing, you have to bring all the services together, only to realize that dependent services are running different versions. You will be forced to use an interactive debugger to trace the whole process step by step. Debugging and understanding the system has become much more complicated by definition.
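There is no full substitute for that lost simplicity, but one common mitigation is to thread a correlation ID through every message, so log lines from different services can at least be stitched back into one story. A minimal sketch, with an in-memory queue standing in for the real message bus:

```python
# Sketch: tracing a request across asynchronous services via a correlation ID.
# queue.Queue stands in here for a real message bus (RabbitMQ, Kafka, etc.).
import logging
import queue
import uuid

logging.basicConfig(level=logging.INFO, format="%(message)s")
bus = queue.Queue()

def order_service(payload):
    # Attach a correlation ID at the edge, so every later hop can be matched.
    msg = {"correlation_id": str(uuid.uuid4()), "payload": payload}
    logging.info("order_service   %s accepted order", msg["correlation_id"])
    bus.put(msg)

def billing_service():
    msg = bus.get()
    # Without the correlation ID, this log line could not be linked back
    # to the order that produced it.
    logging.info("billing_service %s charging card", msg["correlation_id"])

order_service({"item": "book"})
billing_service()
```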

Testing instead of debugging

Continuous integration and continuous delivery are becoming standard practice nowadays. Most new applications we see now automatically build and run tests with each new release, and require the tests to pass and the changes to be reviewed before they are merged. The introduction of such processes has been a great improvement for many serious developers. Nobody would stop using them today.
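As an illustration, here is the kind of check such a pipeline runs on every change (a hypothetical pricing function, with pytest assumed as the test runner):

```python
# test_pricing.py - the sort of unit test CI executes on every push.
def apply_discount(price, percent):
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_apply_discount():
    assert apply_discount(100.0, 20) == 80.0

def test_rejects_invalid_percent():
    import pytest
    with pytest.raises(ValueError):
        apply_discount(100.0, 150)
```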

But in order to test a service after changes, you have to install a fully operational version of the application. And what do you do if the application is already distributed across numerous cluster nodes and consists of hundreds of separate microservices?

As a result, developers have to teach their own CI environment to install and run that whole monster simply to check whether it works at all. That seems like too much effort and trouble, because it should be possible to test each component in isolation, provided we are confident in the quality of our specifications: APIs are clean, and a service failure is isolated and will not affect other components.
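Under that assumption, isolated testing can look like the following sketch: the code under test talks to a hand-written stub that honours the agreed contract, so no cluster needs to be deployed at all (all names here are hypothetical):

```python
# Sketch: testing one service in isolation by stubbing its dependency.
class InventoryClient:
    """The real implementation would call the inventory microservice over HTTP."""
    def stock_level(self, sku: str) -> int:
        raise NotImplementedError

class StubInventory(InventoryClient):
    # Test double: no network, no cluster, just the agreed-upon contract.
    def __init__(self, levels):
        self.levels = levels
    def stock_level(self, sku: str) -> int:
        return self.levels.get(sku, 0)

def can_fulfil(order_sku: str, qty: int, inventory: InventoryClient) -> bool:
    return inventory.stock_level(order_sku) >= qty

def test_can_fulfil_without_deploying_anything():
    inventory = StubInventory({"book": 3})
    assert can_fulfil("book", 2, inventory)
    assert not can_fulfil("book", 5, inventory)
```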

You need a very good reason to get into all of this

There are many reasons in favour of microservices:

  • greater flexibility;
  • team scaling;
  • efficiency;
  • fault tolerance, etc.

The HostingHub.eu team offers its clients support in rolling out and maintaining Docker Swarm and Kubernetes clusters. Frankly speaking, we prefer Swarm.

The listed reasons cannot always outweigh the advantages of traditional tools and approaches, which are still evolving and have matured in the industry over decades. At the end of the day, most attempts come down to simple database scaling. But the leading vendors are not idle and offer their own solutions for scaling.
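For instance, a typical first step in such scaling is read/write splitting: writes go to the primary while reads are spread across replicas. A minimal sketch, assuming DB-API-style connection objects (all names hypothetical):

```python
# Sketch: naive read/write splitting over a primary and read replicas.
import itertools

class RoutingDatabase:
    def __init__(self, primary, replicas):
        # primary and replicas are assumed to be DB-API connections
        # (anything with an .execute(sql, params) method).
        self.primary = primary
        self._replicas = itertools.cycle(replicas)  # round-robin the reads

    def execute(self, sql, params=()):
        # Writes must hit the primary; reads that tolerate slight staleness
        # can be served by any replica.
        target = self.primary if self._is_write(sql) else next(self._replicas)
        return target.execute(sql, params)

    @staticmethod
    def _is_write(sql):
        return sql.lstrip().split(None, 1)[0].upper() in {"INSERT", "UPDATE", "DELETE"}
```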

In turn, the HostingHub.eu team possesses extensive experience and all the necessary competences and skills in adopting a scaling and fault-tolerance strategy based on traditional virtualization technology.


In the end, it is up to you to decide which way to go.