Design goals

Small modules: KISS principle

"Keep it simple, stupid!"

This old saying is why we push for small modules. Small modules...

  • are clear
  • have a small API
  • can be quickly overviewed
  • have low maintenance requirements
  • can be easily replaced
  • last forever
  • are inexpensive

Non-aging software

We all know software that has grown until it could no longer be maintained or rebuilt, at least not at a manageable cost.

Many companies maintain, alongside the team that develops their production software, one or even more teams building replacements. The larger a monolith becomes, the smaller the chance of fundamental change.

This is different for small modules that are only loosely coupled. They can easily be rebuilt (relinked) or replaced. Because this is cheap, necessary work is carried out promptly.

The software does not age, but remains up-to-date.

Easy maintenance and development

Small modules not only lead to clear code; they also make the software reflect areas of responsibility.

Teams can maintain their software completely on their own, without having to touch other areas. The software can be maintained, renewed and deployed independently by the team.

Teams and software can be cut so that the entire team knows its software. Vacation and sickness lose their sting because effective substitutes are available. Junior staff can be onboarded easily. There is always someone who can maintain the software.

The application is only the first connected microservice

The ideal application for this system is just an entry point for the software. It provides only the frame; all further content is moved out into independent modules. Even what is displayed in the main window is already provided by services running on other nodes.

The services do not communicate with the UI via the application but directly; the architecture intends this. UI inputs likewise go straight to the services, not to the application.

Even applications not built for this system can benefit from the principle: by including the kernel, an application becomes a node and can start outsourcing services.

Inclusion of services outside the cloud

The system has a geospatial dimension. It can be formed into larger networks, with nodes in each office behind their firewalls, connected via other nodes hosted in the cloud.

Databases and machines that are not located in the cloud can be connected without any problems. Services in their spatial neighborhood can prepare entities locally, since the user interface usually needs only strings.

Services can be hosted close to the teams that created them, where those teams can monitor and update them. Other services are hosted where demand is high, to keep response times down. In a mesh network you stay flexible as long as services can be freely distributed.

Very fast communication

Fast communication is a prerequisite for cutting software into small pieces. Unlike REST services, services in this system are not only addressed by their clients; they also send messages to other services in order to perform their tasks, and at a frequency that would not be feasible over HTTP.

Therefore, we rely on our own protocol and open TCP channels.
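To illustrate why an open channel is cheaper than per-request connections, here is a minimal loopback sketch in plain Java sockets. It is only an illustration of channel reuse, not the actual protocol: one connection stays open and carries many messages, avoiding the connection setup that plain per-request HTTP would repeat each time.

```java
import java.io.*;
import java.net.*;

// Illustrative only: a single TCP connection carrying many messages.
class EchoChannel {
    static String[] sendAll(String[] messages) throws Exception {
        try (ServerSocket server = new ServerSocket(0)) {
            // Toy "service": acknowledges every line it receives.
            Thread t = new Thread(() -> {
                try (Socket s = server.accept();
                     BufferedReader in = new BufferedReader(
                             new InputStreamReader(s.getInputStream()));
                     PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
                    String line;
                    while ((line = in.readLine()) != null) out.println("ack:" + line);
                } catch (IOException ignored) { }
            });
            t.start();

            String[] replies = new String[messages.length];
            // One connection is opened once and then reused for every message.
            try (Socket s = new Socket("localhost", server.getLocalPort());
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(s.getInputStream()));
                 PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
                for (int i = 0; i < messages.length; i++) {
                    out.println(messages[i]);   // many messages, one channel
                    replies[i] = in.readLine();
                }
            }
            t.join();
            return replies;
        }
    }
}
```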

Although the system relies on spatially distributed node networks, an architect can keep performance high by designing the routes between nodes cleverly.

Everything asynchronous

The system relies on asynchronous sending and handling of messages. Messages are often not answered directly; frequently the recipient of a message generates further messages. Active waiting would be counterproductive and a waste of resources.

Since message handling is so common, the code looks similar everywhere: many small message handlers that are easy to keep track of. This keeps development simple and the system quick to learn.
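Such a handler-per-message-type structure can be sketched in a few lines of Java. The names below are illustrative, not the product's actual API: handlers are registered per message type and invoked without the sender ever waiting for a reply.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Consumer;

// Sketch of a message dispatcher: one small handler per message type.
class Dispatcher {
    private final Map<String, Consumer<String>> handlers = new HashMap<>();

    void register(String type, Consumer<String> handler) {
        handlers.put(type, handler);
    }

    void dispatch(String type, String payload) {
        Consumer<String> h = handlers.get(type);
        if (h != null) {
            // In a real node this would be queued onto a worker pool;
            // the sender never blocks waiting for an answer.
            h.accept(payload);
        }
        // Messages without a registered handler are simply dropped here.
    }
}
```

Because every handler has the same shape, a new team member only needs to learn this one pattern.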

Another advantage of the asynchronous character is the full use of resources. Multi-core processors can be fully utilized. The use of nodes eliminates the need for expensive large computers. A handful of small inexpensive servers achieve the same performance.

Hide everything possible

To keep code maintainable, it is important that code communicates only through dedicated, well-documented interfaces. The implementation, on the other hand, should be invisible, so that no connections bypass the intended interfaces.

The use of messages to connect small modules fits this paradigm perfectly. Messages are not passed around as code; instead, their descriptions are shared in JSON or XML, from which the client's programmers generate code. There is no need to exchange interfaces and classes. This also prepares the system for programming languages other than JAVA, because a generator can easily be built for each language.
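As an invented illustration of the idea, a message description shared between teams might look like the following JSON; the field names here are hypothetical, not taken from the actual product:

```json
{
  "message": "OrderPlaced",
  "version": 2,
  "fields": [
    { "name": "orderId",  "type": "string", "required": true },
    { "name": "quantity", "type": "int",    "required": false }
  ]
}
```

A generator for each target language turns such a description into message classes, so no JAVA interfaces or classes ever need to be exchanged between teams.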

Another highlight is the service registry. Services in the kernel are based on interfaces, which are necessary for low-level tasks. The service registry and the package starters ensure that only one file in a JAVA package is public (visible).
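The visibility rule can be illustrated with plain Java. In this sketch (hypothetical names, not the product's API), only the interface is public; the implementing class is package-private, so code outside the package can reach it solely through a registry that hands out the interface.

```java
// Only this interface is public (visible outside the package).
public interface GreeterService {
    String greet(String name);
}

// Package-private: no modifier, so invisible outside the package.
// A service registry would construct this and hand out the interface.
class GreeterServiceImpl implements GreeterService {
    @Override
    public String greet(String name) {
        return "Hello, " + name;
    }
}
```

Callers compile only against `GreeterService`; the implementation can change freely without touching them.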

Messages instead of interfaces as APIs

Messages have another advantage when used as an API. They are much more resilient than the interfaces often used in JAVA programming.

Interfaces must be distributed, i.e. the client of one service receives code from another service for direct use. If even a small detail of the interface changes, not only must the service be recompiled; the service user must always be on the same version as well.

We have seen large companies with large projects use large interface-based APIs and despair of them. Even minor changes required a new deployment on both sides, leading to an expensive and lengthy coordination process. Downgrading in case of errors was almost impossible.

Messages keep working even if parameters are missing, new ones are added, or versions differ. After major changes, new features may not yet be callable, but the basic functionality continues to work. Raising the version by introducing new details can be done unilaterally; dependent users of the interface can add support for these details later. The software remains executable at all times.
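This tolerance can be sketched in Java by reading a message defensively: missing fields fall back to defaults, and unknown extra fields are simply ignored. The field names are invented for the example.

```java
import java.util.Map;

// Sketch of a handler that survives version skew between sender and receiver.
class OrderMessage {
    static String describe(Map<String, String> msg) {
        String item = msg.getOrDefault("item", "unknown");
        String qty  = msg.getOrDefault("quantity", "1"); // default for old senders
        // Extra fields added by newer senders are present in msg
        // but never touched here, so they cannot break this handler.
        return item + " x" + qty;
    }
}
```

An old sender that omits `quantity` and a new sender that adds a `color` field both keep working against the same handler.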

Factory services

Stateful services need factories that create service instances. Instances are bound to a session and deleted when the session ends. Often the lifetime is even shorter: a dialog service is only active as long as the dialog is open.

Instances are usually terminated by their owner. But what happens to instances whose owner never gets in touch again? The system tracks all nodes; when a node becomes unreachable, all other nodes are notified. Each target can use timers after which it is deleted, activity trackers that destroy it when necessary, or ping support that does the same. There is also support for automatically tracking lists of target addresses when multiple targets need to be monitored.
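A minimal Java sketch of such a factory, with invented names and without the timer, ping, or activity-tracking machinery, might look like this: one instance per session, removed when the owner closes the session.

```java
import java.util.HashMap;
import java.util.Map;

// Toy stateful service: one instance per open dialog/session.
class DialogService {
    final String sessionId;
    DialogService(String sessionId) { this.sessionId = sessionId; }
}

// Sketch of a factory service managing session-bound instances.
class DialogFactory {
    private final Map<String, DialogService> instances = new HashMap<>();

    // Same session id always yields the same instance.
    DialogService open(String sessionId) {
        return instances.computeIfAbsent(sessionId, DialogService::new);
    }

    // Normally called by the owner; a real node would also invoke this
    // from timers or node-unreachable notifications.
    void close(String sessionId) {
        instances.remove(sessionId);
    }

    int activeCount() {
        return instances.size();
    }
}
```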

Everything is scalable

Of course, a distributed environment primarily serves scaling. As the load increases, nodes can be added, and the spatial extent of the network can grow.

When more capacity is needed, however, the application is not simply installed on another computer. Only the services that are actually needed are installed on additional nodes. A rarely used dialog probably does not need to be scaled at all, while another service may be installed on dozens of nodes. Automatic load balancing then ensures that the services are evenly utilized.

Since we ideally use small modules, hundreds of different services can be installed on a single node, and the distribution can be changed at any time without adjusting the program. It is therefore sufficient to start with a few nodes and adapt the network as needed. The distribution is completely decoupled from the programming.

- Innovative Distributed System