Enterprise Software Deployment: A Docker vs Package Manager Comparison

Marco Cello
5 min read · Nov 27, 2020
Genova Port, by Marco Cello, 2019.

This article was also previously published on LinkedIn.

In large enterprises, the adoption of Docker containers to distribute enterprise software is slow, or has not yet started, because traditional IT practices and technologies are still in place. This is a missed opportunity: Docker containers are a convenient and viable way to distribute, deploy and maintain software, and non-adoption can slow down the entire innovation cycle of the enterprise.

We embarked on our Docker journey a long time ago. Drawing on the experience we have gained, I would like to show the benefits and value that Docker containers bring, compared to traditional software packaging methods such as RPMs and DEBs, when you need to deploy and maintain enterprise software. This should help enterprise IT make better decisions about software deployment.

What’s the difference between Packages and Containers?

Packages are bundles of files that are installed by a package manager such as RPM in RHEL or APT in Debian GNU/Linux, which checks to make sure that multiple packages use compatible libraries, do not use the same filenames, etc., before writing the files into one shared filesystem.

Containers, or OS-level virtualization, on the other hand, refer to an operating system paradigm in which the kernel allows for the existence of multiple isolated user-space instances. In this way a program or group of programs can run isolated from the rest of the system: they can only access their own files, with a separate root directory and mount table, network interfaces, etc.

Docker is a container manager; it has brought a lot of innovation and convenience to the container world:

  • Docker allows you to create Docker images, which are basically bundles of files that form the initial state of a container. Each container thus includes all the dependencies needed to run it (e.g., binaries, libraries). You can also search for and distribute Docker images through so-called Docker registries, where developers and sysadmins publish the images they have created, so that other users have plentiful repositories of heterogeneous, ready-to-use images at their disposal. You can use your package manager to create Docker images.
  • Docker provides commands to start, stop and restart containers from specified Docker images. You can also connect containers to each other over networks, configure their networking options, connect them to third-party services, mount external directories and files, …
  • Docker provides monitoring tools that allow IT operations to observe the behaviour and resource consumption of all the running Docker containers. Moreover, other Docker containers, explicitly created for monitoring, can be spun up alongside the Docker containers serving the enterprise software. (A minimal sketch after this list shows these operations in code.)
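
To make these points concrete, here is a minimal sketch using the Docker SDK for Python (the docker package, installable with pip install docker). The image tag demo-app:1.0 and the Dockerfile content are hypothetical, and the sketch assumes a Docker engine is running locally:

    import io
    import docker

    # Connect to the local Docker engine (honours DOCKER_HOST if set).
    client = docker.from_env()

    # Build an image from a Dockerfile; note how the base image's own
    # package manager (apt) installs the dependencies baked into the image.
    dockerfile = (
        b"FROM debian:bullseye-slim\n"
        b"RUN apt-get update && apt-get install -y curl && rm -rf /var/lib/apt/lists/*\n"
        b'CMD ["sleep", "3600"]\n'
    )
    image, _ = client.images.build(fileobj=io.BytesIO(dockerfile), tag="demo-app:1.0")

    # Lifecycle: start a container from the image...
    container = client.containers.run("demo-app:1.0", detach=True, name="demo-app")

    # Monitoring: one-shot snapshot of the container's resource usage.
    stats = container.stats(stream=False)
    print(stats["memory_stats"].get("usage"))

    # ...and stop and remove it when done.
    container.stop()
    container.remove()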

Docker as a way to deploy and maintain Enterprise Software

Enterprise software typically refers to software that provides mission-critical solutions to the entire organization, or the majority of it. It has much tighter requirements than consumer software in terms of availability, monitoring and performance.

So, the question is: in which aspects does Docker outperform a package manager?

Dependency Hell

If your software is provided as a regular package, you will have to face the problem of dependencies. In fact, it is highly likely that the enterprise systems (e.g. VMs, bare-metal servers) where you deploy the software will have different library versions from those required by your software. You have two choices: a) ask the software vendor to change their software to align it with the library versions used on your system; or b) prepare a subset of your servers, equipped with the required libraries, dedicated to that software.

Neither choice is ideal: both slow down deployment, and the second one dedicates servers to that single piece of software, forcing a hard decision about which servers will run which software. This renders all your enterprise software “machine-dependent”, which in turn makes it hard to migrate the software for scaling, redundancy and so on.

If the software is provided as Docker container(s), on the other hand, you’ll just need to install the Docker engine (the runtime that actually executes containers) on the systems you want and spin up the containers. There’s no need for the same kind of coordination as with packages, thanks to the sandboxing: the libraries installed on your system can coexist with the libraries installed in the Docker images. As the IT manager, you’ll have to keep the server-wide libraries of your systems updated, while the vendor is responsible for the libraries shipped inside the Docker container.
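
A short sketch of that coexistence, under the same assumptions as before (Docker SDK for Python, local engine; the image tags are just examples): two containers shipping different versions of the same runtime run side by side on one host, something a single shared filesystem managed by RPM or APT cannot easily offer.

    import docker

    client = docker.from_env()

    # Two versions of the same runtime, each with its own bundled libraries,
    # coexist on the same host without any package-level coordination.
    for tag in ("python:3.8-slim", "python:3.11-slim"):
        output = client.containers.run(tag, ["python", "--version"], remove=True)
        print(tag, "->", output.decode().strip())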

Portability with Simple Configuration

As I mentioned previously, packages are not really portable, especially if your systems are heterogeneous in terms of libraries and OSs. This can become problematic and create overload for the IT operations team if you want to migrate to a more powerful server or change or update the OSs.

Docker containers, on the other hand, are far more portable. You can run the software on practically any Linux OS, and any private or public cloud can support them (e.g. OpenStack, AWS, Azure, GCP, …). Docker containers also enable you to adopt more advanced container orchestrators, such as Kubernetes or OpenShift.
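
As a hedged illustration of that portability, the sketch below points the same client code at two different engines; the remote endpoint is a placeholder for whichever bare-metal server, VM or cloud instance happens to run a Docker engine.

    import docker

    # The same image runs unchanged wherever a Docker engine is available.
    local = docker.from_env()

    # Placeholder endpoint: any reachable engine (on-premises, OpenStack,
    # AWS, Azure, GCP, ...) exposed over TCP would work the same way.
    remote = docker.DockerClient(base_url="tcp://10.0.0.5:2375")

    for engine in (local, remote):
        print(engine.containers.run("alpine:3.18", ["uname", "-a"], remove=True))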

Multi App Capabilities with Resource Isolation

Even though the comparison is not technically accurate, from a logical point of view containers can be viewed as very light and fast VMs. For that reason, each piece of software running in a Docker container “thinks” it is the only software running on the system, and any other container is viewed as an external server. With this in mind, it is a natural next step to create an architecture where servers host different Docker containers, each running a different application, with the great advantage that each Docker container is isolated. Docker, in fact, lets you reserve CPU and memory for each container in the system, so that containers don’t consume each other’s resources. This scenario takes us towards real multi-app capability.
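
A minimal sketch of such a reservation, again with the Docker SDK for Python (the image name and the limits are illustrative):

    import docker

    client = docker.from_env()

    # Cap this container at half a CPU core and 512 MB of RAM so that
    # neighbouring containers on the same server are never starved by it.
    container = client.containers.run(
        "demo-app:1.0",          # hypothetical image from the earlier sketch
        detach=True,
        nano_cpus=500_000_000,   # 0.5 CPU, in units of 1e-9 CPUs
        mem_limit="512m",
    )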

Docker in Rulex

At Rulex we move fast, constantly shipping new features and improvements to the product and the platform, and we want our clients to experience the same speed and quality. For that reason, years ago, when we started our Docker journey, we decided to make Docker containers the primary way to ship, deploy and maintain our software in enterprise scenarios.

Over these past years we have grown our technical and business skills around Docker: we are now able to ship and maintain our software in practically any Docker scenario, bringing visibility and value to our company.

If you want to start your own Docker journey or would like more information, contact us by sending me a message on my LinkedIn page.

Further Readings

https://www.cio.com/playlist/the-cloud-innovators/collection/cloud-management/article/episode-5:-accelerate-your-cloud-journey-with-containers

https://docs.docker.com/get-started/

Container-based Operating System Virtualization: A Scalable, High-performance Alternative to Hypervisors

Cloud Container Technologies: a State-of-the-Art Review
