I am studying the use of Docker in a large-scale project that is currently deployed in production. I have never used Docker before, but from what I have read, it revolves around a component called a "container engine" that lets you deploy many applications that are independent of each other while sharing the host's resources.
In the case I am working on, the machines where our app is deployed can have different OSes and architectures (Windows, Linux/Debian, ARM, etc.), but they don't run any VMs; just the OS and the applications we deploy.
These machines can have 4-5 applications running on the same system, each with different dependencies. We have already had problems with that: for example with file descriptors, where one app took over the log writing of another app, generating erroneous logs and crashing.
These apps communicate with other parts of the machines via TCP/IP sockets and use gRPC, QPID and SFTP to talk to other elements of the environment (external servers, our own libraries, etc.). **I don't know whether the use of these protocols would complicate adopting Docker in our system.**
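My (possibly naive) understanding is that these protocols all run over ordinary TCP (and SSH in the case of SFTP), so a containerised app would mainly need its ports published to the host. A minimal sketch of what I am picturing; the service names and ports below are made up, not our real setup:

```yaml
# docker-compose.yml -- hypothetical sketch, names and ports are placeholders
services:
  app-a:
    image: app-a:latest
    ports:
      - "50051:50051"   # gRPC endpoint exposed on the host
      - "5672:5672"     # AMQP port used by QPID
  app-b:
    image: app-b:latest
    ports:
      - "2222:22"       # SFTP (SSH) mapped to a non-default host port
```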
Talking with my workmates, they told me it is not worth it, as it would not bring any optimisation or benefit, but I don't agree.
I've been reading that by using containers we get OS independence (the app runs on different systems from the same Docker image), library independence, and therefore isolation between apps.
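That is why I was picturing one image per app, each carrying its own libraries so versions cannot clash between the 4-5 apps on a machine. Again a hypothetical sketch, with placeholder package and path names:

```dockerfile
# Dockerfile -- hypothetical sketch for one of the apps
FROM debian:bookworm-slim

# Each image carries only this app's own dependencies, isolated from the other apps
RUN apt-get update && apt-get install -y --no-install-recommends \
        ca-certificates \
    && rm -rf /var/lib/apt/lists/*

# The app binary and its config are baked into the image
COPY ./app /opt/app/
WORKDIR /opt/app

# Logging to stdout keeps each container's log stream separate from the others
CMD ["./app"]
```

Is this the kind of isolation containers actually give, or am I overestimating the benefit for a setup like ours?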
Asked by ShadowFurtive
(13 rep)
Jul 4, 2024, 09:00 AM
Last activity: Jul 4, 2024, 09:28 AM