What is the purpose of running PHP-FPM in its own container instead of in the same container as apache2/nginx?

I’m fairly new to Docker so excuse any obvious ignorance or misunderstandings. That said, I’ve been coding and configuring web applications for a long time now. I have recently been dabbling with some more sophisticated (Docker and “traditional”) web/application server setups and experimenting with performance enhancements and simplifying deployments.

My personal favorite configuration for most projects thus far is nginx as a (mostly) “static file” web server, caching mechanism, and/or load balancer in a reverse proxy configuration, with Apache v2.4+ as the “backend” running PHP (and also Perl and/or Python) via FastCGI (PHP-FPM, with mod_php disabled) and mpm_event (instead of mpm_prefork). We can always add something else as well, such as Redis or memcached, where applicable (but I rarely see a significant performance gain when using Cloudflare caching).
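
For reference, a minimal sketch of that reverse-proxy arrangement in nginx configuration terms; the backend hostname, port, and paths are placeholder assumptions:

    # nginx serves static files directly and proxies everything else
    # to the Apache backend running PHP-FPM
    server {
        listen 80;
        root /var/www/html;

        location ~* \.(css|js|png|jpg|gif|ico)$ {
            expires 7d;    # simple static-asset caching
        }

        location / {
            proxy_pass http://apache-backend:8080;
            proxy_set_header Host $host;
        }
    }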

I’ve been dabbling with a few different ways to keep my “favorite web server composition” flexible and “ephemeral” enough to accommodate all possible options for further performance gains or resource load management. What I keep seeing is that there seems to have been a shift towards running PHP-FPM in its own separate “stand-alone” Docker container, sometime around late 2019.

Why?

While I can appreciate keeping resources isolated and separate for configurations that are easier to debug, configure, and secure, I don’t fully understand the benefits of having PHP-FPM in a separate Docker container that is implicitly required by the application those containers compose.

If anything, having a separate PHP-FPM container seems like additional configuration, additional resource consumption, and an even more complicated build/deploy consideration in a CI/CD pipeline.

I can even get on board with “simple preference”: instead of running Apache2 and nginx on the same Ubuntu/Debian or RHEL/CentOS container, break nginx off into its own container(s) and simply configure your hostname(s) and/or IPs and ports appropriately.

But what is a practical use case, and what are the advantages, of separating PHP-FPM from the Apache or nginx containers, beyond perhaps using more Dockerfile and/or docker-compose.yaml and less bash scripting to get the same job done? If any part of the application fails or falters, functionality will suffer. The additional network and/or Docker-network communication and hostname resolution seems like it would just be a performance penalty compared to combining certain aspects of the docker-compose-d application, rather than splitting them up simply for the sake of splitting them up.
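
For reference, a minimal sketch of the kind of split setup in question, assuming a docker-compose file with illustrative service names, image tags, and paths:

    version: "3.8"
    services:
      nginx:
        image: nginx:stable
        ports:
          - "80:80"
        volumes:
          - ./app:/var/www/html:ro
          # default.conf would contain: fastcgi_pass php:9000;
          - ./default.conf:/etc/nginx/conf.d/default.conf:ro
        depends_on:
          - php
      php:
        image: php:7.4-fpm
        volumes:
          - ./app:/var/www/html

Here nginx hands PHP requests to the php service over the compose network on port 9000, rather than over a local socket inside a shared container.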

Go to Source
Author: kanidrive

How to resize a Docker volume after creation?

I would like to limit my Docker volumes in size, so that one container can’t crash my entire environment by depleting storage space with excessive writes.

I found that it is possible to limit a volume’s size at creation time using --opt o=size=100m.
However, is it possible to change the size limit after I have created the volume?
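
For reference, the creation-time command looks roughly like this; a minimal sketch assuming the local driver with a tmpfs mount, one of the setups where the size option is honored (the volume name is illustrative, and note that a tmpfs-backed volume is memory-backed rather than persistent):

    docker volume create --driver local \
        --opt type=tmpfs --opt device=tmpfs --opt o=size=100m \
        my_limited_volume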

Go to Source
Author: GKman

Deployment of Docker containers on ECS (TCP server, Java)

The given problem:

  • Created a Java TCP server with Eclipse Vert.x (a toolkit for
    building reactive, event-driven applications on the JVM). The
    application has two TCP servers, running on the following ports:
    server1 (port 5050); server2 (port 5051).

  • I created a Docker image and ran the application inside a
    container; everything works fine locally.

  • I want to deploy the Docker container on ECS (Amazon Elastic
    Container Service) and use a Network Load Balancer. The question is:
    how can I assign different ports to different containers if I set
    the port explicitly in the Java code?

// TCP server 1 listens on port 5050
server.listen(5050);
  • And how does ECS manage which ports to assign to which container?

    The TCP client always targets one IP address and one port number;
    this cannot be changed.

    Thanks!
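
For context, a minimal, self-contained Vert.x sketch of the hard-coded two-server setup described above; the echo handlers, class name, and variable names are illustrative placeholders:

    import io.vertx.core.Vertx;
    import io.vertx.core.net.NetServer;

    public class TcpServers {
      public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();

        // server1: port 5050, fixed in code (placeholder echo handler)
        NetServer server1 = vertx.createNetServer();
        server1.connectHandler(socket -> socket.handler(socket::write));
        server1.listen(5050);

        // server2: port 5051, also fixed in code
        NetServer server2 = vertx.createNetServer();
        server2.connectHandler(socket -> socket.handler(socket::write));
        server2.listen(5051);
      }
    }

Because both ports are fixed at compile time, whatever the ECS task definition maps has to line up with these exact values.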

Go to Source
Author: user581026

What is the best way to deploy a short-lived process when you have no machine?

I’m new to DevOps.

I created a converter and want to deploy it. The converter converts a 3D model from one format to another; you can then visualize the output file on the platform and download it if you want to.

When benchmarking the process, I found that for now it runs for up to a minute when the files are really big. I was using Azure and just moved to AWS this week. For now the converter converts one file at a time and uses Blender’s Python library and a C++ library (when trying to use Docker, I built these inside the container).

I started by creating a Docker container that reads a heavy blob and then outputs the converted file, but figured out that Docker is not designed to read local files.
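
For reference, a container can be given access to host files through a bind mount; a minimal sketch, with a hypothetical image name and file paths:

    # mount a host directory into the container so the converter can
    # read the input file and write the converted output there
    docker run --rm \
        -v "$(pwd)/models:/data" \
        converter:latest /data/input.fbx /data/output.glb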

I’m searching for the right model to host this. Is Docker a good solution? If not, are there other ways to do this?

Go to Source
Author: tawfikboujeh

Combining Two Containers

I’m working on a Windows laptop and running Docker for Windows on it. I want to run an Ubuntu container of a specific version with a MySQL server on it. So the question is: do I have to install MySQL in the Ubuntu container, or can I run two containers (Ubuntu and MySQL) and combine them? How do I combine these two containers?
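
For reference, a minimal docker-compose sketch of running the two containers side by side on a shared network; the image tags, placeholder command, and credentials are all illustrative assumptions:

    version: "3.8"
    services:
      app:
        image: ubuntu:20.04
        command: sleep infinity         # placeholder; run your workload here
        depends_on:
          - db
      db:
        image: mysql:8.0
        environment:
          MYSQL_ROOT_PASSWORD: example  # placeholder credential

    # Both services join the default compose network, so the "app"
    # container can reach MySQL at the hostname "db" on port 3306.

The containers stay separate, but they are “combined” in the sense that they can reach each other over the Docker network by service name.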

Go to Source
Author: user19215