What is the purpose of running PHP-FPM in its own container instead of in the same container as apache2/nginx?

I’m fairly new to Docker so excuse any obvious ignorance or misunderstandings. That said, I’ve been coding and configuring web applications for a long time now. I have recently been dabbling with some more sophisticated (Docker and “traditional”) web/application server setups and experimenting with performance enhancements and simplifying deployments.

My personal favorite configuration for most projects thus far is nginx as a (mostly) “static file” web server, caching mechanism, and/or load balancer in a reverse-proxy configuration, with Apache 2.4+ as the “backend” running PHP (and also Perl and/or Python) in a FastCGI (PHP-FPM) configuration (with mod_php disabled) and mpm_event (instead of mpm_prefork). We can always add in something else as well, such as Redis or memcached, where applicable (but I rarely see a significant performance gain when using Cloudflare caching).

I’ve been dabbling with a few different ways to keep my “favorite web server composition” flexible and “ephemeral” enough to accommodate any and all possible options for further performance gains or resource load management. What I keep seeing is that there seems to have been a shift towards running PHP-FPM in its own separate “stand-alone” Docker container, sometime around late 2019.

Why?

While I can appreciate keeping resources isolated and separate so that configurations are easier to debug, configure, and secure, I don’t fully understand the benefits of putting PHP-FPM in a separate Docker container when it is implicitly REQUIRED by the application that the containers make up.

If anything, having a separate PHP-FPM container seems like additional configuration, additional resource consumption, and yet another build/deploy consideration in a CI/CD pipeline.

I can even get on board with “simple preference”: instead of running Apache and nginx on the same Ubuntu/Debian or RHEL/CentOS container, break nginx off into its own container(s) and simply configure your hostname(s) and/or IPs and ports appropriately.

But what is the practical use case, and what are the advantages, of separating PHP-FPM from the Apache or nginx containers, beyond perhaps using more Dockerfile and/or docker-compose.yaml and less bash scripting to get the same job done? If any part of the application fails or falters, functionality will suffer either way. The additional network (and Docker-network) communication and hostname resolution seems like it would just be a performance penalty compared to combining certain aspects of the docker-compose-d application, rather than splitting them up simply for the sake of splitting them up.
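
For concreteness, the kind of split I keep seeing looks roughly like this (a minimal sketch, not my actual stack; the image tags, network name, and paths are placeholders):

# shared Docker network so containers resolve each other by name
docker network create webnet

# PHP-FPM in its own container, listening on port 9000 inside the network
docker run -d --name php --network webnet \
  -v "$PWD/app:/var/www/html" php:8-fpm

# nginx (or Apache) forwards *.php requests over FastCGI to the php container,
# e.g. "fastcgi_pass php:9000;" in the nginx config
docker run -d --name web --network webnet -p 80:80 \
  -v "$PWD/app:/var/www/html" \
  -v "$PWD/nginx.conf:/etc/nginx/conf.d/default.conf" nginx:alpine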

Author: kanidrive

How to resize docker volume after creation?

I would like to limit the size of my Docker volumes, so that one container can’t crash my entire environment by depleting storage space with excessive writes.

I found it is possible to limit a volume’s size at creation time using --opt o=size=100m.
However, is it possible to change the size limit after the volume has been created?
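
For reference, the creation command I’m referring to is along these lines (a sketch; the local driver honors o=size for tmpfs-type volumes, and the volume name is a placeholder):

docker volume create --driver local \
  --opt type=tmpfs --opt device=tmpfs \
  --opt o=size=100m mydata

If the options turn out to be fixed after creation, I assume the fallback is to create a bigger volume and copy the data over, e.g.:

docker run --rm -v mydata:/from -v mydata2:/to alpine sh -c 'cp -a /from/. /to/'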

Author: GKman

Difference between a buildpack and a Dockerfile?

I understand a Dockerfile to be a comprehensive set of instructions to completely build an application’s production (and, for that matter, development) environment from scratch (or at least from a bare Linux distribution).

I understand a buildpack to be something responsible for retrieving and installing collections of dependencies and (similarly to Docker) merging these to provide the app’s running environment. I also see from here:

Buildpacks were first conceived by Heroku in 2011. Since then, they have been adopted by Cloud Foundry and other PaaS such as Google App Engine, Gitlab, Knative, Deis, Dokku, and Drie.

But I wonder what are the differences then? E.g.

  • Do Dockerfiles allow more options than Heroku buildpacks?
  • Do both have the same capabilities in terms of permissions and users?
  • Are they as versatile as each other? (I know Docker images can be run almost anywhere there is Docker; is the same true of buildpacks?)
  • What other differences are there?
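
To make the comparison concrete, the two workflows look roughly like this (a sketch; pack is the Cloud Native Buildpacks CLI, and the builder name is only an example):

# Dockerfile: you write and maintain the build instructions yourself
docker build -t myapp .

# buildpack: the builder detects the app type and assembles the image for you
pack build myapp --builder heroku/buildpacks:20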

Author: stevec

Deployment of Docker containers on ECS (TCP server, Java)

The given problem:

  • Created a Java TCP server with Eclipse Vert.x (a toolkit for building
    reactive, event-driven applications on the JVM). The application has
    two TCP servers, running on ports 5050 (server1) and 5051 (server2).

  • I created a Docker image and ran the application inside a container;
    everything works fine locally.

  • I want to deploy the Docker container on ECS (Amazon Elastic Container
    Service) and use a Network Load Balancer. The question is: how can I
    assign different ports to different containers if the port is set
    explicitly in the Java code?

// tcp server 1 listens on port 5050
server.listen(5050);
  • And how does ECS manage which ports to assign to which container?

    The TCP client always targets one IP address and one port number; that
    cannot be changed. A sketch of the kind of dynamic mapping I mean is
    below.

    Thanks!
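
For illustration, what I’m after would locally look like letting Docker pick the host port per container (a sketch; the image name is a placeholder):

# publish container port 5050 on an automatically chosen host port, per container
docker run -d --name tcp1 -p 5050 myimage
docker run -d --name tcp2 -p 5050 myimage
docker port tcp1 5050   # shows e.g. 0.0.0.0:32768
docker port tcp2 5050   # shows e.g. 0.0.0.0:32769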

Author: user581026

What is the best way to deploy a short-living process when you have no machine?

I’m new to DevOps.

I created a converter and want to deploy it. The converter converts a 3D model from one format to another; you can then visualize the output file on the platform and download it if you want to.

When benchmarking the process, I found that for now it runs for up to 1 minute when the files are really big. I was using Azure and just moved to AWS this week. For now the converter converts one file at a time and uses Blender’s Python library and a C++ library (when trying to use Docker, I built these inside the container).

I started by creating a Docker container that reads a heavy blob and then outputs the converted file, but figured out that Docker is not designed to read local files.

I’m searching for the right model to host this. Is Docker a good solution? If not, are there other ways to do this?
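
For what it’s worth, a container can read host files through a bind mount, which may cover the “read a heavy blob” part (a sketch; the image name and paths are placeholders):

docker run --rm \
  -v "$PWD/uploads:/in:ro" \
  -v "$PWD/converted:/out" \
  my-converter /in/model.fbx /out/model.glb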

Author: tawfikboujeh

docker-compose builds fine but fails to keep the CentOS container up

I’m building a basic CentOS image with only a few packages, but I must be forgetting something, because the build finishes fine yet the container doesn’t stay up. What am I doing wrong?

docker-compose:

version: '3'
services:
  config_server:
    build: ../common/docker/ansible/.
    stdin_open: true
    volumes:
      - ../../api:/var/www/api
      - ../../server:/var/www/server
      - ../server/provision/server:/tmp/ansible
    working_dir: /tmp/ansible
    networks:
      net:
        ipv4_address: 172.28.0.10
    command: ansible-playbook playbook.yml

networks:
  net:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 172.28.0.0/24

Dockerfile:

FROM centos:7
RUN yum makecache && yum install -y epel-release wget
RUN yum install -y ansible

I would like to check that all tasks went well, so I use:

docker logs 

I can see from the logs that the Ansible playbook finishes fine. But I don’t know why the container doesn’t stay up and ends with status Exited (0).

The command used is:

docker-compose up -d --build 
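
One detail that may matter here: a container only stays up while its main command runs, and the command above is the playbook itself, so it exits when the playbook finishes. Would forcing a long-running command, like the sketch below, be the right approach, or is there a cleaner way (the trailing tail -f /dev/null is just a placeholder keep-alive process)?

docker-compose run config_server sh -c 'ansible-playbook playbook.yml && tail -f /dev/null'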

Suggestions?

Author: deconya

Combining Two Containers

I’m working on a Windows laptop and running Docker for Windows on it. I want to run an Ubuntu container with a specific version and a MySQL server on it. So the question is: do I have to install MySQL in the Ubuntu container, or can I run two containers (Ubuntu and MySQL) and combine them? How do I combine these two containers?
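
From what I’ve read so far, the usual pattern is not to merge images but to run one container per service on a shared network and let them talk to each other by name, along these lines (a sketch; the names, password, and tags are placeholders):

docker network create appnet
docker run -d --name db --network appnet \
  -e MYSQL_ROOT_PASSWORD=secret mysql:8
docker run -it --name app --network appnet ubuntu:20.04 bash
# inside the ubuntu container, the MySQL server is reachable as host "db" on port 3306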

Author: user19215

Printing all existing windows in a Windows container

I wrote a simple C++ console app that prints the titles of all open windows:

printf("hello\n");
for (HWND hwnd = GetTopWindow(NULL); hwnd != NULL; hwnd = GetNextWindow(hwnd, GW_HWNDNEXT)) {
    if (!IsWindowVisible(hwnd)) continue;
    int length = GetWindowTextLength(hwnd);
    char* title = new char[length + 1];
    GetWindowText(hwnd, title, length + 1);
    printf(" Title: %s\n", title);
    delete[] title; // free the buffer on each iteration
}

The main purpose of the app is to monitor the windows another exe is opening.
However, inside the container it only prints hello and no titles at all,
while on the host Windows OS it works and prints all the existing windows.
What is different in Docker, and how can I make it work the same way?

Author: flowerProgrammer

haproxy container created with an Ansible playbook on a remote machine is not responding, although the container is running

I have already created an Ansible playbook for one haproxy container and two Apache containers. I also mapped ports and volumes so that haproxy distributes client requests to the Apache servers, but the haproxy server is not responding. The containers were created and the ports mapped. Here is my playbook:

- hosts: all
  vars:
    img:
      - httpd:latest
      - haproxy:latest
  tasks:
    - name: pull image
      docker_image:
        name: "{{ item }}"
        source: pull
      loop: "{{ img }}"
    - name: copy my contain1
      copy:
        src: ./index1.html
        dest: /home/roy/volume/index.html
      notify: apache1
    - name: copy my contain2
      copy:
        src: ./index2.html
        dest: /home/roy/volume1/index.html
      notify: apache2
    - name: create containers
      docker_container:
        name: haproxycon1
        image: haproxy:latest
        volumes:
          - /home/roy/ha/:/usr/local/etc/haproxy/
        state: stopped
        ports:
          - "80:80"
        command: sleep 1d
      notify: pop
      register: haproxycon1
      ignore_errors: yes
    - name: create apache1 container
      docker_container:
        name: con1
        image: httpd:latest
        state: started
        volumes:
          - /home/roy/volume:/usr/local/apache2/htdocs
        exposed_ports:
          - 80
      register: con1
      ignore_errors: yes
    - name: create apache2
      docker_container:
        name: con2
        image: httpd:latest
        state: started
        volumes:
          - /home/roy/volume1:/usr/local/apache2/htdocs
        exposed_ports:
          - 80
      register: con2
      ignore_errors: yes
    - name: haproxyconf
      template:
        src: ./haproxy.cfg
        dest: /home/roy/ha/haproxy.cfg
      notify: pop

  handlers:
    - name: pop
      docker_container:
        name: haproxycon1
        state: started
    - name: apache1
      docker_container:
        name: con1
        state: started
    - name: apache2
      docker_container:
        name: con2
        state: started

Author: KKSpro

Testable architecture of microservices

Intro:

There are multiple ways to test code: unit tests, e2e tests, manual testing, and so on.

I’m developing a project whose implementation details change very quickly (and sometimes the core functionality as well).

Some of our microservices talk to each other directly, while others communicate through events, e.g. via Kafka.

Problem:

When I create an e2e test (for the backend image only), I build the Docker image of my microservice and run it anew before each test.

I find it really hard to set up this kind of e2e test for a microservice that talks directly to other microservices (sending GET/POST/... requests).

As a result, I also build/pull the other images and run them before each test as well. But it’s not that easy, because you can end up implementing a version of docker-compose in your test infrastructure.

I would like to minimize the number of errors that can come from other services and test one specific microservice.

Possible solution:

Changing the microservices architecture.

Whenever possible, a microservice will communicate with others using events. So in the tests, we only need to set up Kafka and the microservice under test.

I arrived at this solution purely from a testing perspective, not from a “what is best” perspective; for example, it’s faster to communicate without Kafka.
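
As a sketch of what the event-based variant would buy in tests, the per-test environment shrinks to the broker plus the service under test (the service and compose file names here are hypothetical):

# only Kafka and the microservice under test, instead of the whole mesh
docker compose -f docker-compose.test.yml up -d kafka my-service
# ... run the e2e test ...
docker compose -f docker-compose.test.yml down -v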

Question:

What are the pros and cons of my proposal? From your experience, is it maintainable?

Author: Stav Alfi

Sending HTTP requests to private EC2 DNS from other EC2 host works, but not from container on EC2 host

I want to make a container hosted on one EC2 host communicate with another container hosted on another EC2 host.

Currently, if I send an HTTP request from host A to host B using host B’s private DNS and the exposed container port, I receive a response correctly.
However, if I try the same thing within the container on host B, I get a “Host is unreachable” error.
To me, this indicates it is not a host-level networking issue; that part is configured in AWS, so I won’t go into detail on it.
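
Concretely, the two cases look like this (a sketch; the container name, private DNS name, and port are placeholders):

# from host A directly: works
curl http://ip-10-0-0-42.eu-west-1.compute.internal:8080/
# from inside the container on host A: "Host is unreachable"
docker exec -it app curl http://ip-10-0-0-42.eu-west-1.compute.internal:8080/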

What puzzles me is that I have done the exact same thing in our pre-production environment and it works perfectly.

What could be the issue / where do you think I should look to troubleshoot this? Please keep in mind I’m a noob 🙂

Author: Konrad

Failure to build with docker-compose: Invalid signal: SIGQUIT

PS D:\> docker-compose up
Building app
Step 1/7 : FROM php:7.4-fpm-alpine
 ---> 5ae58b49885c
Step 2/7 : RUN apk add --no-cache autoconf curl dpkg-dev dpkg freetype-dev file g++ gcc git icu-dev jpeg-dev libc-dev libmcrypt-dev libpng-dev libjpeg-turbo-dev libxml2-dev libzip-dev make mariadb-dev postgresql-dev pkgconf php7-dev re2c rsync unzip wget zlib-dev
ERROR: Service 'app' failed to build: Invalid signal: SIGQUIT

Version: 19.03.8

OS/Arch: windows/amd64

Experimental: true

Switching containers doesn’t help. What can I do, or how can I debug this?
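
Would taking docker-compose out of the loop and building with the engine directly be a sensible way to isolate this (a sketch)? If that succeeds, I assume the problem is on the compose side rather than in the Dockerfile:

docker build -t app .
docker-compose version   # to check whether a compose upgrade is relevant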

Thank you.

Author: dick-justice