docker-compose healthcheck for RabbitMQ

I’m trying to run RabbitMQ using docker-compose, but the service is always reported as starting or unhealthy.
RabbitMQ itself is running fine, so I suspect there is something wrong with my health check.

Running the healthcheck command locally does return a value.

> curl -f http://localhost:5672
AMQP    %

But docker-compose ps always says the service is unhealthy (or starting, before it runs out of time).

> docker-compose ps rabbi ...   Up (unhealthy)   15671/tcp

Here is what my docker-compose.yml file looks like.

# docker-compose.yml
version: '2.3' # note: I can't change this version, must be 2.3

services:
  rabbitmq:
    hostname: 'rabbit'
    image: rabbitmq:3.8.5-management
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:5672"]
      interval: 30s
      timeout: 30s
      retries: 3
    ports:
      - '5672:5672'
      - '15672:15672'
    volumes:
      - 'rabbit-data:/var/lib/rabbitmq/mnesia/'
    networks:
      - rabbitmq

networks:
  rabbitmq:
    driver: bridge

volumes:
  rabbit-data:

I have also tried using nc instead of curl in the healthcheck, but got the same result.

  test: [ "CMD", "nc", "-z", "localhost", "5672" ]
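A variant I have seen suggested but have not tried yet uses the rabbitmq-diagnostics tool, which ships inside the rabbitmq:3.8.5-management image, so the check does not depend on curl or nc being installed in the container:

```yaml
# untested sketch: rabbitmq-diagnostics is bundled with the official image
healthcheck:
  test: ["CMD", "rabbitmq-diagnostics", "-q", "ping"]
  interval: 30s
  timeout: 30s
  retries: 3
```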


Author: con–

Integrating HTTP / Webhooks with Message Queues

I’m working on a project that integrates several applications, mostly SaaS applications. The SaaS solutions all offer hooks into their internal event systems via webhooks. A webhook lets us send a message to a single system, but we have to create multiple webhooks to send a single event to several systems.

My idea is to implement a message bus as a centralized middleware, but the problem is that the SaaS solutions only offer integration over HTTP(S) and not via protocols like AMQP.

RabbitMQ, for example, lets you publish to a topic over HTTP. Consuming over HTTP is also possible, but once a message is fetched it is either removed from the queue or left in it.
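For the publish side, RabbitMQ's management plugin exposes a publish endpoint (POST /api/exchanges/&lt;vhost&gt;/&lt;exchange&gt;/publish) that takes a small JSON body. A minimal sketch of building that body follows; the field names come from the management HTTP API, while the routing key and payload are made-up examples:

```python
import json

def publish_body(routing_key: str, payload: str) -> str:
    """Build the JSON body for the management plugin's publish endpoint."""
    return json.dumps({
        # delivery_mode 2 = persistent message, relevant for guaranteed delivery
        "properties": {"delivery_mode": 2},
        "routing_key": routing_key,
        "payload": payload,
        "payload_encoding": "string",
    })

# Example (made-up routing key and payload):
body = publish_body("orders.created", '{"order_id": 42}')
```

The endpoint responds with a `routed` flag indicating whether the message matched any queue, which is useful for detecting misconfigured bindings.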

Does anybody have a good solution to bridge the gap between HTTP and AMQP? I thought about small consumer services which subscribe to a topic and then forward each message to the RESTful API.
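The small-consumer idea can be sketched as follows: the consumer acks a message only after the webhook accepts it, so the broker redelivers on failure, which is what gives the guaranteed-delivery property. All names here are illustrative, and the actual consume loop (e.g. with pika) is shown only as comments:

```python
def should_ack(http_status: int) -> bool:
    """Decide whether a forwarded message may be acked.

    2xx: the webhook accepted it, safe to ack.
    Anything else: nack and requeue so the broker redelivers.
    """
    return 200 <= http_status < 300

# With a real AMQP client such as pika, the loop would look roughly like:
#   def on_message(channel, method, properties, body):
#       status = post_to_webhook(body)  # hypothetical HTTP POST helper
#       if should_ack(status):
#           channel.basic_ack(method.delivery_tag)
#       else:
#           channel.basic_nack(method.delivery_tag, requeue=True)
```

Keeping the ack decision separate from the transport makes the retry policy easy to test; note the webhook must be idempotent, since redelivery can cause duplicates.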

We are currently trying to avoid a huge enterprise service bus/iPaaS project. I know that could be one of the best approaches, but due to internal decisions, project time, costs, and so on, it’s not an option for the moment.

One of our requirements is guaranteed delivery, so that no message is lost.

Thanks for your suggestions.

Author: Gulliva

Is it a bad design to have 50K bindings on a single RabbitMQ queue?

We are designing a new feature in our system where consumers (consumer == internal application) need to receive messages about changes to items they are interested in.
From the statistics we gathered, the maximum number of items a single consumer can be interested in is 50K (on average it would be ~15K).
Initial tests show that this works OK and RabbitMQ handles it, but when we delete such a queue (for example, when scaling down the system and shutting down one of the instances), it takes a few minutes to be deleted and the RabbitMQ management portal becomes unresponsive.

Does it make sense to have so many bindings or is it a bad design?

  • We’ll have around 50 instances of the consumer, each with its own queue, which is not persistent and should be auto-deleted when the consumer shuts down
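To make the binding fan-out concrete, this is roughly what one binding per item of interest looks like on a topic exchange. The "item.&lt;id&gt;.changed" key format and the exchange name are hypothetical; only the one-binding-per-item idea comes from the setup above, and the pika calls are shown as comments:

```python
def binding_keys(item_ids) -> list:
    """One topic routing key per item a consumer is interested in."""
    return [f"item.{item_id}.changed" for item_id in item_ids]

# With pika, each key becomes one queue_bind call on the consumer's own
# auto-delete, non-durable queue, e.g.:
#   channel.queue_declare(queue=name, durable=False, auto_delete=True)
#   for key in binding_keys(item_ids):
#       channel.queue_bind(queue=name, exchange="items", routing_key=key)

keys = binding_keys(range(3))
```

At 50K keys that is 50K individual queue_bind operations per consumer startup (and 50K unbinds on deletion), which is consistent with the slow deletes observed above.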

Author: Tamir Dresher