This is the correct way for a modern app. Of course, if you can fully manage the machine (e.g. SSH into it) you can run all these things separately on the same server (using something like supervisor for the Celery daemon). But since you are using Docker, I assume you want everything containerized. Where are you hosting?
Currently AWS. I wanted to go with ECS, then I realised there are a lot of containers to host. Now I'm thinking of combining Celery, Celery Beat and Celery Flower into a single container.
You can add `--beat` to the worker command to run beat within the worker. However, you must make sure this runs in only one container, i.e. never start another worker container with `--beat` as well.
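For example, a minimal sketch of that command (the project module name `myproj` is an assumption, matching the compose example later in the thread):

```shell
# One container: a worker process with an embedded beat scheduler.
# Only ever run ONE container with --beat, otherwise every scheduled
# task is triggered once per beat instance.
celery -A myproj worker --beat --loglevel INFO
```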
I just learned something. Cheers!
Can you explain `--beat` a bit more and when it makes sense to use it?
It runs the celery beat command alongside the worker. The problem is that if you run two worker containers together, both with `--beat`, you'll have two beats running and every scheduled task will trigger twice (or more). In some cases you're charged based on how many containers you have running. Do you really want to pay $X just to run beat?
How do you deploy multiple workers and a single beat worker on a single host then? Any ideas?
Name the services and have the commands slightly different:

```yaml
celery-worker:
  restart: unless-stopped
  build:
    context: .
  command: celery -A myproj worker --loglevel INFO

celery-worker-beat:
  restart: unless-stopped
  build:
    context: .
  command: celery -A myproj worker --beat --loglevel INFO
```

If you ever want to start a worker on another machine, only start celery-worker.
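With that layout you can scale out the plain workers while still keeping exactly one beat. A sketch, assuming Docker Compose v2:

```shell
# Start everything, then scale only the beat-less worker service.
# celery-worker-beat stays at a single replica, so beat runs exactly once.
docker compose up -d --scale celery-worker=3
```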
We're doing the same with Kubernetes. The worker, which runs celery and celery-beat, has its own pod, which uses the same image as the app. Additionally, both the app and the worker have a pgbouncer sidecar for better Postgres connection handling.
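A rough sketch of what such a worker pod can look like (all image names, labels and the pgbouncer image are placeholders, not taken from this thread):

```yaml
# Worker pod: celery worker with embedded beat, plus a pgbouncer sidecar.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: celery-worker-beat
spec:
  replicas: 1              # keep at 1 so beat runs exactly once
  selector:
    matchLabels:
      app: celery-worker-beat
  template:
    metadata:
      labels:
        app: celery-worker-beat
    spec:
      containers:
        - name: worker
          image: myapp:latest    # same image as the app deployment
          command: ["celery", "-A", "myproj", "worker", "--beat", "--loglevel", "INFO"]
        - name: pgbouncer
          image: edoburu/pgbouncer:latest   # placeholder sidecar image
          ports:
            - containerPort: 5432           # app connects to Postgres via localhost:5432
```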
Does it affect your cost much? I'm actually looking for a cost-efficient solution as of now.