How I reduced gitlab memory consumption in my docker-based setup
I’m currently running 4 separate dockerized gitlab instances on my server. These tend to consume quite a lot of memory even when not being used for some time.
Reduce the number of unicorn worker processes
The gitlab default is to use 6 unicorn worker processes. By reducing the number of workers to 2, my gitlab memory consumption decreased by approximately 60%:
unicorn['worker_processes'] = 2
In my dockerized setup, I just updated the GITLAB_OMNIBUS_CONFIG in docker-compose.yml and restarted the instance. If you didn’t install gitlab using docker, you might need to run sudo gitlab-ctl reconfigure.
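For illustration, a minimal docker-compose.yml sketch could look like the following; the service name, image tag and the rest of the entries (ports, volumes, hostname) are assumptions and will differ in your setup:
version: '3'
services:
  gitlab:
    image: gitlab/gitlab-ce:latest
    environment:
      # Evaluated as gitlab.rb (omnibus) configuration when the container starts
      GITLAB_OMNIBUS_CONFIG: |
        unicorn['worker_processes'] = 2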
Note that you need at least 2 unicorn workers for gitlab to work properly. See this issue for details.
Also note that reducing the number of workers to the minimum will likely hurt your gitlab performance. Increase the number of workers if you notice a drop in performance.
Disable Prometheus monitoring
Most small installations do not need Prometheus, the monitoring tool integrated into Gitlab:
prometheus_monitoring['enable'] = false
Reduce sidekiq concurrency
sidekiq is the background job processor integrated into Gitlab. The default concurrency is 25. I recommend reducing it:
sidekiq['concurrency'] = 2
This might cause background jobs to take longer since they have to wait in the queue, but in my experience this does not matter for small installations.
Reduce the PostgreSQL shared memory
This was recommended on StackOverflow.
postgresql['shared_buffers'] = "256MB"
Setting this too low might cause a heavier IO load, and all operations (including website page loads) might become slower.
The complete config
This is the complete configuration (combining all the strategies listed above) to reduce memory consumption:
# Unicorn config
unicorn['worker_processes'] = 2
# PostgreSQL config
postgresql['shared_buffers'] = "256MB"
# Sidekiq config
sidekiq['concurrency'] = 2
# Prometheus config
prometheus_monitoring['enable'] = false
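In my docker-based setup, these four lines go together into the GITLAB_OMNIBUS_CONFIG block sketched above; recreating the container (for example with docker-compose up -d gitlab, assuming the service is named gitlab) then applies them. If you installed gitlab without docker, put the settings in /etc/gitlab/gitlab.rb and run sudo gitlab-ctl reconfigure instead.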