Installing Hollo with Docker: A Complete Guide
This guide covers setting up Hollo, an ActivityPub server, using Docker in a homelab environment with S3 storage and NGINX Proxy Manager.
Hollo leans toward single-user instances but can be configured for more than one person. This guide focuses on a single-user instance with external media storage and a PostgreSQL database stored outside the Docker container.
Note that there is a much easier route to trying out Hollo: you can get up and running with a single click using Railway. But of course I wanted to do it the hard way.
Environment Overview
Here is my configuration. External storage isn't necessary, but it saves space and keeps things tidy. By default, Hollo creates a local MinIO server for media storage; my compose file removes that and uses external storage instead.
- Host: Homelab server
- Container Management: Portainer
- Reverse Proxy: NGINX Proxy Manager
- Postgres Database: Stored on separate volume for easy backups
- DNS: Netlify, Cloudflare, Porkbun - up to you!
- CDN/Storage: External; you can use any S3-compatible storage (Hollo configures a local MinIO instance by default)
- Domain Setup:
- Main site: your-instance.example.com
- Media: media.your-instance.example.com
Prerequisites
- Docker and Portainer installed
- NGINX Proxy Manager configured
- S3 bucket set up, with its access key, secret key, and endpoint URL
- Domain or sub-domain configured with your DNS provider
Domains are used for federation, and be warned: once you set up federation on a domain or sub-domain, it can't be changed, and you can't reuse it for another instance. Ever. So choose wisely! If you later decide to switch from Hollo to IceShrimp, you will need a different domain; otherwise you're going to get a ton of traffic intended for the wrong platform.
Storage and Backup Strategy
Database Storage Location
The PostgreSQL data is stored in a custom location (e.g., /path/to/docker-volumes/hollo/data) instead of using Docker's built-in volume management. I chose this option because it integrates with an existing Borg backup solution I have in place, following this excellent guide from GoToSocial: Backing Up Your GoToSocial Instance.
You could completely skip this and use Docker volumes, but I like the flexibility of managing the data backups.
- Your chosen backup directory can be included in regular backup routines
- Database files are automatically included in system backups
- BorgBase has scripts ready to go for backing up PostgreSQL data
- Data is easily accessible for backup verification or manual management (see the sketch after this list)
- No need for separate database backup scripts
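As an example of that verification, a quick manual dump against the running container works well. A minimal sketch, assuming the Compose project is named hollo (so the database container comes up as hollo-postgres-1) and the credentials from the environment variables further down:

# Dump the database through the running container (container name assumed to be hollo-postgres-1)
docker exec hollo-postgres-1 pg_dump -U hollo_user hollo > /tmp/hollo-verify.sql

# A non-empty, readable dump is a reasonable sanity check before trusting the backup archives
ls -lh /tmp/hollo-verify.sql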
Docker Compose Configuration
I wandered off the beaten path a bit with my configuration. I wanted to use a custom location for the PostgreSQL data directory and external storage for media. In addition, I moved my secrets and other custom values into environment variables.
I also added a healthcheck, which is the first time I've done this on my own in a Docker Compose file. I was running into an issue where the Hollo container would start before the Postgres container was ready; the healthcheck and the related depends_on section fixed that.
This compose file is based on the official Hollo compose file, but it has been modified to suit my needs. Please refer to the official documentation before implementing this in your environment, as it may have been updated since this post.
services:
  hollo:
    image: ghcr.io/fedify-dev/hollo:canary
    ports:
      - "3131:3000"
    environment:
      DATABASE_URL: postgres://${POSTGRES_USER}:${POSTGRES_PASSWORD}@postgres:5432/${POSTGRES_DB}
      SECRET_KEY: ${SECRET_KEY}
      LOG_LEVEL: ${LOG_LEVEL}
      BEHIND_PROXY: ${BEHIND_PROXY}
      DRIVE_DISK: s3
      ASSET_URL_BASE: "https://media.your-instance.example.com/"
      S3_REGION: us-east-1
      S3_BUCKET: your-bucket-name
      S3_ENDPOINT_URL: your-r2-endpoint.r2.cloudflarestorage.com
      S3_FORCE_PATH_STYLE: "true"
      AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID}
      AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY}
    depends_on:
      postgres:
        condition: service_healthy
    restart: unless-stopped
  postgres:
    image: postgres:17
    environment:
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_DB: ${POSTGRES_DB}
      PGDATA: /var/lib/postgresql/data/pgdata
    volumes:
      - /path/to/docker-volumes/hollo/data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER} -d ${POSTGRES_DB}"]
      interval: 10s
      timeout: 5s
      retries: 10
      start_period: 30s
    restart: unless-stopped
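I deploy this as a stack in Portainer, but if you are working from the command line instead, the equivalent is roughly the following. A minimal sketch, assuming the compose file and a .env file containing the variables from the next section sit in the current directory:

# Render the compose file with variables substituted to catch typos early
docker compose config

# Start (or update) the stack in the background
docker compose up -d

# Follow Hollo's startup once Postgres reports healthy
docker compose logs -f hollo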
Installation Steps
If you are not storing your PostgreSQL data in a custom location, you can skip the first step. I found permissions to be a bit tricky with LXD and Docker, so I've included a solution that worked for me.
- Prepare PostgreSQL Data Directory

cd /path/to/docker-volumes/hollo
sudo mkdir data
sudo chown lxd:docker data
sudo chmod 770 data
- Configure Environment Variables in Portainer
I moved all the secrets and custom values into environment variables in Portainer. This is a more secure way to manage secrets and allows for easy changes in the future.
SECRET_KEY is used to sign federation requests. It should be kept secure and not changed once federation is working. Don't lose it!
You can generate the SECRET_KEY and POSTGRES_PASSWORD using openssl:
openssl rand -base64 32
POSTGRES_USER=hollo_user
POSTGRES_PASSWORD=your_secure_password
POSTGRES_DB=hollo
AWS_ACCESS_KEY_ID=your_r2_access_key
AWS_SECRET_ACCESS_KEY=your_r2_secret_key
SECRET_KEY=your_secure_random_key
LOG_LEVEL=info
BEHIND_PROXY=true
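Before deploying, it can be worth confirming that the S3 credentials and endpoint actually work. A minimal sketch using the AWS CLI (assuming it is installed; the key, bucket, and endpoint values here are placeholders matching the compose file):

# List the bucket through the custom endpoint; an empty listing is fine, an auth error is not
AWS_ACCESS_KEY_ID=your_r2_access_key \
AWS_SECRET_ACCESS_KEY=your_r2_secret_key \
aws s3 ls s3://your-bucket-name --endpoint-url https://your-r2-endpoint.r2.cloudflarestorage.com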
- Configure DNS and NGINX Proxy Manager
The only tricky thing here is that I had to use a Porkbun DNS challenge via their API for the SSL certificate, which isn't well documented. I had to get an API key and secret from Porkbun and add them into the "Credentials File Content" field of the Let's Encrypt DNS challenge prompt within NGINX Proxy Manager.
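For reference, NGINX Proxy Manager drives this through a certbot DNS plugin for Porkbun, and the "Credentials File Content" field expects a small key/value file. If I remember correctly it looks roughly like the sketch below, but double-check the plugin's documentation for the exact key names; the values are placeholders:

dns_porkbun_key=pk1_your_porkbun_api_key
dns_porkbun_secret=sk1_your_porkbun_secret_key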
- DNS Provider: Create new A record for your-instance.example.com pointing to your serverās IP address
- NGINX: Create new proxy host
- Domain: your-instance.example.com
- Scheme: http
- Forward to: http://your-docker-host:3131
- SSL: Let's Encrypt with a DNS challenge (a rough sketch of the resulting proxy configuration follows)
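For context, the proxy host you build in NGINX Proxy Manager boils down to something like the sketch below. This is not the exact generated configuration, just a rough illustration of the forwarded headers Hollo relies on when BEHIND_PROXY=true:

server {
    listen 443 ssl;
    server_name your-instance.example.com;

    location / {
        # Pass requests to the Hollo container's published port
        proxy_pass http://your-docker-host:3131;

        # Hollo needs these to build correct https:// URLs behind the proxy
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}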
Common Issues and Solutions
1. Port Configuration
Issue: Application not accessible.
Solution: Hollo must run on internal port 3000, but it can be mapped to any external port. I initially had it set to 3131:3131 in my Docker Compose file, which was the problem. The correct mapping looks like this (a quick reachability check follows the snippet):
ports:
- "3131:3000" # External:Internal
2. It's always DNS
Issue: Browser showing redirect loops or SSL errors.
Solution:
- Ensure your DNS / SSL configuration is correct (that is... a broad statement, lol)
- Keep NGINX PM scheme as http for internal communication
- Set BEHIND_PROXY=true in Hollo configuration
My personal DNS issue was ensuring the NGINX Proxy Manager certificate and "Require SSL" setting were correct. The Porkbun DNS challenge was a bit tricky, but once I got the API key and secret into NGINX PM, it worked like a charm.
Additional Notes
- Keep your SECRET_KEY backed up and don't change it once federation is working
- Monitor PostgreSQL logs for any database issues (see the example after this list)
- Regular backups of the PostgreSQL data directory are recommended
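For the log monitoring mentioned above, the standard Docker tooling is enough. A minimal sketch, using the service names from the compose file:

# Tail the database logs for connection or disk errors
docker compose logs -f postgres

# The Hollo logs respect the LOG_LEVEL environment variable set in the stack
docker compose logs -f hollo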
Finishing Configuration
Access https://your-instance.example.com/setup to complete the initial configuration of your Hollo instance.
Follow Hollo's official documentation to complete the configuration.
When I submitted the initial account creation form, the response took longer than I expected, but it did eventually come back, so be patient.
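Once the account exists, a WebFinger lookup is a simple way to confirm the instance answers federation requests on your domain. A sketch, assuming a hypothetical handle of username@your-instance.example.com:

# Should return a JSON document with links for the account if federation is wired up
curl "https://your-instance.example.com/.well-known/webfinger?resource=acct:username@your-instance.example.com"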
You probably want to upload a profile picture and a header image. I might be wrong about this, but it appears you can only do that from a client app. I used Fedicat, but there are tons to choose from that work with Hollo.
Now you're ready to start hollo-ing!
Follow me on my Hollo instance: @[email protected]
Final Thoughts
When I first pulled up the Hollo setup page, it felt very familiar. It's clean and simple, with large UI controls. I then realized it's built with Pico CSS, which is the same CSS framework I used for my little Mastodon API playtoy, Devtodon. I love the simplicity and ease of use of Pico CSS, and I'm glad to know others do, too!
I still need to figure out why Hollo isn't working with Devtodon.
Hollo is a great project, and I'm excited to see where it goes. I'm happy to have it running in my homelab, and I hope this guide helps you get started with Hollo in your own environment.