How to Set Up an NGINX Caching Proxy in a Docker Container for High-Load Web Servers
If you're running a high-load web server that serves hundreds of images, you know that performance is key. Using a caching proxy for image delivery is a crucial technique for optimizing website performance and smoothing the user experience, especially when different image versions need to be generated. In this tutorial, I'll show you how to set up an NGINX caching proxy in a Docker container that uses a specific host directory to keep its configuration files and cache.
Install Docker
First, make sure that Docker is installed on your machine. If you don't have it installed, you can download it from the official Docker website.
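If you're not sure whether Docker is already set up, you can check both the client and the daemon with a couple of quick commands (the exact version string will differ on your machine):
# Print the installed Docker version
docker --version
# Run a disposable test container to confirm the daemon is working
docker run --rm hello-world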
Create a directory for NGINX configuration files and cache
Create a directory where you'll store the configuration files and cache for NGINX. For example, you can create a directory called /opt/nginx-cache. Inside this directory, create two subdirectories: etc and cache.
mkdir -p /opt/nginx-cache/etc
mkdir -p /opt/nginx-cache/cache
Create an NGINX configuration file
Create an NGINX configuration file that specifies the caching rules for the proxy. Here's an example configuration file:
worker_processes 5;
worker_rlimit_nofile 8192;

events {
    worker_connections 4096;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    sendfile on;
    tcp_nopush on;
    server_names_hash_bucket_size 128;

    # Cache stored in /opt/cache (mounted from the host), up to 10 GB;
    # entries not accessed for 60 minutes are evicted.
    proxy_cache_path /opt/cache levels=1:2 keys_zone=img_cache:10m max_size=10g inactive=60m use_temp_path=off;

    server {
        listen 8080;

        location /image/ {
            proxy_cache img_cache;
            # Cache successful responses for 60 minutes even if the upstream
            # sends no caching headers
            proxy_cache_valid 200 302 60m;
            proxy_cache_background_update on;
            proxy_cache_lock on;
            proxy_pass http://[your_app_server]:[port]/image/;
        }
    }
}
This configuration file specifies that NGINX should cache responses for requests to the /image/ path in a cache zone named img_cache. The cache is capped at 10GB, and entries that go unused for 60 minutes are evicted. Finally, it specifies that NGINX should listen on port 8080 and proxy requests to the upstream server at http://[your_app_server]:[port]/image/.
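Because the host directory /opt/nginx-cache/etc will completely replace /etc/nginx inside the container, save this configuration as /opt/nginx-cache/etc/nginx.conf and make sure the mime.types file it includes is present there as well. One way to do that, assuming the default layout of the official nginx image, is to copy it out of a throwaway container:
# Save the configuration above as the main config file on the host;
# inside the container it will appear as /etc/nginx/nginx.conf
cp nginx.conf /opt/nginx-cache/etc/nginx.conf
# Copy mime.types from the official image so the include directive resolves
docker run --rm nginx cat /etc/nginx/mime.types > /opt/nginx-cache/etc/mime.types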
Run the NGINX Docker container
Now that you have the NGINX configuration file and the cache directory set up, you can run the NGINX Docker container using the following command:
docker run -d --name nginx-images-cache \
    -v /opt/nginx-cache/etc:/etc/nginx:ro \
    -v /opt/nginx-cache/cache:/opt/cache:rw \
    -p 8080:8080 \
    nginx
The -d option runs the container in the background (detached mode), and the --name option gives it a name (nginx-images-cache) that you can use to interact with the container later.
The -v option specifies two volume mappings. The first maps the /opt/nginx-cache/etc directory on the host machine to the /etc/nginx directory inside the container; the :ro suffix makes the mount read-only, which means the container can read the NGINX configuration files but can't modify them.
The second maps the /opt/nginx-cache/cache directory on the host machine to the /opt/cache directory inside the container; the :rw suffix makes the mount read-write, so the container can read from and write to the cache directory.
The -p flag maps a port on the host machine to a port inside the container. In this case, port 8080 on the host machine is mapped to port 8080 inside the container. This allows traffic to be sent to the container through the host machine's IP address and port number.
Finally, the nginx at the end of the command specifies the Docker image to use for the container. Docker will pull the latest official NGINX image if it isn't already present locally and use it to start the container.
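Before moving on, it's worth checking that the container actually started and that NGINX accepted the mounted configuration; if there's a problem with the config, the container usually exits right away and the reason shows up in its logs:
# The container should appear with a status of "Up"
docker ps --filter name=nginx-images-cache
# NGINX startup messages and any configuration errors appear here
docker logs nginx-images-cache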
Verify that the caching proxy is working
To verify that the NGINX caching proxy is working, make a request to a URL that should be cached. For example, if you've set up the proxy to cache requests to the /image/ path, you can make a request to:
http://[your_proxy_address]:8080/image/[image_file_name].jpg
The first request to this URL should take some time to complete as NGINX fetches the image from the upstream server and caches it. Subsequent requests to the same URL should be much faster as NGINX serves the image from the cache.
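A simple way to see the difference is to time the same request twice with curl (the proxy address and image name below are placeholders; substitute your own):
# First request: NGINX fetches the image from the upstream server and caches it
curl -o /dev/null -s -w 'first:  %{time_total}s\n' http://[your_proxy_address]:8080/image/example.jpg
# Second request: served directly from the cache, typically much faster
curl -o /dev/null -s -w 'cached: %{time_total}s\n' http://[your_proxy_address]:8080/image/example.jpg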
You can also check that the cache is working by inspecting the contents of the cache directory (/opt/nginx-cache/cache). You should see some files and directories that correspond to the cached content.
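For example, listing the files on the host after the first cached request should show entries whose names are MD5 hashes of the cache key:
# List cached objects created by NGINX
find /opt/nginx-cache/cache -type f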
Conclusion
Setting up an NGINX caching proxy in a Docker container is a great way to speed up the work of your high-load web server. By following the steps in this tutorial, you can easily set up a caching proxy that uses a specific directory to keep configuration files and cache. With the NGINX caching proxy in place, you can expect faster response times and better performance for your website.