I have an 800 MB file, bigFile.avi, hosted at http://example.com/bigFile.avi.
When I download bigFile.avi through a browser using this link, my nginx server jumps to 100% CPU load for the duration of the download, even though it is serving no other static content or PHP at the time (normal PHP scripts use only 1-3% CPU).
Is this normal? Does serving large files really consume that much CPU?
I have even tried turning off gzip in the nginx config, but it made little difference.
3 Answers
As nginx can buffer large proxied responses to temporary files on disk before sending them to the client, it's often a good idea to disable this buffering if the site is going to serve big files, with something like:
location / {
proxy_max_temp_file_size 0;
}
Take a look at these articles:
- http://www.facebook.com/topic.php?uid=122166917825068&topic=325
- https://calomel.org/nginx.html (search for the "sendfile off" section)
I'll admit some of that is beyond me. But in short, they suggest disabling sendfile, enabling aio, and increasing your output buffers when sending large (>4 MB) files. The takeaway is that most default server configs assume you will be sending many small files rather than a few (or many) large ones, and those two scenarios can require very different configurations to work efficiently.
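Putting those suggestions together, a large-file location block might look like the sketch below. The directives are real nginx options, but the path `/videos/` and the specific sizes are assumptions you'd want to benchmark for your own workload, not recommendations:

```nginx
location /videos/ {
    sendfile       off;      # skip sendfile for very large files
    aio            on;       # use asynchronous file I/O instead
    directio       4m;       # read files over 4 MB with O_DIRECT, bypassing the page cache
    output_buffers 1 512k;   # a larger output buffer means fewer read/write cycles
}
```

Note that on Linux, `aio` takes effect for reads only when `directio` is also enabled, which is why the two are paired here.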
Nginx's own website has a lot of good information on how to optimize your web server for hosting (large) static files.
Personally, I'm using the following configuration (taken straight from the nginx documentation):
sendfile on;
sendfile_max_chunk 1m;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;