We recently came across an all-too-common question over at Stack Exchange that was worthy of a lengthy answer on our blog.
The client we are currently working with has a requirement that the first response from the web server must come in under 200ms in the UK. Currently, with 2 dedicated web servers behind a load balancer and 1 DB server, we are coming in at 800ms.
The site at the moment has fewer than 5 customers, 2 products and 4 categories; there is no frontend to the site at the moment, and it is style free and image free.
It is also being run on Nginx with Varnish.
Can anyone give me any advice on web server setups? Why is ours coming in so slowly? What can you recommend to optimise this? We need to get 400% quicker!
Here's what we had to say:
You won't achieve those figures without the aid of either Varnish or an FPC (or both). I would certainly hope that figure doesn't also have to include static content (whenever you decide to add it) - as it's near impossible to achieve (short of having little to no images/JS/CSS).
We are coming in at 800ms
It is also being run on Nginx with Varnish
You've got Varnish configured wrong.
A properly configured installation of Varnish will deliver <100ms page load times (we see closer to <10ms).
In fact, for Magento, you should expect to see something like this:
When a customer is not logged in ...
I.e. not having created a unique session (add-to-cart/wishlist, logging in etc.)
- Uncached: ~1.2s
- Mage default cache: ~0.8s
- Partial FPC cache: ~0.6s
- Total FPC cache: ~0.1s
- Varnish: ~0.08s
When a customer is logged in ...
I.e. having created a unique session (logged in, items in cart etc.). At this point Varnish will likely be off. And if you've chosen to use ESIs - depending on the reverse calls, it can either maintain a similar page load time to the FPC cache (due to the bootstrap overheads) - or actually increase page load times beyond being uncached.
- Uncached: ~1.4s
- Mage default cache: ~0.8s
- Total FPC cache: ~0.6s
It's not a case of tuning Varnish - it's a case of "you are not actually caching anything at all".
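A common culprit in Magento is cookies on anonymous traffic forcing Varnish to pass every request to the backend. A minimal `vcl_recv` sketch of the idea - the cookie names are Magento defaults and the syntax will vary by Varnish version:

```vcl
sub vcl_recv {
    # Strip analytics cookies so they alone don't force a pass
    set req.http.Cookie = regsuball(req.http.Cookie, "(^|; ) *__utm[a-z]+=[^;]+;? *", "\1");

    # No Magento session cookie yet -> anonymous visitor, safe to cache
    if (req.http.Cookie !~ "frontend=") {
        unset req.http.Cookie;
    }
}
```

With a fragment like this, anonymous page views become cacheable; logged-in/session traffic still passes through to the backend as it must.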
The ideal Magento server configuration files
There isn't one. Well, not quite.
We operate over 400 servers, all purely Magento stores, of varying sizes and capacity. It is rare that the configuration files on one will match those of another - because not all businesses are alike.
Bottlenecks can form for many different reasons:
- High numbers of visitor concurrency, with active sessions
- Victims of 'bad' crawl bots, generating unnecessary, unvaluable load
- High proportion of layered navigation hits
- High numbers of search queries
- High volume of transactions per hour
- Poorly built template
- Too many/slow/bulky 3rd party extensions
- Outdated inbound links leading to high proportion of 404 hits
- Network interface capacity at limit
- Large/complex catalogue (lots of products/categories/attributes)
So with stores all across this spectrum, each takes a different route to optimum performance.
To solve the issues outlined above (we'll deliberately avoid just stating "more/better hardware"):
- Use an FPC beyond that of Varnish
- Filter out/block bad bots at the network edge - or redirect all requests to Varnish regardless of cookie state/URL
- Change stock layered navigation to SOLR, make layered navigation filters dependent
- Change stock search to SOLR
- Distribute MySQL load across Master/Slave configuration - only do this when you've guaranteed browsing load is absorbed by Varnish/FPC
- Re-build the template
- Strip them out
- Monitor access logs continuously and rewrite URLs at Nginx/Varnish prior to delivery. If doing it at Nginx level - ensure Varnish is caching 301/302 redirects.
- Split off static content to a CDN, or improve connectivity
- Add more hardware (well, we had to say it at some point)
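On the 404 point above: mining the access log for the URLs generating the most 404s tells you exactly which inbound links need 301 rewrites. A self-contained sketch against nginx's combined log format (the sample log lines here are fabricated stand-ins):

```shell
# Build a tiny sample access log; in production you'd point at the real one
cat > access.log <<'EOF'
1.2.3.4 - - [01/Jan/2014:00:00:01 +0000] "GET /old-page HTTP/1.1" 404 162 "-" "-"
1.2.3.4 - - [01/Jan/2014:00:00:02 +0000] "GET /old-page HTTP/1.1" 404 162 "-" "-"
1.2.3.4 - - [01/Jan/2014:00:00:03 +0000] "GET /product HTTP/1.1" 200 512 "-" "-"
EOF

# In the combined format, $9 is the status code and $7 the request path;
# count 404s per URL and list the worst offenders first
awk '$9 == 404 { hits[$7]++ } END { for (u in hits) print hits[u], u }' access.log \
  | sort -rn | head
```

The top entries of that output are your candidates for rewrite rules at the Nginx/Varnish layer.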
So with this in mind - you'll see there probably isn't going to be an Nginx config file, PHP FCGI pool config file, MySQL config file or Varnish config file that is the same from one store to the next. Couple that with the hardware itself - available memory, I/O performance (disk and network) and CPU - and you'll find there are subtle variations that lead to the 400% performance gain you desire, but no one quick answer you'll readily find online.
You could copy and paste the Peer 1-sponsored Magento white paper on performance (we wouldn't recommend it) and hope that the settings don't exceed your available memory, thread limits, TCP/IP states and I/O capacity - and lead to worse performance than a vanilla Apache/mod_php configuration.
So let's continue on.
The ideal Magento server stack
This is more likely to get you closer to reality. A good example to demonstrate this is how a dedicated Magento OS, MageStack, is configured.
Take the separate sub-components and you've got a list of the most critical software which, when configured properly, will optimally run a Magento store. Notably:
Firewall, DOS filter, Load Balancer, Varnish, Nginx, PHP, Redis, Memcached, MySQL
So when you ask:
What is the Best Magento Server Setup?
What is your goal exactly?
- High availability
- Simplicity of administration
Enough lecturing, how would we do it?
To partially mirror the answer given on Server Fault in a similar vein: you've got 3 servers at your disposal, so first orient them as optimally as possible. We'll avoid a highly-available solution, as that is far beyond the scope of this answer.
The sub-components required for a multi-server configuration are:
- Load Balancer
- Web Server
- MySQL Server
- Common Storage
So we'll multi-purpose some of the systems. PCI-DSS compliance dictates a single role for each server - so with more roles than servers, you'll be in breach immediately. MageStack gets round this by using virtualisation - you could do the same.
Server 1: Load Balancer + Web server
Server 2: Web server
Server 3: Database server
Without low latency and significant network bandwidth (>1Gbps, <125µs), rather than having common storage it is better for you to merely store the store root directory on each machine and replicate the data - either in real-time using inotify, or lapsed, using a cron job. Again, we'll avoid network file systems like NFS, or replicated filesystems/block devices like Gluster or DRBD, as vast tuning and decent network bandwidth are required.
Varnish needs to sit as close to the front as possible. But Varnish cannot decrypt SSL, so combine it with an SSL terminator: Nginx, Pound, Stunnel, Stud etc. The built-in load balancer in Varnish isn't great, but it would be adequate for a 2-server set-up.
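Using Nginx as the SSL terminator, the idea is simply a `server` block that decrypts and proxies to Varnish's listener. A sketch - the port, certificate paths and header names are assumptions:

```nginx
server {
    listen 443 ssl;
    ssl_certificate     /etc/nginx/ssl/store.crt;
    ssl_certificate_key /etc/nginx/ssl/store.key;

    location / {
        proxy_pass http://127.0.0.1:6081;          # Varnish listener
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto https;  # so the backend knows it was SSL
    }
}
```

The `X-Forwarded-Proto` header matters: it is how the application behind Varnish can tell the original request was secure.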
Nginx + PHP-FPM is fine, but don't drink too much of the Nginx Kool-Aid: it will perform almost identically to a traditional Apache/mod_php configuration (there's some good reading on why not to use Nginx). Nginx is good - very good, in fact - but it's certainly not the bottleneck of a Magento store; and given its complexity and lack of native Magento support, most novice system administrators would benefit from using Apache/mod_php over anything else. This may seem like an archaic recommendation over using PHP-FPM, but our performance tests have shown the Nginx solution to be only ~7% faster - when properly configured. The tuning and experience required for a high-performance, reliable Nginx/PHP-FPM set-up is fairly vast if it is to outperform Apache/mod_php. Whichever you choose to use is your call.
The database server is simple, MySQL. But as mentioned earlier, if you have a high converting site, a Master/Slave configuration is advised. Whether you should follow this approach can be determined by reading this article.
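Magento 1 supports the Master/Slave split natively: reads can be routed to a second connection in `app/etc/local.xml`. A sketch of the read connection - the host, credentials and database name are placeholders:

```xml
<!-- app/etc/local.xml fragment: route Magento's reads to the slave -->
<config>
  <global>
    <resources>
      <default_read>
        <connection>
          <host><![CDATA[192.168.0.3]]></host>
          <username><![CDATA[magento]]></username>
          <password><![CDATA[password]]></password>
          <dbname><![CDATA[magento]]></dbname>
          <type>pdo_mysql</type>
          <active>1</active>
        </connection>
      </default_read>
    </resources>
  </global>
</config>
```

Writes continue to use the `default_setup` connection pointing at the master; replication lag is why this only pays off once browsing load is already absorbed by Varnish/FPC.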
Then your peripheral back-end caches, Memcached and Redis. On smaller stores, storing sessions in one Memcached instance and the fast backend cache in another will yield good performance benefits. We don't advocate storing the cache tags in a slow backend, as it causes more problems than it solves - so with a Memcached set-up, you'll have to forfeit cache tagging. Instead, we use a configuration like this.
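For the two-instance Memcached layout described above, the shape of the `app/etc/local.xml` fragments would be roughly as follows - the ports (11211 for sessions, 11212 for cache) are assumptions:

```xml
<!-- app/etc/local.xml sketch: sessions in one Memcached, cache in another -->
<config>
  <global>
    <session_save><![CDATA[memcache]]></session_save>
    <session_save_path><![CDATA[tcp://127.0.0.1:11211?persistent=0]]></session_save_path>
    <cache>
      <backend>memcached</backend>
      <memcached>
        <servers>
          <server>
            <host><![CDATA[127.0.0.1]]></host>
            <port><![CDATA[11212]]></port>
          </server>
        </servers>
      </memcached>
    </cache>
  </global>
</config>
```

Keeping the two on separate instances means a cache flush never evicts live sessions, and each can be sized independently.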
Redis isn't native to Magento, but with the extension from Colin Mollenhour it's a better solution than Memcached: it supports cache tags, session storage and even persistent cache storage - it's not quite as volatile as Memcached. But it does have its drawbacks. We've found on large-scale production stores (>500 orders/hour, >30k uniques/hour) that the cache (and tags) can fill up very quickly, and once the point of saturation has been hit, the LRU mechanism fails somewhat (despite different settings) and causes a massive, immediate bottleneck. So it is prudent to regularly prune old records manually.
So what hardware should be used for what?
Web servers: Fastest CPU, most CPU cores, ratio of 2GB RAM/ Core
DB server: Fast CPU, fastest disk I/O, most RAM
So when multi-purposing your 3 machines, the best layout would be:
Server 1: SSL terminator -> Varnish -> Nginx/Apache -> PHP
Server 2: Nginx/Apache -> PHP, Redis, (MySQL slave)
Server 3: MySQL
As to the specific configuration of each application - well, that's down to your hardware specifications, your store complexity, and the type, nature and sheer volume of your visitors.