giving you a taste.

Using Rails 4.0.0.beta

Rails 4.0.0.beta compiled for production feels faster than Rails 3 to me, especially when proxied by Nginx and initialized with Puma. First, clone the master branch of Rails from GitHub. A multithreading option and a background process queue option are available by default in the Rails 4 initializers. You’ll need Ruby 1.9.3 and probably a public key linked to your GitHub account, as usual in all things of this nature.

pull down the master branch -
$ git clone git://

After the pull, generate a Rails 4 beta app by executing the railties binary in our local copy.

generate the rails app -
$ rails/railties/bin/rails new myrails4app --edge --skip-bundle --skip-test-unit

Hold off on running bundle; this gives us an opportunity to isolate and manage gem dependencies first.

rvm -
$ cd myrails4app
$ rvm --rvmrc --create 1.9.3@myrails4app

Add the puma gem for multithreaded processing.

adding puma - Gemfile
gem 'puma'

If you’re going with a relational database, raise the pool: parameter in your database.yml file; a higher pool is ideal here, so set it to something like pool: 16. Puma defaults to 16 threads when no configuration is specified. Assuming modern server hardware equipped with multi-core processors, this means that by default, 16 request/response threads will be handled, routed and delivered by the app over one puma process.

inside the config directory - database.yml
adapter: sqlite3
database: db/development.sqlite3
pool: 16

A potential bottleneck arises when the database is unable to write at the speed of the incoming request/response threads. We call that blocking I/O. If we’re using Active Record, we might opt for PostgreSQL. Redis would be among the best options available for storing decoupled, thread-safe logic handled by Puma, despite Redis’ single-threaded architecture. In fact, it would be a mistake to dismiss a Puma/Redis combination on the basis of the architectural difference between multithreaded Puma and single-threaded Redis alone.

The Redis server was built for pipelining: a request/response server can be implemented so that it is able to process new requests even if the client hasn’t yet read the old responses. This way it is possible to send multiple commands to the server without waiting for the replies at all, and finally read the replies in a single step. Redis pipelining avoids blocking I/O, but it must be supported by available memory: Redis stores data in memory and later writes to disk.
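To make the idea concrete, here’s a toy model of pipelining in plain Ruby — a sketch of the pattern only, not the redis-rb client; the class and method names are made up for illustration. Commands queue up locally, then the whole batch ships and all replies come back in a single step.

```ruby
# Toy model of request/response pipelining (illustration only).
class ToyServer
  def initialize
    @store = {}
  end

  # Process a whole batch of commands and return a batch of replies.
  def handle(commands)
    commands.map do |op, key, val|
      case op
      when :set
        @store[key] = val
        "OK"
      when :get
        @store[key]
      end
    end
  end
end

class PipelinedClient
  attr_reader :round_trips

  def initialize(server)
    @server = server
    @buffer = []
    @round_trips = 0
  end

  # Queue commands locally instead of paying a round trip per command.
  def set(key, value)
    @buffer << [:set, key, value]
  end

  def get(key)
    @buffer << [:get, key]
  end

  # Ship the whole batch and read all the replies in a single step.
  def flush
    @round_trips += 1
    replies = @server.handle(@buffer)
    @buffer = []
    replies
  end
end

client = PipelinedClient.new(ToyServer.new)
client.set("color", "red")
client.set("shape", "circle")
client.get("color")
puts client.flush.inspect   # => ["OK", "OK", "red"]
puts client.round_trips     # => 1
```

Three commands, one round trip — that’s the whole trick, and it’s why pipelining has to be backed by enough memory to hold the buffered batch.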

Go bigger when you let puma loose if you like. Be mindful that your database.yml limits threading explicitly by pool size, and more implicitly at the limit of the database’s ability to write and return data from threads. The pool parameter represents the maximum number of simultaneous (or close to it) data transactions into or out of the database. In general we want to increase the size of our pool wherever possible. Set your gems and run bundle. Beyond this step it’s typical that you’ll need to prefix rails executables with ‘bundle exec’, or pass in --binstubs to avoid the need for ‘bundle exec’ whenever you rake migrations or ‘rails s’.

install the bundle with binstubs -
$ bundle install --binstubs

Edge guides are helpful, but I spend more time hitting the edgeapi like a boss. Build up something and try to break it in order to check out the new error messaging and exception handling interface in development mode.

New syntax constraints await throughout the config/routes.rb file. Routes complained about how I matched controller actions using the older => match syntax, which is still perfectly fine in Rails 3.2.7. I thought it was cool that Rails 4 asked me to revise this syntax and preface all GET requests with get. Not such a bad idea…
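Here’s a sketch of the kind of revision Rails 4 asks for (the route and controller names are hypothetical):

```ruby
# config/routes.rb
#
# Rails 3 accepted a bare match for any verb:
#   match 'products/:id' => 'products#show'
#
# Rails 4 wants the HTTP verb stated up front:
get 'products/:id' => 'products#show'

# ...or an explicit via: option if you insist on match:
match 'products/:id' => 'products#show', via: :get
```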

Play with the process queue initializer setting. Uncomment config.threadsafe! in the production.rb file to test multithreading with puma. If you do this, watch out for closely coupled procedural calculations. Anything coupled is suspect and ought to be refactored anyway. Look out for object ordering or dependent, interwoven actions generated in the controller (I’m thinking of something like ActiveRecord’s build method, available for relating has_many and belongs_to model relationships, as in shopping cart and order relationships… so use caution when implementing e-commerce cart/order models and controllers under this environment). To protect business-critical object integrity across the app we might take note of these patterns and wrap them in the Mutex class in order to shift the alignment of threads into a data-safe, linear approach. I think mutexing on multiple threads is problematic because it bottlenecks the app: your request sits there spinning with nowhere to go until the threads ahead complete their route. People hate waiting for slow web apps. Forget the Mutex class; the answer is decoupled design through refactoring.
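For reference, a minimal sketch of the Mutex approach described above — 16 threads (matching puma’s default) incrementing one shared counter, serialized through a lock. The counter and thread counts are made up for illustration:

```ruby
require "thread"

counter = 0
lock = Mutex.new

threads = 16.times.map do
  Thread.new do
    1_000.times do
      # Read-increment-write is not atomic across threads; the
      # synchronize block forces them into a data-safe, linear order.
      lock.synchronize { counter += 1 }
    end
  end
end
threads.each(&:join)

puts counter  # => 16000
```

It works, but every thread queues behind the lock — exactly the bottleneck described above, and why refactoring toward decoupled design beats sprinkling mutexes through a controller.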

Let’s now observe the multithreaded Rails 4 app in production: proxy it with nginx on a remote server, a VPS. I like to proxy-pass the puma process upstream at a unix socket. Unix sockets: fast, reliable, secure. We can run rails apps in parallel on the server, each seemingly proxied by its own nginx.conf. I say seemingly because a centralized proxy config file references a sub-directory (sites-enabled, or in some installations sites-available) with an Nginx-syntax include out to all of these per-app nginx.conf files.

We set a symbolic link in the sites sub-directory to complete this type of Nginx include. Each sym link can reference behavioral nginx configuration across many different rails apps running in parallel on a single IP. A perfect example of the power of unix. Our symbolic link makes config options available to the nginx proxy server as incoming requests arrive over the internet. The nginx include function is basically a PHP include() on steroids.

$ sudo ln -s /home/myusername/allmyedgeapps/myrails4app/config/nginx.conf /etc/nginx/sites-enabled/myrails4app
upstream myrails4app { server unix:/tmp/myrails4app.sock fail_timeout=0; }
server {
 root /home/myusername/allmyedgeapps/myrails4app/public;
 try_files $uri/index.html $uri @myrails4app;
 location @myrails4app {
  proxy_pass http://myrails4app;
  proxy_redirect off;
 }
 error_page 500 502 503 504 /500.html;
}

Nginx will use port 80 by default, so I wouldn’t even specify that in the nginx.conf files across all the rails apps sharing port 80 at the proxy outset. Now unleash and initialize puma. Keep in mind that all of its incoming/outgoing request/response objects are proxied and passed to and from nginx. Restart Nginx if you haven’t already, so that it picks up the configs and knows which upstream socket to point incoming requests at for the specified servername. Make sure the hand-off occurs at the same place (the shared unix socket)… so double-check your upstream block: you’re supposed to put three forward slashes in the address passed to puma and only one forward slash ahead of that same location in your nginx upstream block.

let puma run multi-thread from the root directory of your rails app -
$ puma -b unix:///tmp/myrails4app.sock
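If you’d rather not pass flags every time, the same options can live in a config file. A minimal sketch, assuming your puma version reads a config file via -C; the thread counts and socket path are carried over from above:

```ruby
# config/puma.rb -- start with: puma -C config/puma.rb
threads 0, 16                          # min and max threads
bind 'unix:///tmp/myrails4app.sock'    # the socket nginx proxies upstream to
environment 'production'
```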

I’ll go against the trend and advocate configuration over convention just this once, in order to be realistic. Deployment is as important as anything else. Most of our sequence here is arbitrary: after the sym link is made you have to restart nginx, but the rest could occur before or after the unix socket is held down at puma initialization. Once the proxy and puma’s socket are up, go check the servername we used in this discussion. You might forward domains or subdomains; any domain, or even a naked IP address, would be treated equally for any number of rails apps pointed at that IP address, or the IPv6 address too if possible. As Rails developers I know it’s a little unkosher to mention lower-level IPv6, but come on; drawing from all this I hope you have a sense of the possibilities and found this post useful. Ryan Bates taught me how to do much of the puma stuff on