It didn’t take very long at all to develop an initial version of my Rails app – in fact, I’ve found Rails to be immensely productive. Using yum and gem to install components also seems to be a big win. The whole development process was so ludicrously fast, in fact, that it made me suspicious about what exactly I was giving up. It took me quite a while to figure out, and in the end the answer came at the deployment stage.
It turns out that Rails is not thread safe. I was quite surprised to learn this, especially since Ruby does in fact support threading; it’s just that Rails … doesn’t. This is a scalability issue, but it turns out not to be a huge problem: I save SO much time in development, and what I was really trying to do was throw an idea out and see if it sticks, which I can do. If it does stick and I need to scale it, I will gladly either hire some engineers to re-implement in Java, or buy some more servers, or probably both.
As for deployment, that isn’t nearly as well sorted out as the development story might suggest. My original thought was to use FastCGI and plug it into Apache. Way back when (1997–1998 vintage) I worked with some large Perl applications that used FastCGI, and it was always a nightmare: a bunch of processes spun up, some of them dying, turning into zombies, and consuming large amounts of CPU. That experience apparently still reflects the state of the art, and it’s the main reason FastCGI is a bad idea. James Duncan Davidson has an excellent discussion of this topic on his blog. Further trips through the blogosphere reveal yet more information, including the “pack-of-mongrels” approach, which is detailed on Coda Hale’s blog. Thanks to both of these guys for very succinctly laying out the issues and solutions around Rails deployment.
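For the curious, the pack of mongrels is driven by a small YAML file read by the mongrel_cluster gem. This is just a sketch – the paths, ports, and instance count here are hypothetical, not my actual setup:

```yaml
# config/mongrel_cluster.yml -- hypothetical example
# Defines two Mongrel instances on consecutive ports (8000, 8001)
# for the front-end proxy to balance across.
cwd: /var/www/myapp/current
environment: production
address: 127.0.0.1
port: 8000
servers: 2
pid_file: tmp/pids/mongrel.pid
```

With that in place, `mongrel_rails cluster::start -C config/mongrel_cluster.yml` brings the whole pack up, and `cluster::stop` / `cluster::restart` manage it as a unit.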
The summary is that I have a cluster of Mongrel servers running on my box, and I can deploy the application in an automated way (“cap deploy”) from my user account and have my latest code rolled out from development to production. The front-end server is Apache, which seems to have grown load-balancing abilities in the form of mod_proxy_balancer. This is reasonably slick, and better than what I would have set up for a Java app. Having said that, I would have been able to deploy my Java app in Tomcat, which can handle a lot more traffic than my pair of mongrels. Of course, I don’t actually have any traffic, so much of this is moot anyway. All of this probably took about 4 hours of work (including some other minor deployment issues I ran into). Not bad at all.
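The Apache side boils down to a balancer pool plus a proxy rule. A minimal sketch, assuming two mongrels on ports 8000 and 8001 and a hypothetical server name:

```apache
# Hypothetical Apache vhost fragment using mod_proxy_balancer.
# Requests are spread across the pair of Mongrel back-ends.
<Proxy balancer://mongrel_cluster>
  BalancerMember http://127.0.0.1:8000
  BalancerMember http://127.0.0.1:8001
</Proxy>

<VirtualHost *:80>
  ServerName example.com
  ProxyPass / balancer://mongrel_cluster/
  ProxyPassReverse / balancer://mongrel_cluster/
</VirtualHost>
```

Requires mod_proxy, mod_proxy_http, and mod_proxy_balancer to be loaded; the `ProxyPassReverse` line rewrites redirect headers from the back-ends so they point at the front-end host.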