Deploying ServiceStack to Debian using Capistrano, hosted with nginx and FastCGI

Over the last month we’ve started using ServiceStack for a couple of our API endpoints. We’re hosting these projects on a Debian Squeeze VM using nginx and Mono. We ran into various problems along the way; here’s a breakdown of what we found and how we solved the issues. Hopefully you’ll find this useful.

## Mono

We’re using version 2.11.3. It contains various bug fixes compared to the version that ships with Squeeze, notably problems with the min pool size specified in a connection string. Rule of thumb: if there’s a bug in Mono, get the latest stable!

## Nginx

We’re using nginx and FastCGI to host the application. This has made life easier for automated deployments, as we can specify the socket file based on the incoming host header. There’s a discussion about the best way to host ServiceStack on Linux over at Stack Overflow. Our nginx config simply maps each incoming host header to the matching FastCGI socket file.

When we deploy the application we nohup the Mono FastCGI service from Capistrano, where:

* `fqdn` is the fully qualified domain name, i.e. the URL you are requesting
* `latest_release` is the directory where the web.config is located

## Capistrano

To get the files onto the server with Capistrano we followed the standard deploy recipe. Our setup differs from the norm in that we have a separate deployment repository from the application. This allows us to re-deploy without re-building the whole application. Currently we use TeamCity to package up the application; we then unzip the packaged application into a directory inside the deployment repository. We use deploy_via copy to get everything onto the server, which means you need some authorized_keys set up on your server. We store our SSH keys in the git repository and pull all the keys into Capistrano like this:

    ssh_options[:keys] = Dir["./ssh/*id_rsa"]

### No downtime during deployments ... almost

Most people deal with deployment downtime using a load balancer: take the server out of rotation, update it, bring it back in. The problem with this approach is that it’s slow, and you need to wait for the server to finish what it’s doing. Our application can also have long-running background tasks (up to 1000 ms), and we didn’t want to kill those off when we deployed.

So we decided to take advantage of a rather nice feature of sockets. When you start FastCGI on a socket, the process gets a handle to the socket file. This means that you can move the file! A long-running task can carry on running against the old code and finish, while you move the new site into production. Amazing! That’s what we have so far; there’s room for improvement, feedback appreciated.
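The socket trick relies on a standard Unix property: a process that already holds an open handle to a file keeps that handle after the file is renamed. A minimal demonstration of that property (the file path is an arbitrary example, not part of the actual deploy scripts):

```shell
# start clean
rm -f /tmp/fluffy-deploy-demo /tmp/fluffy-deploy-demo.old

# open a handle to a file, as the running fastcgi process does with its socket
exec 3<>/tmp/fluffy-deploy-demo

# move the file aside, as the deploy does before starting the new release
mv /tmp/fluffy-deploy-demo /tmp/fluffy-deploy-demo.old

# the old handle still works: a long-running task on the old code
# can keep going until it finishes
echo "old process still alive" >&3
exec 3>&-

cat /tmp/fluffy-deploy-demo.old    # -> old process still alive
```

The new release is then started on a fresh socket at the original path, so nginx picks it up while the old process drains.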

ServiceStack the way I like it

Over the last month we’ve started using ServiceStack for a couple of our API endpoints. Here’s a breakdown of how I configured ServiceStack to work the way I like it. Hopefully you’ll find this useful.

## Overriding the defaults

Some of the defaults for ServiceStack are, in my opinion, not well suited to writing an API. This is probably down to the framework’s desire to be a complete web framework, so our AppHost overrides a number of them. For me, the biggest annoyance was trying to find the DefaultContentType setting. I found some of the settings unintuitive to locate, but it’s not like you have to do it very often!

## Timing requests with StatsD

We’ve added a StatsD feature, which was very easy to do: it times how long each request took and logs it to StatsD. It would have been nicer if we could wrap the request handler, but that kind of pipeline is foreign to the framework, so you need to subscribe to the begin and end messages. There’s probably a better way of recording the time spent, but hey ho, it works for us.

One of my biggest bugbears with ServiceStack was the insistence on a separate request and response object, the presence of a special property, and a naming convention to follow, all in the name of sending error and validation messages back to the client. It’s explained at length on the wiki, and Demis was good enough to answer our question.

## RestServiceBase

The simple RestServiceBase that comes with the framework provides an easy way of getting started, but there aren’t many hooks you can use to manipulate how it works. It would be nice if you could inject your own error-response creator. We ended up inheriting from RestServiceBase and overriding how it works: we basically chopped out the bits we’re not using and changed the code that creates the error response. This allows us to respond with an error no matter what the request type is or what response you are going to send back. It gives us extra flexibility above what is provided out of the box; in a nutshell, if there’s an exception in the code we will always get a stack trace in the response when debug is on.

## Validation

We had the same issue with the validation feature: if you don’t follow the convention you don’t get anything in the response body. So we followed the same practice, copied the ValidationFeature, and tweaked it how we wanted it. [Read More]
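For reference, the wire format a StatsD timing feature ultimately emits is trivially simple: one plain-text line per metric. A hedged sketch of the format (the bucket name is an invented example, not ServiceStack’s actual naming):

```shell
request_ms=42                            # measured between the begin and end messages
bucket="api.requests.GetOrders"          # invented example bucket name
payload="${bucket}:${request_ms}|ms"     # StatsD timing format: <bucket>:<value>|ms
echo "$payload"                          # -> api.requests.GetOrders:42|ms
# in production this line is fired over UDP at the statsd daemon, e.g. (bash):
#   echo "$payload" > /dev/udp/127.0.0.1/8125
```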

Run all SQL files in a directory

Create a batch file and change the parameters accordingly to run all the SQL files in a directory against your database.
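A shell equivalent of such a batch file might look like the sketch below. The client command and connection parameters are placeholders to swap for your own (e.g. `sqlcmd` for SQL Server, `psql` for Postgres); it defaults to a dry run that only prints the invocations:

```shell
# run every *.sql file in a directory through the database client in $DB_CMD
run_all_sql() {
  dir="$1"
  for f in "$dir"/*.sql; do
    [ -e "$f" ] || continue        # directory had no .sql files
    echo "Running $f"
    $DB_CMD "$f"                   # e.g. DB_CMD="sqlcmd -S localhost -d MyDb -i"
  done
}

# dry run: print the sqlcmd invocations instead of executing them
DB_CMD="echo sqlcmd -S localhost -d MyDb -i"
run_all_sql .
```

Files run in the shell’s glob order (effectively alphabetical), so number your migration scripts if order matters.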

Stub responses with Nest.ElasticClient

We’ve started using ElasticSearch at work for some of our projects. When we started out, doing simple web requests was easy enough, but as the complexity of what we were doing grew, it became obvious that we were starting to write our own DSL for ElasticSearch. Surely someone else has already done this? [Read More]

Cache Fluffer / Gorilla Caching / Cache Warmer

The relatively simple introduction of a cache fluffer can make a huge difference to performance, particularly at peak load. The idea is simple: keep the cache up to date so you don’t have to go and fetch the data when the user requests the site. [Read More]
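A cache fluffer can be as small as a scheduled job that refreshes each cached item and atomically swaps it in, so readers never see a cold or half-written entry. A hedged, file-based sketch (the fetch function and paths are stand-ins for your real data source):

```shell
CACHE_DIR="${CACHE_DIR:-/tmp/warm-cache}"
mkdir -p "$CACHE_DIR"

# stand-in for the real (slow) data fetch -- replace with your query or HTTP call
fetch_data() { echo "fresh data for $1"; }

# refresh one cache entry: write to a temp file, then mv, so readers
# always see either the old value or the complete new one
warm() {
  fetch_data "$1" > "$CACHE_DIR/$1.tmp" && mv "$CACHE_DIR/$1.tmp" "$CACHE_DIR/$1"
}

warm homepage
cat "$CACHE_DIR/homepage"    # -> fresh data for homepage
# in production, run the warm calls from cron or a loop:
#   while true; do warm homepage; sleep 30; done
```

Requests then read only from the cache, and the fluffer alone pays the cost of fetching.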