Antony Denyer

Tilling the land of software

about me

I am passionate about creating great software and am particularly interested in associated supporting methodologies, namely XP and TDD.


contact at

email@antonydenyer.co.uk

Working With AWS Lambda

posted in aws, aws lambda, faas, serverless

Here is a collection of tips for using AWS Lambda. These tips are based on our experience of using lambdas to ingest a legacy database into a new Elasticsearch cluster.

Some tips are FaaS specific, whilst others are more generic.

Separate your concerns and be loosely coupled

For some reason, people seem to forget best practices when working with FaaS and, to a certain extent, microservices. The late Jim Weirich gave an excellent talk on decoupling from Rails. It's a common (mal)practice for Rails developers to tightly couple themselves to the framework. With FaaS, whether you're using a framework or the service provider directly, you should aim to abstract yourself away and put clear water between your code and the integration code. It can be difficult, as most examples are glib and, to a degree, want you to be tightly coupled. Create and test libraries in isolation by defining your inputs and outputs, then wire them up to your framework of choice. Separate your concerns: framework integration and business logic.
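As a minimal sketch of this separation (the `greet` module and its behaviour are hypothetical, not from our ingestion project), the business logic lives in a plain function that knows nothing about Lambda, and the handler is just wiring:

```javascript
// Pure business logic: no Lambda, no framework imports.
// This is what you unit test in isolation.
const greet = name => {
  if (!name) throw new Error('name is required')
  return `Hello, ${name}`
}

// Thin integration layer: adapts the Lambda event to the library's
// inputs and the library's output to the Lambda response shape.
const handler = (event, context, callback) => {
  try {
    const response = greet(event.name)
    callback(null, {statusCode: 200, message: response})
  } catch (error) {
    callback(null, {statusCode: 500, message: JSON.stringify(error.message)})
  }
}

module.exports = {greet, handler}
```

The library can then be tested by calling `greet` directly, without mocking any Lambda machinery; only the handler wiring needs an integration test.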

try/catch all the things

With AWS Lambda it appears that if an exception is thrown in your code, the function will exit without completing. A simple solution is to wrap your entry point code in a massive “try catch”.

module.exports.handler = (event, context, callback) => {
  try {
    const response = myLibrary.handle(event)
    callback(null, {statusCode: 200, message: response})
  } catch (error) {
    context.fail({statusCode: 500, message: JSON.stringify(error)})
  }
}

use promises

Personally, I prefer using promises. I think they're aesthetically more elegant, and they allow you to bubble errors up to their most appropriate level: in this case, the lambda response.

module.exports.handler = (event, context, callback) => {
  myLibrary
    .handle(event)
    .then(response => {
      callback(null, {statusCode: 200, message: response})
    })
    .catch(error => {
      context.fail({statusCode: 500, message: JSON.stringify(error)})
    })
}

However, there’s a catch (rimshot): if any of your code does anything synchronously, i.e. outside a promise, you still need that big “try catch”. It has caught us out with the most mundane of errors, and when you’re trying to debug and understand what’s going on it’s a life saver.

So our code becomes:

module.exports.handler = (event, context, callback) => {
  try {
    myLibrary
      .handle(event)
      .then(response => {
        callback(null, {statusCode: 200, message: response})
      })
      .catch(error => {
        context.fail({statusCode: 500, message: JSON.stringify(error)})
      })
  } catch (error) {
    context.fail({statusCode: 500, message: JSON.stringify(error)})
  }
}

log it out

Simply put, you’ve got this black box that you deploy to, and that’s it. No debugging, no helpful information, nothing. We log just about everything so that when we get problems (and you will get them) we can reproduce them locally. We use winstonjs, but you can just console.log if you wish.

Now that we want to log things out, our handler code becomes:

module.exports.handler = (event, context, callback) => {
  try {
    logger.info(`Handling:${context.awsRequestId}`, event)
    myLibrary
      .handle(event)
      .then(response => {
        logger.info(`Finished:${context.awsRequestId}`, response)
        callback(null, {statusCode: 200, message: response})
      })
      .catch(error => {
        logger.error(`Failed:${context.awsRequestId}`, error)
        context.fail({statusCode: 500, message: JSON.stringify(error)})
      })
  } catch (error) {
    logger.error(`Failed:${context.awsRequestId}`, error)
    context.fail({statusCode: 500, message: JSON.stringify(error)})
  }
}

Keep it warm

We’ve noticed that the larger your package gets, the longer it takes to warm up. You can keep your lambdas warm by having a status check call the lambda periodically. As your load increases, you will cross a threshold beyond which you no longer need to do this. However, given that you’re keeping a function warm as opposed to a service, the chances are you’ll have functions that don’t get called very often.

Conclusions

FaaS pushes you to think about software in a different way. It makes it easier for you to think about messages and separating concerns.

Thoughts on Feature Branches

posted in continuous delivery, continuous integration, feature branches

There seem to be two main schools of thought with regard to feature branches. Some argue that feature branches are an abomination and should not be considered ‘continuous’ integration, let alone delivery, whilst others suggest that feature branches are a way of allowing each developer to work without disturbing everyone else.

In my opinion, the golden rule for doing continuous delivery (and therefore integration) is keeping the master branch pristine. By that I mean every commit on master should be buildable, testable and deployable. If you commit something to master, you should be happy for it to be deployed.

Let’s start by exploring some different mindsets and working practices that I’ve experienced when not using feature branches.

Feature Toggles or Branch By Abstraction

I’ve seen this work with disciplined developers. They ensure that all tests are passing before every commit. They stop things from being available to users too early by using Branch By Abstraction or Feature Toggles. They tend to be happy with this arrangement and feel the benefits outweigh the overhead.

The ‘other’ way

Some developers don’t agree with this; they think that the added work and overhead of Branch By Abstraction or Feature Toggles isn’t worth the effort. They do, however, think that committing to master as often as possible is a good thing. The problems happen when they perceive that what they’re working on will take more than a few days and they don’t know how to break it down. The developer will work on their local machine until everything is done. Along the way they make several commits locally but do not push them upstream.

The question is what happens next. In my experience, one of a few things can happen.

Squash commits

The developer rebases their local version and squashes their work down to a single commit. They run all the tests locally and then push their changes. The problem with this is that you get one large commit at the end of the feature. However, you do get a single deployable commit that represents the feature that was worked on. It can be difficult to see what has happened, but at least you can see when it happened. It falls down when people feel the need to work on a feature over an extended period of time because they think the problem cannot be broken down.

Rebase commits

The developer sees the downside of squashing their commits. They want to maintain that incremental history, so rebasing their work against master makes sense. They run all the tests locally and then push the many commits upstream. A common thing I see is developers prefixing every commit message with the feature they were working on. It can make it clearer when work on a feature started and ended, but it’s just convention. There are a couple of problems with this approach. You haven’t run a deploy for every commit; the chances are your CI tool is just going to build the latest commit. It’s also not as easy to revert, as you have to revert many commits. And you’re relying on the developer to clearly indicate when the feature started.

Merge maniac

I’ve seen many developers who just don’t understand git or its elegance. However, they still want to do continuous integration so they can put it on their CV. They pull from master all the time without rebasing and push their commits whenever they can, otherwise they might have to do a merge. The team’s git history ends up looking like Clapham Junction.

Summary

Doing small single deployable commits with a combination of Branch By Abstraction and Feature Toggles has worked for me in the past. It does require buy-in, discipline and understanding from the whole team.

Feature branches - a simpler option?

I’m not a huge fan of feature branches. But they can push people towards better behaviour. To clarify, a feature branch is when someone takes a branch of master and works on it to implement a single feature.

The horror scenario

The worst case for feature branches is the flaccid feature branch: it’s used as a means of isolating development from the rest of the team. When the developer is finished with their work they merge everything onto master. Again, the team’s git history ends up looking like Clapham Junction. The only benefit the developer has gained is that it makes it easy to pair with other people on the team.

The long lived scenario

Another bad case is when someone works on a branch for a long period of time (anything over one day in my book). The developer doing the work probably won’t have any problems merging or integrating their code, but the rest of the team will. The problem occurs after the developer has finished with their branch and merges back onto master. Their team mates will try to integrate their code. The repository history may not look horrific, but the experience of the other developers on the team will be. This is the very opposite of what continuous integration is about.

The ideal scenario

For me, if you decide to use feature branches you should do so in the following manner. Firstly, ensure that your CI tool supports them and is set up to build each feature branch (TeamCity, Jenkins). Every commit to the branch should be rebased against master and pushed to be built. Any build failures should be corrected by amending that commit and force pushing to the feature branch. Every time you rebase against master, each commit on your feature branch should be re-tested. When you are happy with the feature, merge into master using --no-ff. This gives you a nice indication of what was included in the feature. This way of working also puts pressure on the lifetime of the branch: the older it is, the harder it is to rebase.

Conclusions

If you want to benefit from continuous integration, and therefore continuous deployment, you need to think seriously about how to do it. Your team needs to be disciplined, mature and highly collaborative. The problems people face with continuous integration are not the tooling, but how they break down work. Often you’ll see people start working on a feature before they understand the problem and the goal. Personally, I find pairing really helps me out with this. Somehow it seems more justified to spend time breaking down and understanding a problem when there are two of you. The act of making something clear to both halves of the pair is normally enough to break the work down to the size of a single commit.

There are times when this may not be possible, perhaps a large file reorganisation. I’ve found it’s better for everyone to just stop working whilst the change happens, or better yet to mob on the problem. Everyone in the team needs to understand these changes. Could there possibly be a better use case for mobbing?

Angularjs Performance Tweaks

posted in angularjs

Performance tips for angularjs

Two-way data binding using AngularJS is pretty sweet, but it comes at a cost. When dealing with complex data structures or large lists, things can get very slow very quickly. Here are some simple things you can check to give your site the performance boost it needs.

Watcher count

A good rule of thumb is to keep the number of watchers as low as possible. In a nutshell, when you tell Angular to watch something, it will keep checking it to see whether it has changed. Whenever you use an expression, or a directive that takes an expression (e.g. ng-src="expression"), in your html template, or a $scope.$watch in your controller code, Angular adds a watcher to the digest cycle.

Personally I use ng-stats to keep an eye on things.
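If you want a rough count without a tool, a hand-rolled version of what ng-stats does is to walk the scope tree and sum each scope's `$$watchers` array. This sketch relies on Angular 1.x internals (`$$watchers`, `$$childHead`, `$$nextSibling`), so treat it as a debugging aid only:

```javascript
// Count watchers by walking the scope tree through Angular's internal
// child/sibling links and summing each scope's $$watchers array.
function countWatchers(scope) {
  let count = (scope.$$watchers || []).length
  for (let child = scope.$$childHead; child; child = child.$$nextSibling) {
    count += countWatchers(child)
  }
  return count
}
```

In a browser console on an Angular 1.x page you could start the walk from the root scope, e.g. `countWatchers(angular.element(document.body).scope().$root)`.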

ng-if vs ng-show

The difference between ng-if and ng-show is that ng-if actually removes or recreates the element, whereas ng-show will, obviously, just hide or show the element. The interesting thing about this is that if you have something that is hidden, any watchers it has will still be updated even though you are not presenting the information to the user.

So for example consider the following:

<div ng-if="showexpression">
    <span ng-bind="expression"></span>
</div>

<div ng-show="showexpression">
    <span ng-bind="expression"></span>
</div>

They will have the same number of watchers if showexpression evaluates to true. However, if showexpression evaluates to false, then the ng-if version will have fewer watchers, because its contents are not rendered onto the DOM.

Prefer simple expressions

This is not usually a big issue, but try to make sure all your expressions are properties rather than functions. So favour things like:

<div ng-if="model.show">
    <span ng-bind="expression"></span>
</div>

Rather than:

<div ng-if="model.showFunction()">
    <span ng-bind="expression"></span>
</div>

showFunction will be called on every digest cycle, which can cause performance pains.

One time binding

There are many scenarios where all you are doing is displaying data and don’t need two-way data binding. This is where bindonce comes in. It allows you to specify what to bind to and when to watch for changes on the thing you have bound to.

For example:

<div bindonce="Person" bo-title="Person.title">
    <span bo-text="Person.firstname"></span>
    <span bo-text="Person.lastname"></span>
    <img bo-src="Person.picture" bo-alt="Person.title">
    <p bo-class="{'fancy':Person.isNice}" bo-html="Person.story"></p>
</div>

Basically there’s only one watcher on Person. When that changes all the other bo-* directives will fire. This is in contrast to having a watcher on every single expression.