Antony Denyer

Tilling the land of software


Thoughts on Feature Branches

posted in continuous delivery, continuous integration, feature branches

There seem to be two main schools of thought with regard to feature branches. Some argue that feature branches are an abomination and should not be considered 'continuous' integration, let alone delivery. Others suggest that feature branches are a way of allowing each developer to work without disturbing everyone else.

In my opinion the golden rule for doing continuous delivery (and therefore integration) is keeping the master branch pristine. By that I mean every commit on master should be buildable, testable and deployable. If you commit something to master you should be happy for it to be deployed.

Let's start by exploring some different mindsets and working practices that I've experienced when not using feature branches.

Feature Toggles or Branch By Abstraction

I've seen this work with disciplined developers. They ensure that before every commit all tests are passing. They stop things from being available to users too early by using Branch By Abstraction or Feature Toggles. They tend to be happy with this arrangement and feel the benefits outweigh the overhead.
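To make that concrete, here's a minimal feature-toggle sketch in JavaScript; the toggle name and the two checkout flows are hypothetical, not from any particular library. The point is that the half-built code path lives on master but stays dark until the flag is flipped:

// both implementations live on master; the toggle decides which one runs
var toggles = { newCheckout: false }; // flip to true when the feature is finished

function legacyCheckoutFlow(basket) {
  return 'checked out ' + basket.length + ' items the old way';
}

function newCheckoutFlow(basket) {
  return 'checked out ' + basket.length + ' items the new, half-built way';
}

function checkout(basket) {
  return toggles.newCheckout ? newCheckoutFlow(basket) : legacyCheckoutFlow(basket);
}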

The ‘other’ way

Some developers don't agree with this; they think the added work and overhead of Branch By Abstraction or Feature Toggles isn't worth the effort. They do, however, think that committing to master as often as possible is a good thing. The problems happen when they perceive that what they're working on will take more than a few days and they don't know how to break it down. The developer will work on their local machine until everything is done. Along the way they make several commits locally but do not push them upstream.

The question is what happens next. In my experience, one of a few things can happen.

Squash commits

The developer rebases their local work and squashes it down to a single commit. They run all the tests locally and then push their changes. The downside is that you get one large commit at the end of the feature; the upside is a single deployable commit that represents the feature that was worked on. It can be difficult to see what happened, but at least you can see when it happened. The approach falls down when people feel the need to work on a feature over an extended period of time because they think the problem cannot be broken down.
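As a rough sketch, the squash route looks something like this, assuming the work was committed on a local master:

git fetch origin
git rebase -i origin/master   # mark every commit after the first as "squash"
# run the full test suite locally, then:
git push origin master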

Rebase commits

The developer sees the downside of squashing their commits. They want to keep that incremental history, so rebasing their work against master makes sense. They run all the tests locally and then push the many commits upstream. A common convention I see is developers prefixing every commit message with the feature they were working on; it can make it clearer when work on a feature started and ended, but it's just convention. There are a couple of problems with this approach. You haven't run a deploy for every commit; the chances are your CI tool is just going to build the latest one. It's also not as easy to revert, as you have to revert many commits. And you're relying on the developer to clearly indicate where the feature started.
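The rebase route is similar, minus the squashing. The feature prefix in the commit message is just an illustration of the convention mentioned above:

git commit -m "FEAT-42: extract payment gateway interface"
git fetch origin
git rebase origin/master   # replay every local commit on top of the latest master
git push origin master     # the whole series lands upstream in one push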

Merge maniac

I've seen many developers who just don't understand git or its elegance. However, they still want to do continuous integration so they can put it on their CV. They pull from master all the time without rebasing and push their commits whenever they can; otherwise they might have to do a merge. The team's git history ends up looking like Clapham Junction.

Summary

Making small, individually deployable commits, combined with Branch By Abstraction and Feature Toggles, has worked for me in the past. It does require buy-in, discipline and understanding from the whole team.

Feature branches - a simpler option?

I'm not a huge fan of feature branches, but they can push people towards better behaviour. To clarify: a feature branch is when someone takes a branch off master and works on it to implement a single feature.

The horror scenario

The worst case for feature branches is the flaccid feature branch: it's used as a means of isolating development from the rest of the team. When the developer is finished with their work they merge everything onto master. Again, the team's git history ends up looking like Clapham Junction. The only benefit the developer has gained is that it makes it easy to pair with other people on the team.

The long lived scenario

Another bad case is when someone works on a branch for a long period of time (anything over one day, in my book). The developer doing the work probably won't have any problems merging or integrating their code, but the rest of the team will. The problem occurs after the developer has finished with their branch and merges back onto master, and their team mates try to integrate their code. The repository history may not look horrific, but the experience of the other developers on the team will be. This is the very opposite of what continuous integration is about.

The ideal scenario

For me, if you decide to use feature branches you should do so in the following manner. Firstly, ensure that your CI tool supports them and is set up to build each feature branch (TeamCity and Jenkins both can). Every commit to the branch should be rebased against master and pushed to be built. Any build failure should be corrected by amending the offending commit and force pushing the feature branch. Every time you rebase against master, each commit on your feature branch should be re-tested. When you are happy with the feature, merge it into master using --no-ff. This gives you a nice indication of what was included in the feature. This way of working also puts pressure on the lifetime of the branch: the older it is, the harder it is to rebase.
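Put together, a sketch of that cycle looks something like this (branch and remote names are illustrative):

git checkout -b feature/search origin/master
# ...work in small commits, pushing so CI builds the branch...
git push -u origin feature/search

git fetch origin
git rebase origin/master                            # now re-test every commit
git push --force-with-lease origin feature/search   # a safer force push

# when the feature is done:
git checkout master
git pull --rebase origin master
git merge --no-ff feature/search   # the merge bubble marks the feature
git push origin master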

Conclusions

If you want to benefit from continuous integration, and therefore continuous deployment, you need to think seriously about how to do it. Your team needs to be disciplined, mature and highly collaborative. The problems people face with continuous integration are not about tooling but about how they break down work. Often you'll see people start working on a feature before they understand the problem and the goal. Personally I find pairing really helps me with this. Somehow it seems more justified to spend time breaking down and understanding a problem when there are two of you. The act of making something clear to both halves of the pair is normally enough to break the work down to the size of a single commit.

There are times when this may not be possible, perhaps a large file reorganisation. I've found it's better for everyone to just stop working whilst the change happens, or better yet to mob the problem. Everyone in the team needs to understand these changes. Could there possibly be a better use case for it?

Angularjs Performance Tweaks

posted in angularjs

Performance tips for AngularJS

Two-way data binding in AngularJS is pretty sweet, but it comes at a cost. When dealing with complex data structures or large lists, things can get very slow very quickly. Here are some simple things you can check to give your site the performance boost it needs.

Watcher count

A good rule of thumb is to keep the number of watchers as low as possible. In a nutshell, when you tell Angular to watch something it will keep checking it to see if it has changed. Whenever you use an expression, or a directive that takes an expression (e.g. ng-src="expression"), in your HTML template, or a $scope.$watch in your controller code, Angular adds a watcher to the digest cycle.

Personally I use ng-stats to keep an eye on things.
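If you want a rough count without any tooling, a console sketch like the following works. Note that it walks Angular's internal $$watchers arrays, so treat the number as indicative only:

(function countWatchers() {
  var total = 0;
  function walk(element) {
    // a scope may be attached as a regular or an isolate scope
    angular.forEach(['$scope', '$isolateScope'], function (key) {
      var scope = element.data() && element.data()[key];
      if (scope && scope.$$watchers) {
        total += scope.$$watchers.length;
      }
    });
    angular.forEach(element.children(), function (child) {
      walk(angular.element(child));
    });
  }
  walk(angular.element(document.body));
  console.log('watcher count:', total);
})();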

ng-if vs ng-show

The difference between ng-if and ng-show is that ng-if actually removes or recreates the element, whereas ng-show will, obviously, just hide or show it. The interesting thing about this is that if you have something that is hidden, any watchers it has will still be updated even though you are not presenting the information to the user.

So for example consider the following:

<div ng-if="showexpression">
    <span ng-bind="expression"></span>
</div>

<div ng-show="showexpression">
    <span ng-bind="expression"></span>
</div>

They will have the same number of watchers if showexpression evaluates to true. However, if showexpression evaluates to false, the ng-if version will have fewer watchers because the element is never rendered into the DOM.

Prefer simple expressions

This is not usually a big issue, but try to make sure all your expressions are properties rather than functions. So favour things like:

<div ng-if="model.show">
    <span ng-bind="expression"></span>
</div>

Rather than:

<div ng-if="model.showFunction()">
    <span ng-bind="expression"></span>
</div>

showFunction will be called on every digest cycle, which can cause performance pain.
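A sketch of the difference in controller code (module, controller and model names are illustrative): instead of exposing a function for the template to call on every digest, compute the flag once and recompute it only when its inputs change.

var app = angular.module('demo', []);

app.controller('ItemsCtrl', function ($scope) {
  $scope.model = { items: [1, 2, 3] };

  // called on every single digest if the template uses ng-if="model.showFunction()"
  $scope.model.showFunction = function () {
    return $scope.model.items.length > 0;
  };

  // a plain property: computed here, and recomputed only when we change the data
  $scope.model.show = $scope.model.items.length > 0;
});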

One time binding

There are many scenarios where all you are doing is displaying data and you don't need two-way data binding. This is where bindonce comes in. It allows you to specify what to bind to and when to watch for changes on the thing you have bound to.

For example:

<div bindonce="Person" bo-title="Person.title">
    <span bo-text="Person.firstname"></span>
    <span bo-text="Person.lastname"></span>
    <img bo-src="Person.picture" bo-alt="Person.title">
    <p bo-class="{'fancy':Person.isNice}" bo-html="Person.story"></p>
</div>

Basically there's only one watcher, on Person. When that changes, all the other bo-* directives fire. This is in contrast to having a watcher on every single expression.

Continuous Delivery - Chaos Build/Release Monkey

posted in continuous delivery

I think some people are missing the benefits of continuous deployment and are focusing on infrastructure and tools rather than process. The initial incarnation of this chain of thought was continuous integration, whereby whenever you check in your code it is built by a central server, ensuring that your code builds with everyone else's. Then it was taken a step further with automated deployment: your code would be deployed automatically after the build and unit tests had run successfully. Then people took it further still and started declaratively building the infrastructure required to run the application.

I think we’ve missed a step.

I think we've missed the continuous part of the puzzle. When you're actively working on a project this isn't a problem: code is being deployed on a regular basis, your tests are running and everything is being deployed. But most software projects are not being actively developed; they're the legacy applications running your CMS. In my opinion, to truly benefit from continuous deployment you need to be building and deploying your code continuously, even when there's no new code. This helps keep everything up to date. Some automatic updates may break your deployment process, and you won't know until you try to deploy that urgent bug fix. If your CI server is updating external dependencies, you'll know straight away when something gets deprecated, rather than in six months.
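The cheapest way to get this is a scheduled build. As a sketch, a cron entry on a build agent would do it (build.sh and deploy.sh are hypothetical stand-ins for your own pipeline; most CI servers expose the same idea as a scheduled trigger):

# full build, test and deploy at 02:00 every night, whether or not anything changed
0 2 * * * cd /srv/myapp && git pull && ./build.sh && ./deploy.sh staging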

This is why we need a release chaos monkey

The Netflix Chaos Monkey brings the idea of randomly failing infrastructure to push your code and infrastructure into an anti-fragile state. Your builds and deployments should be the same: you should be able to deploy at a moment's notice. The only way to get there, in my opinion, is to be randomly deploying your code. The tangential benefits are numerous. Your monitoring will improve, as you won't be sitting there watching a deploy go out to production. The cost of code maintenance will move to the forefront as things break sooner. Developers will be less scared to work on legacy code because they know it's in a deployable state.
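A release chaos monkey can start out as small as a script like this, run hourly from a scheduler (deploy.sh is again a hypothetical stand-in):

#!/usr/bin/env bash
# release-chaos-monkey.sh: wake at a random point in the hour, then deploy,
# so releases happen at unpredictable times rather than on a human's schedule
sleep $(( RANDOM % 3600 ))
./deploy.sh production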

In summary…

Release all the things