Using Puppet to manage AWS agents (on Ubuntu)

One of the first things any cloud-ification and/or devops-ification project needs to do is figure out how it is going to manage its assets. In my case, I use Puppet.

AWS is starting to do more intensive integrations into things using agents that sit in your environment. This is a good, if not great, thing. Except if you want to, oh, you know, control what is installed and how in said environment.

Now, it would be extremely nice if AWS took the approach of Puppet Labs and hosted a package repository, which would mean that one could do this in a manifest to install the Code Deploy agent:

  package { 'codedeploy-agent':
    ensure => latest,
  }

  service { 'codedeploy-agent':
    ensure  => running,
    enable  => true,
    require => Package['codedeploy-agent'],
  }

Nothing is ever that easy, of course. If I were using RedHat or Amazon Linux I could just use the source attribute of the package type, as below, to get around the lack of a repository. But I’m using Ubuntu.

  package { 'codedeploy-agent':
    ensure   => present,
    source   => "",
    provider => rpm,
  }

So down the rabbit hole I go…

First, I needed a local repository, which I set up via the puppet-reprepro module. Which worked well — except for the GPG part. What. A. Pain.
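For context, the reprepro side of that is mostly a conf/distributions file; here is a minimal sketch (the origin, codename, and key ID are placeholders, not my actual values):

```text
# conf/distributions (sketch; values are placeholders)
Origin: local
Codename: trusty
Architectures: amd64 all
Components: main
SignWith: ABCD1234
```

The SignWith line is the one that drags you into the GPG pain mentioned above.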

After that, I cracked open the install script and fetched the .deb file to install…

$ aws s3 cp s3://aws-codedeploy-us-west-2/latest/VERSION . --region us-west-2
download: s3://aws-codedeploy-us-west-2/latest/VERSION to ./VERSION
$ aws s3 cp s3://aws-codedeploy-us-west-2/releases/codedeploy-agent_1.0-1.751_all.deb . --region us-west-2
download: s3://aws-codedeploy-us-west-2/releases/codedeploy-agent_1.0-1.751_all.deb to ./codedeploy-agent_1.0-1.751_all.deb

…and dropped it into the directory the repo slurps files from.

Aaaannnnnd, nothing.

Turns out that the .deb AWS provides omits the Priority field from its control file. But reprepro treats that field as mandatory. No problem.

$ mkdir contents
$ cd contents/
$ dpkg-deb -x ../codedeploy-agent_1.0-1.751_all.deb .
$ dpkg-deb -e ../codedeploy-agent_1.0-1.751_all.deb ./DEBIAN
$ grep Priority DEBIAN/control

Alright. Add in our line.

$ grep Priority DEBIAN/control
Priority: Optional

And now to package it all back up

$ dpkg-deb -b . ../codedeploy-agent_1.0-1.751_all.deb
dpkg-deb: building package 'codedeploy-agent' in '../codedeploy-agent_1.0-1.751_all.deb'.

Ta-da! The package can now be hosted in a local repository and installed through the standard package type.
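Those manual steps are easy enough to script. Here is a minimal, self-contained sketch of the control-file tweak, using a stand-in control file in place of the one dpkg-deb -e extracts:

```shell
# Stand-in for the control file that dpkg-deb -e extracts into DEBIAN/
mkdir -p contents/DEBIAN
printf 'Package: codedeploy-agent\nVersion: 1.0-1.751\nArchitecture: all\n' \
  > contents/DEBIAN/control

# Append the Priority field only if it is missing
grep -q '^Priority:' contents/DEBIAN/control || \
  echo 'Priority: Optional' >> contents/DEBIAN/control

grep '^Priority' contents/DEBIAN/control
```

In the real flow, the last step would instead be the dpkg-deb -b repack shown above.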

But we’re not through yet. AWS installs a cron job to check daily for updates to the package. Sounds good ‘in theory’, but I want to control when packages are updated, necessitating:

  cron { 'codedeploy-agent-update':
    ensure => absent,
  }

Now we’re actually in control.

A few final comments;

  • It’d be nice if AWS would provide a repository to install their agents via apt — so I can selfishly stop managing a repo
  • It’d be nice if the Code Deploy agent had the Priority line in the control file — so I can selfishly stop hacking the .deb myself. The Inspector team’s package does…
  • It’d be nice if AWS didn’t install update scripts for their agents
  • The install scripts for Code Deploy and Inspector are remarkably different. The teams should talk to each other.
  • The naming conventions of the packages for Code Deploy and Inspector are different. The teams should talk to each other.

(Whinging aside, I really do like Code Deploy. And Inspector looks pretty cool too.)

Saunter 2.0

Welp. After 2+ years of tinkering and appearing to be an absentee open source landlord, I just pushed Saunter 2.0.0 up to PyPI. When I list the things that have changed, it is rather silly to have allowed so much time to elapse, but…

  • Remove all references to Selenium Remote Control (RC). Enough time has passed that there are no excuses anymore for not being on WebDriver
  • The config file format has fundamentally changed to YAML. This is why there has been a major version bump. The Saunter page has the details of the new format.
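To give a flavour of the change, a YAML config is shaped roughly like this; the keys below are illustrative only, the Saunter page has the real ones:

```yaml
# Illustrative only; consult the Saunter page for the actual keys
saunter:
  base_url: http://localhost:8000
  browser: firefox
```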

There are still some hiccups, but the main one is that random ordering of script execution and parallelization doesn’t work yet. I know how to fix it via monkey patch, but…

As always, if you find any bugs, log them in GitHub. I’ve fixed my notification settings to actually see these ones and will be going through the existing ones over the remainder of the month.

Stop Being A Language Snob: Debunking The ‘But Our Application Is Written In X’ Myth

The folks over at Sauce Labs just published a guest post I wrote on their blog: Stop Being A Language Snob: Debunking The ‘But Our Application Is Written In X’ Myth.

This doesn’t get debunked nearly enough. Consider this fair warning that this might end up being a 2015 theme.

Lessons learned from 19 months of a delivery manager

This is one of the talks I did at Øredev last week. As usual, my decks are generally useless without me in front of them. But lucky(?) for you, all the sessions were recorded.


But if you are too lazy to listen to me for 40 minutes, here is the deck and the content I was working from on stage. Of course, I don’t actually practice my talks, so some content was added and other content was removed at runtime, but…

WTF is a Delivery Manager?!?!

For about a year and a half I held the title of ‘Delivery Manager’, which means a whole lot, and nothing, at the same time. And therein lies its potency. Just as Andy Warhol famously said that ‘Art is anything you can get away with’, being a Delivery Manager is anything you make it. In my case it was essentially anything and everything to do with getting our application into the hands of the end users.

Tip: Don’t put yourself in a box

Before we landed on this title, other ones we considered were ‘Doer of Stuff’, ‘Chaos Monkey’ (blatantly stolen from Netflix), and ‘Minister Without Portfolio.’ But we eventually went with the more business-palatable ‘Delivery Manager’. Since Delivery Manager is a made-up title, it is useful to describe it in terms and titles people are used to seeing: Product Owner, Production Gatekeeper and Process Guardian are the three umbrella ones I most associated with. But even those could be sub-divided. And possibly sub-sub-divided. It’s also important to recognize that the percentages of these roles are ever in flux. And, just to keep things interesting, they can sometimes be in conflict with each other.

Because of the mix of problems Delivery Managers will have to, erm, manage, there is a certain skillset required to be effective at it. Or perhaps not a specific one, but a breadth of one: Testing, Development, Operations, Marketing, Systems, Accounting, etc. And I would suggest that you have done a stint consulting as well. There is nothing like it in terms of being a crucible for problem identification and solving. That doesn’t mean of course that you have to be a perfect mix of all these things. It is inevitable that you will be more specialized in one over the others, and I would be suspicious of anyone who said they weren’t. I, for instance, came up through the testing ranks. Specifically the ‘context’ ranks. That, for me, is my secret sauce.

And yes, there is a tonne of irony in spending a decade saying ‘I am not a gatekeeper! I am a provider of information!’ and then moving precisely into the gatekeeper role. But in that irony I learned a lot. Not just about being /a/ Delivery Manager, but about how /I/ am a Delivery Manager.


While everything is important in one degree or another, this is perhaps the one thing I leaned on every single day. When faced with a request, the default answer is always No. Well, it is more ‘No* (* but help me to say Yes)’. And don’t be subtle or selective about the application of this rule. At 360 there is an entire department I dealt with on a daily basis and they could tell you my default answer is going to be ‘No’ to any request. But that doesn’t stop them from asking since they know about the asterisk. What it does is force them to think about their request ahead of time beyond simplistic ‘because’ terms.

This is not a new idea that I ‘discovered’. I blatantly stole it from someone who was at one point the Product Owner for Firefox (I think… I can’t find the article now; if you find it please let me know). It all boils down to an economics problem around opportunity cost. If you say Yes to everything then the queues will overflow and nothing will get done. But if you say No to everything and selectively grant Yeses then there is order [rather than chaos] in the pipes.

Tip: Learn about economics; specifically Opportunity Cost (but Sunk Costs are also useful to understand when involved in No* discussions)

Tip: Unless you really understand the problem you are being asked to solve, you cannot say yes

Mature organizations understand this at their core. It might be you that levels them up to this understanding though.


Being the person who always says No won’t always make you friends. At first, at any rate. You will become everyone’s enemy … and everyone’s friend. Welcome to the balancing act. I would argue that if you are everyone’s friend all the time then you are not doing your job properly. Part of the animosity can be dealt with through explaining the asterisk, but also by communicating who ‘your’ client is. Remember, the hats being worn have words on them like ‘Owner’, ‘Guardian’ and ‘Gatekeeper’. Your client in this role may not be who people think it is. In fact, it almost assuredly isn’t. Yours is the application and the [delivery] pipeline.

Tip: The Delivery Pipeline is a product

This will cause friction; and depending on how your company is structured it could be a non-trivial amount. But as long as you are consistent in your application of No* and are transparent in the reasoning why, in my experience, it can be easily overcome.

Tip: Do you know what business you are in? Is that the business the business thinks it is in? It’s really hard to win that battle.


The role of ‘Delivery Manager’ can sometimes be a lone wolf one, but at other times you will have people working for you [as I did]. It is critical to remember that as a ‘people’ manager your primary goal is to protect everyone under you. Physically, psychologically and work-ly. You need to be able to do their job but also to let /them/ do it. Just because you /could/ be the hero doesn’t mean that it is healthy for you or them. Like you would with a child, let them work through it and be ready to catch them if they start to fall. [The existence of that metaphor does not mean of course that you should treat them like kids though…] Don’t hold them to higher standards than you hold yourself to. But also don’t inflict yourself on them. I’m a workaholic (thanks Dad!); it’s unfair to put that onto others. I also don’t believe in work-life balance (especially in startups) favouring harmony instead — but what is harmonious for me is likely not the same for someone else.

In order to do that you need to constantly be running defence for your charges, human and software. Invite yourself to meetings; constantly be vigilant for conversations that will affect them. Which unfortunately means you miss out on plugging in your headphones and listening to music all day.

Tip: Ensure grief from No* comes back to you, not your people

Tip: People, not resources

Tip: Ask the people who work for you if they feel you have their back. If not, you’re doing something wrong.

You Will Screw Up

I tend not to speak in terms of absolutes, but here is a truth: you will screw up, potentially largely, in this role. You are making decisions that require a crazy amount of information to be assimilated quickly, and if it is not perfectly done or you are missing any [maliciously or innocently] then you are hooped. And that’s ok. Pick yourself up, and go forward. That is the only way you can go. We no longer have the luxury of going back. Remember, tough calls are your job.

Bending to go forward is not a new thing. I’m sure I heard it a couple times before it really stuck, but I credit Brian Marick’s talk at Agile 2008 for that sticking. I can’t find a video of it [though I didn’t try hard] but the text of it can be found a

Tip: Be careful though; screw up too much and Impostor Syndrome can set in. And it sucks. A lot. Get help. See Open Sourcing Mental Illness and Mental Health First Aid

Tip: Make sure your boss is onboard with the ‘go forward’ approach

Tip: Confidence is infectious, be patient zero

Know and be true to yourself
One of the biggest things I’ve learned in the last bit is around how /I/ function. Some people find MBTI hand-wavy and hokey, but I think it’s useful, not in terms of how I choose to interact with people, but in understanding how I am. I’m ENTP. Hilariously so. That’s not going to jibe well with organizations that are ‘typed’ differently. That’s been a huge insight for me.

Tip: For a lark, take an MBTI test. It’s a heuristic, but still interesting

Being a geek, I also think of things in terms of the classic AD&D alignment scale. I lean towards Chaotic Good. We have a goal; there are no rules here. Especially ‘stupid’, ‘artificial’ ones.

And that has got me into trouble more than once. I don’t doubt that it will again in the future.

But I also have a strongly defined set of ethics and philosophy around how things should be done. Entrepreneurs don’t necessarily make good employees…

Putting a bow on it
Being a ‘Delivery Manager’ is great fun. Challenging as heck, but great fun and super rewarding. As someone who cares deeply about quality and the customer experience, and has experience-backed opinions on how to achieve them, I don’t see myself going back to a ‘Just X’ role.

(P.S. I’m now available for hire if your organization needs a Delivery Manager)

Continuous Delivery in a .NET World

Here is the other talk I did at Øredev this year. The original pitch was to show a single-character commit and walk it through to production. Which is in itself a pretty bold idea for 40 minutes, but… that pitch was made 7 months ago with the belief we would have Continuous Delivery to production in place. We ended up not hitting that goal, so the talk became more of an experience report around things we (I) learned while doing it. I would guess they are still about a year away from achieving it given what I know about priorities etc.

Below is the video, then the deck, and the original ‘script’ I wrote for the talk. Which, in my usual manner, I deviated from on stage at pretty much every turn. But stories were delivered, mistakes confessed to, and lots of hallway conversations generated, so I’m calling it a win.

CONTINUOUS DELIVERY IN A .NET WORLD from Øredev Conference on Vimeo.

I’ll admit to having been off the speaking circuit for awhile, and the landscape could have changed significantly, but when last I was really paying attention, most, if not all, talks about Continuous Delivery focused on the ‘cool’ stacks such as Rails, Node, etc. Without any data to back up this claim at all, I would hazard a guess that there are more .NET apps out there, especially behind the corporate firewall, than those other stacks. Possibly combined. This means that there are a whole lot of people being ignored by the literature. Or at least by the literature not being promoted by a tool vendor… This gap needs to be addressed; companies live and die based on these internal applications and there is no reason why they should have crappy process around them just because they are internal.

I’ve been working in a .NET shop for the last 19 months and we’re agonizingly close to having Continuous Delivery into production… but still not quite there yet. Frustrating … but great fodder for a talk about actually doing this in an existing application [‘legacy’] context.

Not surprisingly, the high-level bullets are pretty much the same as with other stacks, but there are of course variations on the themes at play in some cases.

Have a goal
Saying ‘we want to do Continuous Delivery’ is not an achievable business goal. You need to be able to articulate what success looks like. Previously, success has looked like ‘do an update while the CEO is giving an investor pitch’. What is yours?

Get ‘trunk’ deliverable
Could you drop ‘trunk’ [or whatever your version control setup calls it] into production at a moment’s notice? Likely not. While it seems easy, I think this is actually the hardest part of everything. Why? Simple … it takes discipline. And that is hard. Really hard. Especially when the pressure ramps up, as people fall back on their training in those situations, and if you aren’t training to be disciplined…

So what does disciplined mean to me, right now…

  • feature flags (existence and removal of)
  • externalized configuration
  • non assumption of installation location
  • stop branching!!
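As a toy illustration of the first two bullets, here is a feature flag living in externalized configuration rather than in the code (the file and flag names are made up):

```shell
# app.conf stands in for configuration that lives outside the codebase
echo 'NEW_CHECKOUT=off' > app.conf

# The application reads the flag at runtime; flipping it needs no redeploy
. ./app.conf
if [ "$NEW_CHECKOUT" = "on" ]; then
  echo 'new checkout flow'
else
  echo 'old checkout flow'
fi
```

The unfinished code ships dark behind the flag, which is what lets trunk stay deliverable; the second half of the discipline is remembering to delete the flag once the feature is fully live.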

Figure out your database
This, I think, is actually the hardest part of a modern application. And is really kinda related to the previous point. You need to be able to deploy your application with, and without, database updates going out. That means…

  • your tooling needs to support that
  • your build chain needs to support that
  • your application needs to support that (forwards and backwards compatible)
  • your process needs to support that

This is not simple. Personally, I love the ‘migration’ approach. Unfortunately… our DBA didn’t.

Convention over Configuration FTW
I’m quite convinced of two things: this is why RoR and friends ‘won’, and why most talks deal with them rather than .NET. To really win at Continuous Delivery [or at least without going insane], you need to standardize your projects. The solution file goes here. Images go here. CSS goes here. Yes, the ‘default’ project layout does have some of that stuff already figured out, but it is waaaaay too easy to go off script in the name of ‘configurability’. Stop that! Every single one of our .NET builds at 360 is slightly different because of that, which means that we have to spend time wiring them up and dealing with their snowflake-ness. I should have been able to ‘just’ apply a [TeamCity] template to the job and give it some variables…

Make things small [and modular]
This is something that has started to affect us more and more. And something that happens by default in the RoR community with their prevalence of gems. If something has utility, and is going to be used across multiple projects, make it a NuGet package. The first candidate for this could be your logging infrastructure. Then your notifications infrastructure. I have seen so much duplicate code…

Not all flows are created equal
This is a recent realization, though having said that, it is a pretty obvious one as well. Not all projects, not all teams, not all applications have the same process for achieving whatever your Continuous Delivery goal is. Build your chains accordingly.

Automate what should be automated
I get accused of splitting hairs on this one, but Continuous Delivery is not about ‘push a button, magic, production!’. It is about automating what should be automated, and doing by hand what should be done by hand. But! Also being able to short-circuit gates when necessary.

It is also about automating the right things with the right tools. Are they meant for .NET or was it an afterthought? Is it a flash in the pan or is it going to be around? Do its project assumptions align with yours?

Infrastructure matters
For Continuous Delivery to really work, and this is why it’s often mentioned in the same breath as DevOps (we’ll ignore that whole problem of ‘if you have devops you aren’t doing devops’…), the management of your infrastructure and environments needs to be fully automated as well. This is very much in the bucket of ‘what should be automated’. Thankfully, the tooling has caught up to Windows, so you should be working on this right from the start. Likely in tandem with getting trunk deliverable.

But even still, there are going to have to be things that you need to drop down to the shell and do. We made a leap forward towards our goal when we let Octopus start to control IIS. But they don’t expose enough hooks for the particular needs of our application so we have to use the IIS cmdlets to do what we need afterwards. And there is absolutely nothing wrong with this approach.

It’s all predicated on people
Lastly, and most importantly, you need to have the right people in place. If you don’t, then it doesn’t matter how well you execute on the above items, you /will/ fail.

SaunterPHP and the Browsermob Proxy

At this point, running all your scripts through a proxy should just be an accepted good practice. And if not, go watch Proxy & Executor. Back? Excellent. Now let’s get your scripts going through the proxy.

First, you need to get the proxy. It can be run on any host that the Selenium Server machines can contact and you’ll only have one for your entire platform setup. Once you have it downloaded, just run the script in the bin directory to get it going.

I’ll point out that the BMP works in a rather interesting manner regarding the ports that need to be open [since I lost a whole afternoon to stupid firewall rules]. Let’s say you start it on port 9090. That is the port you will tell Saunter about. When the script starts it will contact the server on that port and ask for a different port that will be used as the actual proxy. The port it returns is the next available one sequentially, so you will need this port open, and a large chunk of ports after it too.

(The need for all these ports is actually a bug in Saunter that will be fixed one of these days.)

But now we have the BMP running and able to accept connections. Next we need to tell Saunter about it. And, like everything else configuration-wise, it goes in conf/

$GLOBALS['settings']['proxy'] = "localhost:9090";
$GLOBALS['settings']['proxy.browsermob'] = true;

And that, in theory, is all you need to do in order to get things running through the proxy. To then use any of the functions available to you, $this->client is the object you want.

Going Dependent

Earlier in the year I took my idea for a ‘mindmap based test idea management app’ all the way to the finals of Ignite Durham. And while I didn’t win, one of the judges was the founder and CEO of a local (like, 8 minute walk from the house local) tech startup, and so I followed him on twitter and then promptly forgot I did it.

Until, that is, he advertised a ‘QA’ role [now ‘testing’] and, while I didn’t really want to give up consulting, we did come to an arrangement of three days a week. It turns out I suck at working somewhere part-time and being able to ‘punch out’ at the allotted hours. Or at least somewhere where there are fun challenges to solve. [Heh, and oh boy are there ever!]

There are a few more twists-and-turns to the tale, but the end of it is that as of this past Monday I am the ‘Software Delivery Manager’ at Which is kinda a made-up title we came up with to encompass the various things I was doing. The job description focuses on:

  • Manage delivery of software products from development into production
  • Manage a team of Software Testers to test all product changes and new features, including development of automated test suites
  • Champion software delivery best practices, such as continuous delivery, automated testing and operations automation, and work to continuously improve the team’s software delivery capabilities
  • Work with Product Management, Development and Operations teams to identify requirements, design new features, estimate development efforts and deliver on product roadmap
  • Work with Operations team to deploy and support production systems

Essentially, Element 34’s consulting practice — but for a single entity.

Which brings us to some business related FAQ-y stuff.

  • How will this affect existing support contracts? – It likely won’t. I’ll still turn around email responses within a couple hours, and any larger code samples / upgrades will be done in the evenings or weekends. Which is when a lot of them were done anyway
  • Are you taking new clients? – Likely not. But the existing material will still be active if you want to chat about a very specific problem you are experiencing
  • What about Saunter? – Saunter will absolutely continue to exist and I have some interesting things planned for it. We’ll be using it at 360 as well. Though there would be some hilarity in using something else.

It should be an interesting ride as we change a monolithic, hand deployed application into a nimble continuous delivery-ed one. The question though is, who really won the Ignite contest now?

(Oh, and if you’re a devops-y minded person who knows both Windows and Linux and lives in the Eastern GTA, please get in touch — we’re hiring!)

Cooking With Web Automation – JQuery UI Menus

So I am officially sick and tired of seeing webinars on ‘locators’ and ‘[basic] synchronization’ and ‘page objects’. There is enough good content out there for all those topics … and a sea of horrid, but that’s a separate problem. What I want to see is more of the ‘secret tricks’ around the things that cause automation folks to pull out their hair in frustration.

To this end, Jim Holmes and I are co-hosting a webinar on June 21 at 11:30 EDT on wrestling JQuery UI Menus and all their ‘mouse over, then wait, then mouse over and click’ goodness. To some definition of goodness… And if you know how to do it, it’s actually quite a simple problem to solve, and its solution lends itself to other problems.

This ‘recipe’ is the first of a couple that Jim and I have talked about in a ‘Cooking With Web Automation’ series. I’ll be scripting with Python WebDriver and Jim will be working with Test Studio. Go register now!

So you want to build a framework…


My first foray into the ‘framework’ business was likely 1999 at one of the big Canadian banks. We were automating binders, literally 3″ ring binders, of manual test cases into WinRunner. There were 5 or 6 application silos with some shared things for login, etc. It was the ‘shared things’ that made it a framework. From there I have written one at pretty much every employer, the lessons learned having resulted in Py.Saunter and SaunterPHP. That was 14 years ago though. I’ve had a tonne of time to make mistakes [and hopefully learn from them] that a lot of people getting sucked into the automation whirlpool don’t have the advantage of having. They don’t know what they don’t know as it were.

This talk is about the various things a framework designer needs to be thinking about constantly from the perspective of someone who has lost sight of them at some point. The goal was not ‘here is how you write a framework’ since I could just point to my github… but to cause the attendees to go to their office later and start questioning the decisions they have made implicitly to see if those are the ones they are comfortable having as explicit ones.


One of the things you learn as a consultant is that things displace other things. This includes automation frameworks. It could be that it is replacing ‘nothing’, or another framework, or manual exploratory testing. But it is replacing something. Your job as a framework author is to be better than what you are replacing. And to keep improving it so that it doesn’t get displaced by something else.


Lightsabers is a favourite meme I use over and over. See Lightsabers, Time Machines, & Other Automation Heuristics


One of the things you learn writing frameworks is that, if you do your job properly, your users won’t see the vast majority of the stuff you write. Frameworks are all about hiding the abstractions and details behind the scenes. This can be a management problem though if you don’t keep them apprised of what you are working on. Trust me, it is possible to make a tonne of framework improvements and then catch trouble for not being productive on the automation…



Also a common rant of mine, and has a section in the Lightsabers article.


At the heart of your framework will be a runner. This is Py.Test, JUnit, PHPUnit, etc. Its job is to collect, execute and report on your scripts. Ideally the execution part will be done through some variation of the xUnit setup/run/teardown pattern. A well written framework tightly integrates into/around the runner. Look at the idioms, patterns and integration points of the runner. Once you choose one, it becomes really hard to replace it. Remember that the Audience is telling you which language you are using, but most languages have multiple runners you can choose from.

(Also, a hilariously meta photo.)


This is one of the biggest things you, the framework author, get to control. Configs go here, and they look like this. Logs will go here, and they are in this format. Etc. The most successful frameworks all limit the decisions their users can make.


Don’t put your configuration details in your scripts. Don’t put them in your page objects. This is a pretty huge code smell. Put them somewhere that is completely separate and can have a different life within version control. The format of this is also dictated by the Audience. And it is why, after 3.5 years of ‘selling’ Saunter to people, I am switching the format to YAML: it has taken me that long to really understand who my ‘typical’ customer is.


Recall that one of the roles of the Runner is to discover the scripts that will be executed. Is it going to be by annotation/decoration/tag (my current favourite approach), method calls in a class, an XML listing of methods, etc.? This seems like a small thing, but it actually has a pretty big impact since it also affects the structure of your scripts.


Logging is about diagnostics for the users. Presenting information to the user is crazy difficult. I pretty much avoid this problem and show stack traces from the underlying runner. Heck, automation is programming … stack traces are how you diagnose crashes. Right? The key thing here is that I explicitly made that decision.


This is the dashboard-y stuff. The easiest thing here is to use the non-standard-yet-standard Ant JUnit XML format. I’m pretty sure every framework author has implemented this at some point. Please don’t come up with a new format unless you are also writing the consumer[s] of the reports.
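For anyone who hasn’t seen it, the Ant JUnit format looks roughly like this (the suite and test names here are invented):

```xml
<testsuite name="checkout" tests="2" failures="1" errors="0" time="3.2">
  <testcase classname="checkout.CartTest" name="test_add_item" time="1.1"/>
  <testcase classname="checkout.CartTest" name="test_remove_item" time="2.1">
    <failure message="expected 0 items, got 1"/>
  </testcase>
</testsuite>
```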


Commercial frameworks live and die based on where / what they integrate with. OSS ones still somewhat do, but the expectation isn’t so front-and-center. If your framework reports in the Ant format you get most CI integrations ‘for free.’ Know what your framework is displacing. Unless it is part of a larger process displacement (waterfall to agile), it needs to integrate with, at minimum, what the existing thing does. And even within the context of a larger change it might need to.


Does the execution have to be on iron behind the firewall? Or can it be in The Cloud? At this point I think all frameworks need to have a cloud execution story. What’s yours? Does it integrate with a specific cloud, or all clouds? By configuration or by documentation?


Welcome to the house of cards. How important is backwards compatibility? How do you coax your users to upgrade? Is the upgrade process manual or automagic? You need to decide where you land on the spectrum with this. And then be consistent with it. When I release new versions of stuff, this is what worries me the most. Especially for enterprise-y clients.


Lighting up browsers is slow. It’s just a fact. Parallelizing the run is part of the solution to making this tolerable. How does your framework handle this? Mine, for instance, doesn’t do parallelization, instead pushing it onto the CI servers it integrates with. (See how all these things tie together?)


Yes, your framework should run through a proxy. If yours doesn’t, this is your homework.


Sometimes you need to provide the user with the ability to hit things really, really hard in ways that they shouldn’t employ all the time. The framework should let ‘advanced’ users do this sort of thing. For instance, I think runtime parameterization of test methods is a really bad idea. But that doesn’t mean I have disabled the hooks in Py.Test that allow you to do it (though I could have…). This is analogous to the JS Executor in WebDriver.


One of the things that used to burn us with WinRunner was how it interacted with version control. Or, more correctly, how it didn’t. Everything in your framework should have a version control story around it. For instance, with my stuff, the actual config files that get used don’t get checked in, though the templates for them do. (Another thing I stole from Rails.)


Packaging is also one of the most horrific parts of most languages. But it is important that you work with the default packaging system of the language. Regardless of how horrific it is. [*cough* PEAR] Remember, ‘clone from github’ is not a distribution strategy.


It frustrates me that people are building frameworks for just ‘web’ or just ‘mobile’. You want to win? Be able to use the same framework for both. Figure it out.


Perhaps more important than what your framework does is what it doesn’t do. Well, doesn’t do on purpose. If it doesn’t do something because it is missing a feature, then you are at risk of having it displaced by something that has it. If you have a story / explanation for why you don’t support something, then that’s so much better. Of course, you could still get displaced if someone really, really wants that thing. But don’t compromise on your vision for the framework.


At this point there isn’t much technical reason not to open source your framework. Of course, there are lots of business reasons not to, like ‘OMG! We don’t have a business model’, which is fine. But if the framework is a supporting application for your real business, open it up. Don’t underestimate the effect GitHub has had on both distribution and instant community.


And finally, don’t be afraid to screw up. Often. And when you do, apologize and fix it. And then make a new mistake while trying to push things forward.

Page Object Contest #1: TimelineJS

Every two weeks I’ll be coming up with a dastardly bit of web automation and running a contest to see how others solve the problem. The chosen task will not be related to what I am automating for work right now, so this is not “Please do Adam’s work for him”. Though I do hope that these contests become an archive of sorts for how to tackle problems like this, where ‘this’ is likely to be ‘have to use the javascript executor’, as I’m quite convinced that this is where we are heading.

Contest #1: TimelineJS

TimelineJS looks like a pretty cool little widget, and one I can see being a ‘fun’ rabbit hole to fall down when automating. Unfortunately, rabbit holes can get you into trouble with your boss.

How to play;

  • Create a Page Object against one of the example timelines. Which one shouldn’t matter since a PO should be generic enough to work on any timeline.
  • Add a comment to this post which links to either a blog post or a public repo which has the code, by 12 noon EDT on Monday, May 20, 2013
  • Make sure the comment has a real email address so I can contact you mid next week

The Judging Criteria is going to be completely subjective, but will be a combination of both utility of the PO and how well it actually works.

Of course, it’s not a contest if there isn’t a prize associated with it. The prize for this round will be an hour of coaching via Skype with me on your Selenium problems.