Lessons learned from 19 months as a delivery manager

This is one of the talks I did at Øredev last week. As usual, my decks are generally useless without me in front of them. But lucky(?) for you, all the sessions were recorded.


But if you are too lazy to listen to me for 40 minutes, here is the deck and the content I was working from on stage. Of course, I don’t actually practice my talks so some content was added and other content was removed at runtime, but…

WTF is a Delivery Manager?!?!

For about a year and a half I held the title of ‘Delivery Manager’, which means a whole lot, and nothing, at the same time. And therein lies its potency. Just as Andy Warhol famously said that ‘Art is anything you can get away with’, being a Delivery Manager is anything you make it. In my case it was essentially anything and everything to do with getting our application into the hands of the end users.

Tip: Don’t put yourself in a box

Before we landed on this title, other ones we considered were ‘Doer of Stuff’, ‘Chaos Monkey’ (blatantly stolen from Netflix), and ‘Minister Without Portfolio.’ But we eventually went with the more business-palatable ‘Delivery Manager’. Since Delivery Manager is a made-up title, it is useful to describe it in terms and titles people are used to seeing; Product Owner, Production Gatekeeper and Process Guardian are the three umbrella ones I most associated with. But even those could be sub-divided. And possibly sub-sub-divided. It’s also important to recognize that the percentages of these roles are ever in flux. And just to keep things interesting, they can sometimes be in conflict with each other.

Because of the mix of problems Delivery Managers will have to, erm, manage, there is a certain skillset required to be effective at it. Or perhaps not a specific one, but a breadth of one. Testing, Development, Operations, Marketing, Systems, Accounting, etc. And I would suggest that you have done a stint consulting as well. There is nothing like it in terms of being a crucible for problem identification and solving. That doesn’t mean of course that you have to be a perfect mix of all these things. It is inevitable that you will be more specialized in one over the others, and I would be suspicious of anyone who said they weren’t. I, for instance, came up through the testing ranks. Specifically the ‘context’ ranks. That, for me, is my secret sauce.

And yes, there is a tonne of irony around the idea that I spent a decade saying ‘I am not a gatekeeper! I am a provider of information!’ only to move precisely into the gatekeeper role. But in that irony I learned a lot. Not just about being /a/ Delivery Manager, but about how /I/ am a Delivery Manager.


While everything is important to one degree or another, this is perhaps the one thing I leaned on every single day. When faced with a request, the default answer is always No. Well, it is more ‘No* (* but help me to say Yes)’. And don’t be subtle or selective about the application of this rule. At 360 there is an entire department I dealt with on a daily basis and they could tell you my default answer is going to be ‘No’ to any request. But that doesn’t stop them from asking since they know about the asterisk. What it does is force them to think about their request ahead of time beyond simplistic ‘because’ terms.

This is not a new idea that I ‘discovered’. I blatantly stole it from someone who was at one point the Product Owner for Firefox (I think… I can’t find the article now, if you find it please let me know). It all boils down to an economics problem around opportunity cost. If you say Yes to everything then the queues will overflow and nothing will get done. But if you say No to everything and selectively grant Yeses then there is order [rather than chaos] in the pipes.

Tip: Learn about economics; specifically Opportunity Cost (but Sunk Costs are also useful to understand when involved in No* discussions)

Tip: Unless you really understand the problem you are being asked to solve, you cannot say yes

Mature organizations understand this at their core. It might be you that levels them up to this understanding though.


Being the person who always says No won’t always make you friends. At first, at any rate. You will become everyone’s enemy … and everyone’s friend. Welcome to the balancing act. I would argue that if you are everyone’s friend all the time then you are not doing your job properly. Part of the animosity can be dealt with through explaining the asterisk, but also by communicating who ‘your’ client is. Remember, the hats that are being worn have words like ‘Owner’, ‘Guardian’ and ‘Gatekeeper’. Your client in this role may not be who people think it is. In fact, it almost assuredly isn’t. Yours is the application and the [delivery] pipeline.

Tip: The Delivery Pipeline is a product

This will cause friction; and depending on how your company is structured it could be a non-trivial amount. But as long as you are consistent in your application of No* and are transparent in the reasonings why, in my experience, it is easily overcome.

Tip: Do you know what business you are in? Is that the business the business thinks it is in? It’s really hard to win that battle.


The role of ‘Delivery Manager’ can sometimes be a lone wolf one, but at other times you will have people working for you [as I did]. It is critical to remember that as a ‘people’ manager your primary goal is to protect everyone under you. Physically, psychologically and work-ly. You need to be able to do their job but also to let /them/ do it. Just because you /could/ be the hero doesn’t mean that it is healthy for you or them. Like you would with a child, let them work through it and be ready to catch them if they start to fall. [The existence of that metaphor does not mean of course that you should treat them like kids though…] Don’t hold them to higher standards than you hold yourself to. But also don’t inflict yourself on them either. I’m a workaholic (thanks Dad!); it’s unfair to put that onto others. I also don’t believe in work-life balance (especially in startups), favouring harmony instead — but what is harmonious for me is likely not the same for someone else.

In order to do that you need to constantly be running defence for your charges; human and software. Invite yourself to meetings; constantly be vigilant for conversations that will affect them. Which unfortunately means you miss out on plugging in your headphones and listening to music all day.

Tip: Ensure grief from No* comes back to you, not your people

Tip: People, not resources

Tip: Ask the people who work for you if they feel you have their back. If not, you’re doing something wrong.

You Will Screw Up

I tend not to speak in terms of absolutes, but here is a truth: you will screw up, potentially largely, in this role. You are making decisions that require a crazy amount of information to be assimilated quickly and if it is not perfectly done or you are missing any [maliciously or innocently] then you are hooped. And that’s ok. Pick yourself up, and go forward. That is the only way you can go. We no longer have the luxury of going back. Remember, tough calls are your job.

Bending to go forward is not a new thing. I’m sure I heard it a couple times before it really stuck, but I credit Brian Marick’s talk at Agile 2008 for that sticking. I can’t find a video of it [though didn’t try hard] but the text of it can be found at http://www.exampler.com/blog/2008/11/14/agile-development-practices-keynote-text.

Tip: Be careful though; screw up too much and Impostor Syndrome can set in. And it sucks. A lot. Get help. See Open Sourcing Mental Illness and Mental Health First Aid

Tip: Make sure your boss is onboard with the ‘go forward’ approach

Tip: Confidence is infectious, be patient zero

Know and be true to yourself
One of the biggest things I’ve learned in the last bit is around how /I/ function. Some people find the MBTI hand-wavy and hokey, but I think it’s useful not in terms of how I choose to interact with people but in understanding how I am. I’m ENTP. Hilariously so. That’s not going to jibe well with organizations that are ‘typed’ differently. That’s been a huge insight for me.

Tip: For a lark, take an MBTI test. It’s heuristic, but still interesting

Being a geek I also think of things in terms of the classic AD&D alignment scale. I lean towards Chaotic Good. We have a goal; there are no rules here. Especially ‘stupid’, ‘artificial’ ones.

And that has got me into trouble more than once. I don’t doubt that it will again in the future.

But I also have a strongly defined set of ethics and philosophy around how things should be done. Entrepreneurs don’t necessarily make good employees…

Putting a bow on it
Being a ‘Delivery Manager’ is great fun. Challenging as heck, but great fun and super rewarding. As someone who cares deeply about quality and the customer experience and has experience-backed opinions on how to achieve them, I don’t see myself going back to a ‘Just X’ role.

(P.S. I’m now available for hire if your organization needs a Delivery Manager)

Continuous Delivery in a .NET World

Here is the other talk I did at Øredev this year. The original pitch was to show a single-character commit and walk it through to production. Which is in itself a pretty bold idea for 40 minutes, but… But that pitch was made 7 months ago with the belief we would have Continuous Delivery to production in place. We ended up not hitting that goal though, so the talk became more of an experience report around things we (I) learned while doing it. I would guess they are still about a year away from achieving it given what I know about priorities etc.

Below is the video, and then the deck, and the original ‘script’ I wrote for the talk. Which, in my usual manner, I deviated from on stage at pretty much every turn. But, stories were delivered, mistakes confessed to, and lots of hallway conversations generated so I’m calling it a win.

CONTINUOUS DELIVERY IN A .NET WORLD from Øredev Conference on Vimeo.

I’ll admit to having been off the speaking circuit and such for awhile and the landscape could have changed significantly, but when last I was really paying attention, most, if not all, talks about Continuous Delivery focused on the ‘cool’ stacks such as Rails, Node, etc. Without any data to back up this claim at all, I would hazard a guess that there are, however, more .NET apps out there, especially behind the corporate firewall, than those other stacks. Possibly combined. This means that there are a whole lot of people being ignored by the literature. Or at least the ones not being promoted by a tool vendor… This gap needs to be addressed; companies live and die based on these internal applications and there is no reason why they should have crappy process around them just because they are internal.

I’ve been working in a .NET shop for the last 19 months and we’re agonizingly close to having Continuous Delivery into production… but still not quite there yet. Frustrating … but great fodder for a talk about actually doing this in an existing application [‘legacy’] context.

Not surprisingly, the high level bullets are pretty much the same as with other stacks, but there are of course variations on the themes at play in some cases.

Have a goal
Saying ‘we want to do Continuous Delivery’ is not an achievable business goal. You need to be able to articulate what success looks like. Previously, success has looked like ‘do an update when the CEO is giving an investor pitch’. What is yours?

Get ‘trunk’ deliverable
Could you drop ‘trunk’ [or whatever your version control setup calls it] into production at a moment’s notice? Likely not. While it seems easy, I think this is actually the hardest part of the whole thing. Why? Simple … it takes discipline. And that is hard. Really hard. Especially when the pressure ramps up, as people fall back to their training in those situations and if you aren’t training to be disciplined…

So what does disciplined mean to me, right now…

  • feature flags (existence and removal of)
  • externalized configuration
  • no assumptions about installation location
  • stop branching!!
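The first two items on that list go together. A feature flag is only useful for keeping trunk deliverable if the switch itself lives in externalized configuration rather than in code. A minimal sketch of the idea in Python (the flag names and the environment-variable convention here are invented for illustration, not how we did it at 360):

```python
import os

class FeatureFlags:
    """Minimal flag registry: flags default to off in code, and can be
    flipped via externalized configuration (here, environment variables)."""

    def __init__(self, defaults=None):
        self.defaults = dict(defaults or {})

    def is_on(self, name):
        # Externalized config wins over the in-code default, so trunk can
        # ship with unfinished work dark and it can be flipped without a deploy.
        env = os.environ.get("FEATURE_" + name.upper())
        if env is not None:
            return env == "1"
        return self.defaults.get(name, False)

flags = FeatureFlags({"new_checkout": False})

if flags.is_on("new_checkout"):
    pass  # new, still-dark code path
else:
    pass  # existing behaviour
```

The ‘removal of’ part of the tip matters just as much: once a flag is on everywhere, delete it and the dead branch, or the conditionals pile up.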

Figure out your database
This, I think, is actually the hardest part of a modern application. And is really kinda related to the previous point. You need to be able to deploy your application with, and without, database updates going out. That means…

  • your tooling needs to support that
  • your build chains need to support that
  • your application needs to support that (forwards and backwards compatible)
  • your process needs to support that

This is not simple. Personally, I love the ‘migration’ approach. Unfortunately… our DBA didn’t.
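For the unfamiliar, the ‘migration’ approach boils down to numbered, one-way schema changes tracked in the database itself, so the same deploy works whether or not there are database updates going out. A toy sketch (the table and migration contents are made up) using SQLite:

```python
import sqlite3

# Each migration is (version, sql); in a real tool these live in files
# next to the application code, so a deploy with no pending migrations
# and a deploy with three of them go through the exact same pipeline.
MIGRATIONS = [
    (1, "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)"),
    (2, "ALTER TABLE users ADD COLUMN email TEXT"),
]

def migrate(conn):
    """Apply any not-yet-applied migrations, recording each one."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS schema_migrations (version INTEGER PRIMARY KEY)")
    applied = {row[0] for row in conn.execute("SELECT version FROM schema_migrations")}
    for version, sql in MIGRATIONS:
        if version not in applied:
            conn.execute(sql)
            conn.execute("INSERT INTO schema_migrations VALUES (?)", (version,))
    conn.commit()

conn = sqlite3.connect(":memory:")
migrate(conn)
migrate(conn)  # safe to run twice; already-applied versions are skipped
```

The forwards/backwards compatibility point still falls on you though: each migration has to leave the schema usable by both the old and new application code, which is a discipline, not a tool feature.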

Convention over Configuration FTW
I’m quite convinced of two things; this is why RoR and friends ‘won’ and why most talks deal with them rather than .NET. To really win at doing Continuous Delivery [or at least without going insane], you need to standardize your projects. The solution file goes here. Images go here. CSS goes here. Yes, the ‘default’ project layout does have some of that stuff already figured out, but it is waaaaay too easy to go off script in the name of ‘configurability’. Stop that! Every single one of our .NET builds at 360 is slightly different because of that, which means that we have to spend time when wiring them up and dealing with their snowflake-ness. I should have been able to ‘just’ apply a [TeamCity] template to the job and give it some variables…

Make things small [and modular]
This is something that has started to affect us more and more. And something that largely comes for free in the RoR community with their prevalence of gems. If something has utility, and is going to be used across multiple projects, make it a NuGet package. The first candidate for this could be your logging infrastructure. Then your notifications infrastructure. I have seen so much duplicate code…
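Turning that shared code into a NuGet package starts with a .nuspec manifest. A minimal, hypothetical one for a shared logging library (every name here is invented for illustration):

```xml
<?xml version="1.0"?>
<package>
  <metadata>
    <!-- id, version, authors and description are the required elements -->
    <id>Acme.Logging</id>
    <version>1.0.0</version>
    <authors>Acme</authors>
    <description>Shared logging infrastructure, consumed by all internal apps.</description>
  </metadata>
</package>
```

Run `nuget pack` against it, push the result to an internal feed, and the duplicate code problem becomes a dependency-version problem instead, which is a much better problem to have.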

Not all flows are created equal
This is a recent realization, though having said that, is a pretty obvious one as well. Not all projects, not all teams, not all applications have the same process for achieving whatever your Continuous Delivery goal is. Build your chains accordingly.

Automate what should be automated
I get accused of splitting hairs for this one, but Continuous Delivery is not about ‘push a button, magic, production!’. It is all about automating what should be automated, and doing by hand what should be done by hand. But! Also being able to short circuit gates when necessary.

It is also about automating the right things with the right tools. Are they meant for .NET or was .NET an afterthought? Are they a flash in the pan or are they going to be around? Do their project assumptions align with yours?

Infrastructure matters
For Continuous Delivery to really work, and this is why it’s often mentioned in the same breath as DevOps (we’ll ignore that whole problem of ‘if you have devops you aren’t doing devops’…), the management of your infrastructure and environments needs to be fully automated as well. This is very much in the bucket of ‘what should be automated’. Thankfully, the tooling has caught up to Windows so you should be working on this right from the start. Likely in tandem with getting trunk deliverable.

But even still, there are going to have to be things that you need to drop down to the shell and do. We made a leap forward towards our goal when we let Octopus start to control IIS. But they don’t expose enough hooks for the particular needs of our application so we have to use the IIS cmdlets to do what we need afterwards. And there is absolutely nothing wrong with this approach.

It’s all predicated on people
Lastly, and most importantly, you need to have the right people in place. If you don’t, then it doesn’t matter how well you execute on the above items, you /will/ fail.

SaunterPHP and the Browsermob Proxy

At this point, running all your scripts through a proxy should just be an accepted good practice. And if not, go watch Proxy & Executor. Back? Excellent. Now let’s get your scripts going through the proxy.

First, you need to get the proxy. It can be run on any host that the Selenium Server machines can contact and you’ll only have one for your entire platform setup. Once you have it downloaded, just run the script in the bin directory to get it going.

I’ll point out that the BMP works in a rather interesting manner regarding ports that need to be open [since I lost an entire afternoon to stupid firewall rules]. Let’s say you start it on port 9090. That is the port you will tell Saunter about. When the script starts it will contact the server on that port and ask for a different port that will be used as the actual proxy. The port it returns is the next available one sequentially so you will need this port, and a large chunk of ports after that too.
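To make the firewall implication concrete: if the control port is 9090 and you expect up to N concurrent sessions, you need 9090 plus the next N ports open. A tiny helper (illustrative only; this is not part of Saunter or the BMP client) for working out the range to hand your network admin:

```python
def ports_to_open(control_port, max_sessions):
    """BMP hands out proxy ports sequentially after the control port,
    so the firewall needs the control port itself plus one port per
    concurrent session you plan to run."""
    return list(range(control_port, control_port + max_sessions + 1))

# e.g. up to 10 concurrent sessions against a BMP started on 9090
needed = ports_to_open(9090, 10)
```

Asking for a generous chunk up front beats reopening the firewall ticket every time you add a Selenium node.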

(The need for all these ports is actually a bug in Saunter that will be fixed one of these days.)

But now we have the BMP running and able to accept connections. Next we need to tell Saunter about it. And, like everything else configuration-wise, it goes in conf/saunter.inc.

$GLOBALS['settings']['proxy'] = "localhost:9090";
$GLOBALS['settings']['proxy.browsermob'] = true;

And that, in theory, is all you need to do in order to get things running through the proxy. To then use any of the functions available to you, $this->client is the object you want.

Going Dependent

Earlier in the year I took my idea for a ‘mindmap based test idea management app’ all the way to the finals of Ignite Durham. And while I didn’t win, one of the judges was the founder and CEO of a local (like, 8-minute walk from the house local) tech startup, 360incentives.com, and so I followed him on twitter and then promptly forgot I did it.

Until, that is, he advertised a ‘QA’ role [now ‘testing’] and while I didn’t really want to give up consulting we did come to an arrangement of three days a week. It turns out, I suck at working somewhere part-time and being able to ‘punch out’ at the allotted hours. Or at least somewhere where there are fun challenges to solve. [Heh, and oh boy are there ever!]

There are a few more twists-and-turns to the tale, but the end of it is that as of this past Monday I am the ‘Software Delivery Manager’ at 360incentives.com. Which is kinda a made-up title we came up with to encompass the various things I was doing. The job description focuses on:

  • Manage delivery of software products from development into production
  • Manage a team of Software Testers to test all product changes and new features, including development of automated test suites
  • Champion software delivery best practices, such as continuous delivery, automated testing and operations automation, and work to continuously improve the team’s software delivery capabilities
  • Work with Product Management, Development and Operations teams to identify requirements, design new features, estimate development efforts and deliver on product roadmap
  • Work with Operations team to deploy and support production systems

Essentially, Element 34’s consulting practice — but for a single entity.

Which brings us to some business related FAQ-y stuff.

  • How will this affect existing support contracts? – It likely won’t. I’ll still turn around email responses within a couple hours, and any larger code samples / upgrades will be done in the evenings or weekends. Which is when a lot of them were done anyways
  • Are you taking new clients? – Likely not. But the Clarity.fm stuff will still be active if you want to chat around a very specific problem you are experiencing
  • What about Saunter? – Saunter will absolutely continue to exist and I have some interesting things planned for it. We’ll be using it at 360 as well. Though there would be some hilarity in using something else.

It should be an interesting ride as we change a monolithic, hand deployed application into a nimble continuous delivery-ed one. The question though is, who really won the Ignite contest now?

(Oh, and if you’re a devops-y minded person who knows both Windows and Linux and lives in the Eastern GTA please get in touch — we’re hiring!)

Cooking With Web Automation – JQuery UI Menus

So I am officially sick and tired of seeing webinars on ‘locators’ and ‘[basic] synchronization’ and ‘page objects’. There is enough good content out there for all those topics … and a sea of horrid, but that’s a separate problem. What I want to see is more of the ‘secret tricks’ around the things that cause automation folks to pull out their hair in frustration.

To this end, Jim Holmes and I are co-hosting a webinar on June 21 at 11:30 EDT on wrestling JQuery UI Menus and all its ‘mouse over, then wait, then mouse over and click’ goodness. To some definition of goodness. And if you know how to do it, it’s actually quite a simple problem to solve, and one whose solution lends itself to other problems.

This ‘recipe’ is the first of a couple that Jim and I have talked about in a ‘Cooking With Web Automation’ series. I’ll be scripting with Python WebDriver and Jim will be working with Test Studio. Go register now!

So you want to build a framework…


My first foray into the ‘framework’ business was likely 1999 at one of the big Canadian banks. We were automating binders, literally 3″ ring binders, of manual test cases into WinRunner. There were 5 or 6 application silos with some shared things for login, etc. It was the ‘shared things’ that made it a framework. From there I have written one at pretty much every employer, the lessons learned having resulted in Py.Saunter and SaunterPHP. That was 14 years ago though. I’ve had a tonne of time to make mistakes [and hopefully learn from them] that a lot of people getting sucked into the automation whirlpool don’t have the advantage of having. They don’t know what they don’t know as it were.

This talk is about the various things a framework designer needs to be thinking about constantly from the perspective of someone who has lost sight of them at some point. The goal was not ‘here is how you write a framework’ since I could just point to my github… but to cause the attendees to go to their office later and start questioning the decisions they have made implicitly to see if those are the ones they are comfortable having as explicit ones.


One of the things you learn as a consultant is that things displace other things. This includes automation frameworks. It could be that it is replacing ‘nothing’, or another framework, or manual exploratory testing. But it is replacing something. Your job as a framework author is to be better than what you are replacing. And to keep improving it so that it doesn’t get displaced by something else.


Lightsabers is a favourite meme I use over and over. See Lightsabers, Time Machines, & Other Automation Heuristics


One of the things you learn writing frameworks is that the vast majority of stuff you write, your users won’t see if you do your job properly. Frameworks are all about hiding the abstractions and details behind the scenes. This can be a management problem though if you don’t keep them apprised of what you are working on. Trust me, it is possible to make a tonne of framework improvements and then catch trouble for not being productive on the automation…



Also a common rant of mine, and has a section in the Lightsabers article.


At the heart of your framework will be a runner. This is Py.Test, JUnit, PHPUnit, etc. Its job is to collect, execute and report on your scripts. Ideally the execution part will be done through some variation of the xUnit setup/run/teardown pattern. A well-written framework tightly integrates into/around the runner. Look at the idioms, patterns and integration points of the runner. Once you choose one, it becomes really hard to replace it. Remember that the Audience is telling you which language you are using, but most languages have multiple runners you can choose from.
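The collect/execute/report cycle and the setup/run/teardown pattern look roughly the same in every xUnit descendant. A minimal sketch using Python’s built-in unittest (the test itself is a placeholder; a real framework would hide browser/session creation in the base class):

```python
import unittest

class LoginTest(unittest.TestCase):
    def setUp(self):
        # runs before every test method; frameworks typically create
        # the browser/session here, hidden in a shared base class
        self.session = {"logged_in": False}

    def test_login(self):
        self.session["logged_in"] = True
        self.assertTrue(self.session["logged_in"])

    def tearDown(self):
        # runs after every test method, pass or fail
        self.session = None

# the runner's three jobs: collect, execute, report
suite = unittest.defaultTestLoader.loadTestsFromTestCase(LoginTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Your framework’s job is to integrate around this cycle, not fight it.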

(Also, a hilariously meta photo.)


This is one of the biggest things you, the framework author, get to control. Configs go here, and they look like this. Logs will go here, and they are in this format. Etc. The most successful frameworks all limit the decisions their users can make.


Don’t put your configuration details in your scripts. Don’t put them in your page objects. This is a pretty huge code smell. Put them somewhere that is completely separate and can have a different life within version control. The format of this is also dictated by the Audience. And is why, after 3.5 years of ‘selling’ Saunter to people, I am switching the format to YAML; it has taken me that long to really understand who my ‘typical’ customer is.


Recall that one of the roles of the Runner is to discover the scripts that will be executed. Is it going to be by annotation/decoration/tag (my current favourite approach), method calls in a class, xml listing methods, etc.? This seems like a small thing, but it actually has a pretty big impact since it also affects the structure of your scripts.
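Annotation-based discovery can be sketched with a plain decorator; real runners (pytest marks, JUnit annotations, and so on) do a much richer version of the same thing. Everything here is invented for illustration:

```python
REGISTRY = []

def tag(*tags):
    """Mark a test function with tags; the runner then collects by tag
    instead of by naming convention or an XML listing of methods."""
    def wrap(fn):
        REGISTRY.append((set(tags), fn))
        return fn
    return wrap

@tag("smoke")
def test_login():
    return "login ok"

@tag("smoke", "slow")
def test_checkout():
    return "checkout ok"

def collect(wanted):
    """The discovery half of the runner: pick scripts carrying a tag."""
    return [fn for tags, fn in REGISTRY if wanted in tags]

# run only the smoke scripts
results = [fn() for fn in collect("smoke")]
```

Notice how the choice shapes the scripts themselves: tag-based discovery lets them live anywhere, whereas class/method-call discovery dictates a file and class layout.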


Logging is about diagnostics for the users. Presenting information to the user is crazy difficult. I pretty much avoid this problem and show stack traces from the underlying runner. Heck, automation is programming … stack traces are how you diagnose crashes. Right? The key thing here is that I explicitly made that decision.


This is the dashboard-y stuff. The easiest thing here is to use the non-standard-yet-standard Ant JUnit XML format. I’m pretty sure every framework author has implemented this at some point. Please don’t come up with a new format unless you are also writing the consumer[s] of the reports.
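For reference, the Ant JUnit format is just nested testsuite/testcase elements with counts as attributes; emitting it from Python’s standard library is a handful of lines (the suite name and failure text here are made up):

```python
import xml.etree.ElementTree as ET

def junit_xml(suite_name, cases):
    """Render a minimal Ant-JUnit-style report.
    cases: list of (test_name, failure_message_or_None)."""
    failures = sum(1 for _, f in cases if f)
    suite = ET.Element("testsuite", name=suite_name,
                       tests=str(len(cases)), failures=str(failures))
    for name, failure in cases:
        case = ET.SubElement(suite, "testcase", name=name)
        if failure:
            # failed cases get a child <failure> element
            ET.SubElement(case, "failure", message=failure)
    return ET.tostring(suite, encoding="unicode")

report = junit_xml("LoginTests", [("test_ok", None), ("test_broken", "boom")])
```

Which is exactly why every framework author ends up implementing it: it is trivial to produce, and every CI server already knows how to consume it.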


Commercial frameworks live and die based on where / what they integrate with. OSS ones still somewhat do, but the expectation isn’t as front-and-center. If your framework reports in the ant format you get most CI integrations ‘for free.’ Know what your framework is displacing. Unless it is part of a larger process displacement (waterfall to agile), it needs to integrate with at minimum what the existing thing does. And even within the context of a larger change it might need to.


Does the execution have to be on iron behind the firewall? Or can it be in The Cloud? At this point I think all frameworks need to have a cloud execution story. What’s yours? Does it integrate with a specific cloud, all clouds? By configuration or by documentation?


Welcome to the house of cards. How important is backwards compatibility? How do you coax your users to upgrade? Is the upgrade process manual or automagic? You need to decide where you land on the spectrum with this. And then be consistent with it. When I release new versions of stuff, this is what worries me the most. Especially for enterprise-y clients.


Lighting up browsers is slow. It’s just a fact. Parallelizing the run is part of the solution to making this tolerable. How does your framework handle this? Mine for instance doesn’t do parallelization, instead pushing it onto the CI servers it integrates with. (See how all these things tie together?)


Yes, your framework should run through a proxy. If yours doesn’t, this is your homework.


Sometimes you need to provide the user with the ability to hit things really, really hard in ways that they shouldn’t employ all the time. The framework should let ‘advanced’ users do this sort of thing. For instance, I think runtime parameterization of test methods is a really bad idea. But that doesn’t mean I have disabled the hooks in Py.Test that allow you to do that (though I could have…). This is analogous to the JS Executor in WebDriver.


One of the things that used to burn us with WinRunner was how it interacted with version control. Or more correctly, how it didn’t. Everything in your framework should have a version control story around it. For instance, with my stuff, the actual config files that get used don’t get checked in, though the templates for them do. (Another thing I stole from Rails.)
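In git terms, the ‘templates in, live configs out’ rule is a couple of lines of .gitignore (the .default template name is illustrative; use whatever convention your framework documents):

```
# live, per-machine configuration is generated from templates and
# never gets committed; the checked-in templates are the contract
conf/saunter.inc
conf/saunter.ini
```

The corresponding templates (e.g. a saunter.inc.default) stay tracked, and the framework’s setup step copies them into place on each checkout.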


Packaging is also one of the most horrific parts of most languages. But it is important that you work with the default packaging system of the language. Regardless of how horrific it is. [*cough* PEAR] Remember, ‘clone from github’ is not a distribution strategy.


It frustrates me that people are building frameworks for just ‘web’ or just ‘mobile’. You want to win? Be able to use the same framework for both. Figure it out.


Perhaps more important than what your framework does is what it doesn’t do. Well, doesn’t do on purpose. If it doesn’t do it because it is missing a feature, then you are at risk of having it displaced by something that has it. If you have a story / explanation about why you don’t support something, then that’s so much better. Of course, you could still get displaced if someone really, really wants that thing. But don’t compromise on your vision for the framework.


At this point there isn’t much technical reason to not open source your framework. Of course, there are lots of business reasons not to, like ‘OMG! We don’t have a business model’, which is fine. But if the framework is a supporting application for your real business, open it up. Don’t underestimate the effect github has had on both distribution and instant community.


And finally, don’t be afraid to screw up. Often. And when you do, apologize and fix it. And then make a new mistake while trying to push things forward.

Page Object Contest #1: TimelineJS

Every two weeks I’ll be coming up with a dastardly bit of web automation and running a contest to see how others solve the problem. The chosen task will not be around what I am automating for work right now so this is not “Please do Adam’s work for him”. Though I do hope that these contests become an archive of sorts for how to tackle problems like this, where ‘this’ is likely to be ‘have to use the javascript executor’ as I’m quite convinced that this is where we are heading.

Contest #1: TimelineJS

TimelineJS looks like a pretty cool little widget, and one I can see being a ‘fun’ rabbit hole to fall down when automating. Unfortunately rabbit holes can get you into trouble with your boss.

How to play;

  • Create a Page Object against one of the example timelines. Which one shouldn’t matter since a PO should be generic enough to work on any timeline.
  • Add a comment to this post which links to either a blog post or public repo which has the code by 12 noon EDT on Monday, May 20, 2013
  • Make sure the comment has a real email address so I can contact you mid next week

The Judging Criteria is going to be completely subjective, but will be a combination of both utility of the PO and how well it actually works.

Of course, it’s not a contest if there isn’t a prize associated with it! The prize for this round will be an hour of coaching via skype with me on your Selenium problems.

Screenshots and Artifacts

Screenshots can be a useful tool in debugging broken scripts as they will show you when a spinner is stuck spinning or a bit of ‘3rd party crap’ isn’t downloading properly. Both Py.Saunter and SaunterPHP have some helper methods in their TestCase class to capture these.


self.take_named_screenshot('some name')


$this->take_named_screenshot('some name');

Some notes on usage:

  • The name of the screenshot does not need .png on it; that is added for you
  • Screenshot numbering begins at 0001 and increments from there
  • The number resets on each test method
  • This lives in the classes inheriting from SaunterTestCase, not the Page Object. This is not by accident.

Screenshots and anything else generated by the script are broadly categorized as ‘artifacts’. Each script’s artifacts are stored in its own directory:

  • logs
    • 2013-05-09-11-53-37
      • TestClass
        • TestMethod

Cadging an idea from Sauce Labs, the final screen of the browser is captured as ‘final.png’ in there. Right now there is not a flag for disabling this, but there will be in a couple of releases. As will automatic capturing on exceptions.

If you are using Jenkins as your CI server, there is an extra treat for your reporting with these new screenshotting methods. If Saunter knows that you are using Jenkins it will format the XML log that you use to integrate with Jenkins in such a way that the screenshots are available in the visual log when you drill down to the method. This is done via the JUnit Attachments plugin.
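Under the hood, the JUnit Attachments plugin scans test output for lines of the form [[ATTACHMENT|/absolute/path]]. Saunter emits these for you, but the marker itself is trivial to build (a sketch; the function name is mine):

```python
def attachment_marker(path):
    """Build the line the Jenkins JUnit Attachments plugin scans test
    output for; the path should be absolute so Jenkins can resolve it."""
    return "[[ATTACHMENT|%s]]" % path

print(attachment_marker("/var/lib/jenkins/jobs/demo/final.png"))
```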

The configuration for this is just a single line in the appropriate config file.


jenkins: true


$GLOBALS['settings']['saunter.ci'] = 'jenkins';

Custom Firefox Profiles in Saunter

One of the things that you can do with Firefox that you can’t do with other browsers [or at least not as nicely…] is do a run using a profile that has custom settings already in place (cookies, extensions, local storage, etc.). A couple of people have asked for it, and then someone paid for it to happen. Of course, it was a much deeper rabbit hole than I thought and I under-quoted them. Ah, well…

Because Py.Saunter and SaunterPHP are designed to be self-contained checkouts, all profiles will be in the support/profiles directory off of the Saunter project directory. So if you have 3 different profiles that you want to use, you would end up with

  • support
    • profiles
      • blue
      • orange
      • green


Selenium-RC – Local

This configuration is the easiest to set up; simply check out your Saunter project onto the machine you are going to run the Selenium server on and start it with the -firefoxProfileTemplate <dir> flag. That’s it. Now, when Firefox lights up it will use that profile rather than a newly constructed one.

Selenium-RC – Sauce OnDemand

This is slightly more complicated, but not by too much. Obviously one cannot check out local copies on their VMs, but we can send the profile over the wire to them. Or more accurately, Saunter can, thus saving you a fair bit of pain. Not that there isn’t a hoop or two that needs jumping through…

In your saunter.ini file you need to use one of two configuration options, both of which are in the Selenium section of the config.

  • profile: name_of_profile
  • profile-<platform>: name_of_profile

There is a bit of ordering at play. If you have a platform key matching sys.platform then it will be used rather than the generic one. For instance,

profile: red
profile-darwin: blue

would use the blue profile on OSX, but red on Linux.
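The lookup ordering can be sketched in a few lines. This is an illustration of the described behaviour, not Saunter's actual code:

```python
import sys

def resolve_profile(settings, platform=None):
    """Prefer a profile-<sys.platform> key over the generic 'profile'
    key; a sketch of the ordering described above."""
    platform = platform or sys.platform
    return settings.get("profile-%s" % platform, settings.get("profile"))

settings = {"profile": "red", "profile-darwin": "blue"}
print(resolve_profile(settings, "darwin"))  # blue
print(resolve_profile(settings, "linux2"))  # red
```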

Because Se-RC doesn’t have the notion of sending a profile over the wire, we need to enable Sauce Labs to get it. This is done by telling them where to find an HTTP server with it which we do in the saunter.ini as well.

file_server_base: s3cr3t.element34.ca:7000

Note: if this server is not accessible from the internets then you need to setup a Sauce Connect tunnel to access it.


WebDriver nicely has profile support baked into the wire protocol, so there is nothing magical to do whether running locally or in the Sauce cloud, aside from the saunter.ini entries mentioned above.


The same configuration process applies to SaunterPHP as to Py.Saunter. The exception is [for now] how it is configured.

$GLOBALS['settings']["profile"] = "red";
$GLOBALS['settings']["profile-darwin"] = "blue";


So it turns out that the Firefox ‘portable’ profile story isn’t actually that portable. Or at least not in one edge case I stumbled on, though it would be a crazy useful edge case if it worked. Firefox profiles contain the extensions that they use; things like Firebug and company. These are written in JS/XUL and are generally pretty cross-platform portable. Unfortunately, what profiles do not contain are addons; Flash and Silverlight are two examples of addons. What the profile has instead is an arcane, generated config file that, if you edit it in just the right way, lets you control the behaviour of these things. Except that the file includes both the path on disk (which means it is not cross-platform) and the version of the addon (which means that even if you are on the same OS but on differently patched machines you are screwed).

Of course, you are managing the configuration of the machines in your farm with something like Puppet, so all your configurations are the same and you can make a profile on each target platform. Thus the profile-<platform> option. It’s not that simple in the case where you don’t control the machines, like, say, the Sauce cloud … but a solution for that has been suggested to me and I just need the time to experiment with it.

Downloading files in Py.Saunter

For years my stock answer for the question of how to download files from the browser has been “don’t do it, but if you really, really must then at least don’t try and use the browser” and I’ve left it at that. Well today I sat down to actually write the code to do it.

Here is what the script looks like. It is just a ‘standard’ Py.Saunter script with the exception of the self._screenshot_prep_dirs() call, which is a bit of implementation leakage that I’ll fix in the next release or two of Py.Saunter now that an assumption I was making has been proven incorrect. Anyhow, it just goes to a random article as specified in a CSV and then downloads a PDF of that article. article_pdf is the full path on disk to it. For example, /Users/adam/work/client/client-automation/logs/2013-04-29-13-31-23/CheckPDFDownloads/test_pdf_download/downloaded_pdf.pdf.

from tailored.testcase import TestCase
import pytest

from pages.article import Article


class CheckPDFDownloads(TestCase):
    def setup_method(self, method):
        super(CheckPDFDownloads, self).setup_method(method)
        self.article = Article(self.driver).open_random_article().wait_until_loaded()

    def teardown_method(self, method):
        super(CheckPDFDownloads, self).teardown_method(method)

    @pytest.marks('shallow', 'pdf', 'article')
    def test_pdf_download(self):
        article_pdf = self.article.download("pdf")
        # open up this file in whatever pdf module you like and do whatever

The real interesting bit is, of course, in the Page Object. I’ll walk through the download method below rather than break it up inline.

from tailored.page import Page
from selenium.webdriver.support.wait import WebDriverWait
from providers.article import ArticleProvider
import random
from selenium.webdriver.common.action_chains import ActionChains
import requests
import os.path
import inspect
import sys
locators = {
    'article tab': 'css=span[name="article"]',
    'download button': 'xpath=//div[contains(@class,"btn-reveal")]/span[text()="Download"]',
    'pdf download button': 'css=.btn-reveal a[title$="PDF"]'
}

class Article(Page):
    def __init__(self, driver):
        super(type(self), self).__init__(driver)
        self.driver = driver
    def open(self, uri):
        self.driver.get("%s/%s" % (self.config.get('Selenium', 'base_url'), uri))
        return self
    def open_random_article(self):
        row = ArticleProvider().randomRow()
        return self.open(row["uri"])
    def wait_until_loaded(self):
        self.wait.until(lambda driver: driver.find_element_by_locator(locators["article tab"]))
        return self
    def download(self, type_of_download):
        chain = ActionChains(self.driver)
        chain.move_to_element(self.driver.find_element_by_locator(locators['download button']))
        chain.perform()
        def waiter(driver):
            e = self.driver.find_element_by_locator(locators["%s download button" % type_of_download])
            if e.is_displayed():
                return e
            return False
        button = self.wait.until(waiter)
        r = requests.get(button.get_attribute('href'))
        disposition = r.headers["content-disposition"]
        disposition_type, filename_param = disposition.split(';')
        filename = filename_param.split('=')[1][1:-1]
        stack = inspect.stack()
        # calling class name
        frame = stack[1][0]  # the caller's frame
        caller = frame.f_locals.get('self', None)
        calling_class_name = caller.__class__.__name__
        # calling method name
        calling_method_name = stack[1][3]
        path_to_file = os.path.join(self.config.get('Saunter', 'log_dir'), calling_class_name, calling_method_name, filename)
        f = open(path_to_file, "wb")
        f.write(r.content)
        f.close()
        sys.stdout.write(os.linesep + "[[ATTACHMENT|%s]]" % path_to_file)
        return path_to_file
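One self-contained piece worth pulling out before the walkthrough is the Content-Disposition handling. Mirroring the same naive splits as above (the function name is mine), it assumes a header of the exact shape seen on this client's server and will break on anything fancier (multiple parameters, unquoted names, filename*=):

```python
def filename_from_disposition(disposition):
    """Extract the filename from a Content-Disposition header of the
    exact shape 'attachment; filename="report.pdf"'."""
    disposition_type, filename_param = disposition.split(';')
    # split off 'filename=', then strip the surrounding quotes
    return filename_param.split('=')[1][1:-1]

print(filename_from_disposition('attachment; filename="report.pdf"'))
```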

Alright… lots of stuff going on in download, some of which is specific to this client but it’s useful to cover somewhere anyways.

  • 35 – 44: The actual download link on this page is hidden unless the user hovers over a different element. Clean UI, but an extra hoop to jump through when automating. We do this using an Action Chain and then synchronizing on whether the link is visible. One thing you’ll notice is that this method can download any number of different types. To keep it generic the locator string is generated at runtime.
    locators["%s download button" % type_of_download]
  • 46: If you were to look at this element in a browser, the href attribute is actually a relative one. But WebDriver is smart enough to return a fully qualified one when it sees a relative href. Helpful! And since we have a full URL we can use Requests to grab it. Were it behind some sort of authentication scheme we could grab the correct cookie[s] from self.driver and pass them to requests.get(). Remember, HTTP is stateless.
  • 48 – 50: Because this is a nefarious example, the url we request the file from is actually just a call on the server and not the actual document. As such we need to figure out just what the heck the file we are downloading ‘should’ be called. Turns out the ‘standard’ way to do this is with the Content-Disposition header.
  • 52 – 60: Here is where things go sideways somewhat and then go completely into “no, you really shouldn’t be doing that!” territory. But it works! Well, as long as we apply the project-wide rule of ‘do not call this method from another page object’. What we are doing is peeking into the actual Python execution stack to get the script’s class and method names. This would be easy-peasy if we were currently in the context of a script … but we’re not; we’re in a Page Object and they are script neutral.
  • 61: Since we now have all the information we need about the calling script, we can build a path (in an os-neutral way) to where we are going to put the file.
  • 63 – 65: Saves the contents of the file on disk in the appropriate place
  • 67: This client happens to be using Jenkins as their CI server and if you are using the JUnit-Attachment plugin for it, then this magically formatted line will add a link to our downloaded file to the displayed test results.
  • 69: Finally, we return the path on disk to the calling test method. From there it can be opened up in whatever PDF (in this case) parsing module to do further inspection. But that is out of scope for a method called download.
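The cookie hand-off hinted at in the walkthrough is a simple reshaping: WebDriver's get_cookies() returns a list of dicts, while Requests wants a plain name-to-value mapping. A sketch (the function name and the sample cookie are made up for illustration):

```python
def driver_cookies_for_requests(driver_cookies):
    """Convert WebDriver-style cookie dicts into the simple
    name -> value mapping that Requests accepts."""
    return dict((c["name"], c["value"]) for c in driver_cookies)

# In a real Page Object this list would come from self.driver.get_cookies()
cookies = driver_cookies_for_requests(
    [{"name": "session", "value": "abc123", "domain": ".example.com"}]
)
print(cookies)  # {'session': 'abc123'}
# ...which would then be passed along: requests.get(url, cookies=cookies)
```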

For SaunterPHP users, the ideas you would follow would be very similar — just the reflection-y bits would be different.