Harmonizing Maintenance Windows

At the moment we are only using RDS and ElastiCache within AWS, but the more services we use, the more maintenance windows are going to come up. Rather than have them scattered randomly around the week and the clock, I figured it would be useful to have a single window that we can subsequently work into our SLAs and the like. Now, I really like the management consoles AWS has, but it’s a lot of clicks to track things, especially once I start using something like CloudFormation or Auto Scaling to make things appear magically.

Scripting to the rescue.

Our applications are PHP based, but at heart I’m a Python guy, so I whipped up a script. And aside from the initial fear of modifying running resources, it appears to have worked well.

import boto3
 
maintenance_window = 'sun:09:35-sun:10:35'
 
# rds instances can have maintenance windows
update_rds = False
rds = boto3.client('rds')

print('Current RDS Maintenance Windows')
paginator = rds.get_paginator('describe_db_instances')
for page in paginator.paginate():
    for instance in page['DBInstances']:
        print('%s: %s UTC' % (instance['DBInstanceIdentifier'], instance['PreferredMaintenanceWindow']))
        if instance['PreferredMaintenanceWindow'].lower() != maintenance_window.lower():
            update_rds = True

if update_rds:
    # change any instance that doesn't match the desired window
    for page in rds.get_paginator('describe_db_instances').paginate():
        for instance in page['DBInstances']:
            if instance['PreferredMaintenanceWindow'].lower() != maintenance_window.lower():
                rds.modify_db_instance(
                    DBInstanceIdentifier=instance['DBInstanceIdentifier'],
                    PreferredMaintenanceWindow=maintenance_window
                )

    print('Adjusted RDS Maintenance Windows')
    for page in rds.get_paginator('describe_db_instances').paginate():
        for instance in page['DBInstances']:
            print('%s: %s UTC' % (instance['DBInstanceIdentifier'], instance['PreferredMaintenanceWindow']))
 
# elasticache clusters can have maintenance windows too
update_ec = False
ec = boto3.client('elasticache')

print('Current ElastiCache Maintenance Windows')
paginator = ec.get_paginator('describe_cache_clusters')
for page in paginator.paginate():
    for cluster in page['CacheClusters']:
        print('%s: %s UTC' % (cluster['CacheClusterId'], cluster['PreferredMaintenanceWindow']))
        if cluster['PreferredMaintenanceWindow'].lower() != maintenance_window.lower():
            update_ec = True

if update_ec:
    # change any cluster that doesn't match the desired window
    for page in ec.get_paginator('describe_cache_clusters').paginate():
        for cluster in page['CacheClusters']:
            if cluster['PreferredMaintenanceWindow'].lower() != maintenance_window.lower():
                ec.modify_cache_cluster(
                    CacheClusterId=cluster['CacheClusterId'],
                    PreferredMaintenanceWindow=maintenance_window
                )

    print('Adjusted ElastiCache Maintenance Windows')
    for page in ec.get_paginator('describe_cache_clusters').paginate():
        for cluster in page['CacheClusters']:
            print('%s: %s UTC' % (cluster['CacheClusterId'], cluster['PreferredMaintenanceWindow']))
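
If that fear of modifying running resources nags at you too, one option is to make the script report-only unless you explicitly tell it to act. A minimal sketch of that guard (the --apply flag and its wiring are my own addition, not part of the script above):

import argparse

import boto3

parser = argparse.ArgumentParser(description='Align AWS maintenance windows')
# hypothetical flag; without it, nothing gets modified
parser.add_argument('--apply', action='store_true',
                    help='actually modify out-of-compliance windows')
args = parser.parse_args()

maintenance_window = 'sun:09:35-sun:10:35'

rds = boto3.client('rds')
for page in rds.get_paginator('describe_db_instances').paginate():
    for instance in page['DBInstances']:
        if instance['PreferredMaintenanceWindow'].lower() != maintenance_window.lower():
            if args.apply:
                rds.modify_db_instance(
                    DBInstanceIdentifier=instance['DBInstanceIdentifier'],
                    PreferredMaintenanceWindow=maintenance_window
                )
            else:
                print('would change %s' % instance['DBInstanceIdentifier'])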

It’s always a Security Group problem…

I’ve got a number of private subnets within my AWS VPC that are all nice and segregated from each other. But every time I light up a new Ubuntu instance and tell it to ‘apt-get update’, it times out. Now, since these are private subnets I could get away with opening ports wide open, but AWS is always cranky at me for doing so. I feel slightly vindicated that the same behaviour is often asked about on Stack Overflow too, but anyways, I figured it out this week. Finally. And, as usual with anything wonky network-wise in AWS, it was a Security Group problem.

  1. First thing, read the docs carefully.
  2. Read them again, more carefully this time.
  3. Set up the routing. I actually created 2 custom route tables rather than modify the Main one; explicit is better than implicit (thanks, Python!)
  4. Create an ‘apt’ Security Group to be applied to the NAT instance, with inbound rules from your private VPC address space for HTTP (80), HTTPS (443) and HKP (11371). HTTP is the default protocol for apt, but if you are adding new repos the key is delivered via HTTPS and then validated against the central key servers via HKP. You’ll need outbound rules for those ports too, per the docs. (There is a sketch of this group after the list.)
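
For the curious, here is roughly what that ‘apt’ Security Group looks like as a boto3 sketch; the VPC ID and CIDR below are placeholders for your own values:

import boto3

# placeholders: substitute your own VPC and private address space
VPC_ID = 'vpc-12345678'
PRIVATE_CIDR = '10.0.0.0/16'
APT_PORTS = [80, 443, 11371]  # HTTP, HTTPS, HKP

ec2 = boto3.client('ec2')
sg = ec2.create_security_group(
    GroupName='apt',
    Description='apt traffic from private subnets via the NAT instance',
    VpcId=VPC_ID
)

inbound = [
    {'IpProtocol': 'tcp', 'FromPort': port, 'ToPort': port,
     'IpRanges': [{'CidrIp': PRIVATE_CIDR}]}
    for port in APT_PORTS
]
ec2.authorize_security_group_ingress(GroupId=sg['GroupId'], IpPermissions=inbound)

# outbound to the repositories; note that a new group also carries a
# default allow-all egress rule that you may want to revoke separately
outbound = [
    {'IpProtocol': 'tcp', 'FromPort': port, 'ToPort': port,
     'IpRanges': [{'CidrIp': '0.0.0.0/0'}]}
    for port in APT_PORTS
]
ec2.authorize_security_group_egress(GroupId=sg['GroupId'], IpPermissions=outbound)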

And now you should be able to lock down your servers a bit more.

Faster feedback by limiting information frequency

Code coverage is one of those wacky metrics that straddles the line between useful and vanity. On one hand, it gives you an idea of how safely you can make changes, but on the other it can be a complete fake-out depending on how the tests are constructed. And it can slow your build down.

A lot.

I suspect a lot of our pain is self-induced, but our Laravel application’s ‘build and package’ job jumps from under 2 minutes to around 15 once we turn on code coverage. Ouch.

So I came up with a compromise in the build: the tests always run, but coverage only gets calculated every 15th build (around once a day). Here is what the relevant task for that job now looks like.

# hack around jenkins doing -xe; uncomment to force a clean build
#set +e

mkdir -p jenkins/phpunit/clover

# run coverage only every 15th build
if [ $(($BUILD_ID % 15)) -eq 0 ]; then
  phpunit --log-junit jenkins/phpunit/junit.xml --coverage-clover jenkins/phpunit/clover.xml --coverage-html jenkins/phpunit/clover
else
  phpunit --log-junit jenkins/phpunit/junit.xml
fi

# hack around presently busted test; uncomment to force the job green
#exit 0

Some things of note;

  • The commented-out bits at the beginning and end let me force a clean build if I really, really want one
  • My servers are all Ubuntu, so they use ‘dash’ as their shell, which forces slightly different syntax that my fingers never get right the first time
  • I don’t delete the coverage output, so the later ‘publish’ action doesn’t fall down; it just republishes the previous report
  • As we hire more people and the frequency of things landing in the repo increases, I’ll likely increase the spread from 15 to something higher
  • At some point we’ll spend the time to look at why the tests are so slow, but not now.

Using Puppet to manage AWS agents (on Ubuntu)

One of the first things any cloud-ification and/or devops-ification project needs to do is figure out how to manage its assets. In my case, I use Puppet.

AWS is starting to do more intensive integrations using agents that sit in your environment. This is a good, if not great, thing. Except if you want to, oh, you know, control what is installed in said environment, and how.

Now, it would be extremely nice if AWS took the approach of Puppet Labs and hosted a package repository, which would mean one could do this in a manifest to install the Code Deploy agent.

  package { 'codedeploy-agent':
    ensure => latest,
  }
 
  service { 'codedeploy-agent':
    ensure  => running,
    enable  => true,
    require => Package[ 'codedeploy-agent' ],
  }

Nothing is ever that easy, of course. If I were using RedHat or Amazon Linux I could get around the lack of a repository by using the source attribute of the package type, as below. But I’m using Ubuntu.

  package { 'codedeploy-agent':
    ensure   => present,
    source   => "https://s3.amazonaws.com/aws-codedeploy-us-east-1/latest/codedeploy-agent.noarch.rpm",
    provider => rpm,
  }

So down the rabbit hole I go…

First, I needed a local repository, which I set up via the puppet-reprepro module. It worked well, except for the GPG part. What. A. Pain.

After that, I cracked open the install script and fetched the .deb file to install…

$ aws s3 cp s3://aws-codedeploy-us-west-2/latest/VERSION . --region us-west-2
download: s3://aws-codedeploy-us-west-2/latest/VERSION to ./VERSION
$ cat VERSION
{"rpm":"releases/codedeploy-agent-1.0-1.751.noarch.rpm","deb":"releases/codedeploy-agent_1.0-1.751_all.deb"}
$ aws s3 cp s3://aws-codedeploy-us-west-2/releases/codedeploy-agent_1.0-1.751_all.deb . --region us-west-2
download: s3://aws-codedeploy-us-west-2/releases/codedeploy-agent_1.0-1.751_all.deb to ./codedeploy-agent_1.0-1.751_all.deb

…and dropped it into the directory the repo slurps files from.

Aaaannnnnd, nothing.

Turns out that the .deb AWS provides doesn’t include the Priority field in its control file. Debian policy treats that field as optional, but reprepro wants it to be mandatory. No problem.

$ mkdir contents
$ cd contents/
$ dpkg-deb -x ../codedeploy-agent_1.0-1.751_all.deb .
$ dpkg-deb -e ../codedeploy-agent_1.0-1.751_all.deb ./DEBIAN
$ grep Priority DEBIAN/control
$

Alright. Add in our line.

$ grep Priority DEBIAN/control
Priority: Optional
$

And now to package it all back up:

$ dpkg-deb -b . ../codedeploy-agent_1.0-1.751_all.deb
dpkg-deb: building package 'codedeploy-agent' in '../codedeploy-agent_1.0-1.751_all.deb'.

Ta-da! The package can now be hosted by a local repository and installed through the standard package type.
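
As an aside, the by-hand fetch automates nicely if you find yourself doing it often. A sketch with boto3, assuming the bucket layout stays as the transcript above shows; the incoming directory is wherever your repo slurps files from:

import json

import boto3

REGION = 'us-west-2'
BUCKET = 'aws-codedeploy-%s' % REGION
INCOMING = '.'  # your reprepro incoming directory

s3 = boto3.client('s3', region_name=REGION)

# latest/VERSION is a small JSON document pointing at the current packages
version = json.loads(
    s3.get_object(Bucket=BUCKET, Key='latest/VERSION')['Body'].read().decode('utf-8')
)
deb_key = version['deb']  # e.g. releases/codedeploy-agent_1.0-1.751_all.deb

filename = deb_key.split('/')[-1]
s3.download_file(BUCKET, deb_key, '%s/%s' % (INCOMING, filename))
print('fetched %s' % filename)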

But we’re not through yet. AWS wants the agent to check daily for updates to itself. Sounds good ‘in theory’, but I want to control when packages are updated. Necessitating:

  cron { 'codedeploy-agent-update':
    ensure  => absent
  }

Now we’re actually in control.

A few final comments;

  • It’d be nice if AWS provided a repository to install their agents via apt, so I could selfishly stop managing a repo
  • It’d be nice if the Code Deploy agent had the Priority line in its control file, so I could selfishly stop hacking the .deb myself. The Inspector team’s package does…
  • It’d be nice if AWS didn’t install update scripts for their agents
  • The install scripts for Code Deploy and Inspector are remarkably different. The teams should talk to each other.
  • The naming conventions of the packages for Code Deploy and Inspector are different. The teams should talk to each other.

(Whinging aside, I really do like Code Deploy. And Inspector looks pretty cool too.)

Saunter 2.0

Welp. After 2+ years of tinkering and appearing to be an absentee open source landlord, I just pushed Saunter 2.0.0 up to PyPI. When I list the things that have changed, it seems rather silly to have allowed so much time to elapse, but…

  • Removed all references to Selenium Remote Control (RC). Enough time has passed that there is no excuse anymore not to be on WebDriver
  • The config file format has fundamentally changed to YAML. This is why there has been a major version bump. The Saunter page has the details of the new format.

There are still some hiccups; the main one is that random ordering of script execution and parallelization doesn’t work yet. I know how to fix it via a monkey patch, but…

As always, if you find any bugs, log them in GitHub. I’ve fixed my notification settings so I actually see these ones, and will be going through the existing ones over the remainder of the month.

Stop Being A Language Snob: Debunking The ‘But Our Application Is Written In X’ Myth

The folks over at Sauce Labs just published a guest post I wrote on their blog: Stop Being A Language Snob: Debunking The ‘But Our Application Is Written In X’ Myth.

This doesn’t get debunked nearly enough. Consider this fair warning that this might end up being a 2015 theme.

Lessons learned from 19 months as a delivery manager

This is one of the talks I did at Øredev last week. As usual, my decks are generally useless without me in front of them. But lucky(?) for you, all the sessions were recorded.

CONFESSIONS OF A ROOKIE [DELIVERY] MANAGER from Øredev Conference on Vimeo.

But if you are too lazy to listen to me for 40 minutes, here is the deck and the content I was working from on stage. Of course, I don’t actually practice my talks, so some content was added and some was removed at runtime, but…



WTF is a Delivery Manager?!?!

For about a year and a half I held the title of ‘Delivery Manager’, which means a whole lot, and nothing, at the same time. And therein lies its potency. Just as Andy Warhol famously said that ‘Art is anything you can get away with’, being a Delivery Manager is anything you make it. In my case it was essentially anything and everything to do with getting our application into the hands of the end users.

Tip: Don’t put yourself in a box

Before we landed on this title, other ones we considered were ‘Doer of Stuff’, ‘Chaos Monkey’ (blatantly stolen from Netflix), and ‘Minister Without Portfolio’. But we eventually went with the more business-palatable ‘Delivery Manager’. Since Delivery Manager is a made-up title, it is useful to describe it in terms and titles people are used to seeing; Product Owner, Production Gatekeeper and Process Guardian are the three umbrella ones I most associated with it. But even those could be sub-divided. And possibly sub-sub-divided. It’s also important to recognize that the percentages of these roles are ever in flux. And, just to keep things interesting, they can sometimes be in conflict with each other.

Because of the mix of problems Delivery Managers have to, erm, manage, there is a certain skillset required to be effective at it. Or perhaps not a specific skillset, but a breadth of one. Testing, Development, Operations, Marketing, Systems, Accounting, etc. And I would suggest doing a stint consulting as well; there is nothing like it as a crucible for problem identification and solving. That doesn’t mean, of course, that you have to be a perfect mix of all these things. It is inevitable that you will be more specialized in one than the others, and I would be suspicious of anyone who said they weren’t. I, for instance, came up through the testing ranks. Specifically the ‘context’ ranks. That, for me, is my secret sauce.

And yes, there is a tonne of irony in spending a decade saying ‘I am not a gatekeeper! I am a provider of information!’ and then moving precisely into the gatekeeper role. But in that irony I learned a lot. Not just about being /a/ Delivery Manager, but about how /I/ am a Delivery Manager.

No*

While everything here is important to one degree or another, this is perhaps the one thing I leaned on every single day. When faced with a request, the default answer is always No. Well, it is more ‘No* (* but help me to say Yes)’. And don’t be subtle or selective about the application of this rule. At 360 there is an entire department I dealt with on a daily basis who could tell you my default answer to any request is going to be ‘No’. But that doesn’t stop them from asking, since they know about the asterisk. What it does is force them to think about their request ahead of time, beyond simplistic ‘because’ terms.

This is not a new idea that I ‘discovered’. I blatantly stole it from someone who was at one point the Product Owner for Firefox (I think… I can’t find the article now; if you find it please let me know). It all boils down to an economics problem around opportunity cost. If you say Yes to everything then the queues will overflow and nothing will get done. But if you say No to everything and selectively grant Yeses then there is order [rather than chaos] in the pipes.

Tip: Learn about economics; specifically Opportunity Cost (but Sunk Costs are also useful to understand when involved in No* discussions)

Tip: Unless you really understand the problem you are being asked to solve, you cannot say yes

Mature organizations understand this at their core. It might be you that levels them up to this understanding though.

Frenemies

Being the person who always says No won’t always make you friends. At first, at any rate. You will become everyone’s enemy … and everyone’s friend. Welcome to the balancing act. I would argue that if you are everyone’s friend all the time then you are not doing your job properly. Part of the animosity can be dealt with by explaining the asterisk, but also by communicating who ‘your’ client is. Remember, the hats being worn have words like ‘Owner’, ‘Guardian’ and ‘Gatekeeper’ on them. Your client in this role may not be who people think it is. In fact, it almost assuredly isn’t. Yours is the application and the [delivery] pipeline.

Tip: The Delivery Pipeline is a product

This will cause friction, and depending on how your company is structured it could be a non-trivial amount. But as long as you are consistent in your application of No* and transparent in the reasoning behind it, in my experience, it is easily overcome.

Tip: Do you know what business you are in? Is that the business the business thinks it is in? It’s really hard to win that battle.

Defence

The role of ‘Delivery Manager’ can sometimes be a lone-wolf one, but at other times you will have people working for you [as I did]. It is critical to remember that as a ‘people’ manager your primary goal is to protect everyone under you. Physically, psychologically and work-ly. You need to be able to do their job, but also to let /them/ do it. Just because you /could/ be the hero doesn’t mean that it is healthy for you or them. As you would with a child, let them work through it and be ready to catch them if they start to fall. [The existence of that metaphor does not mean you should treat them like kids, though…] Don’t hold them to higher standards than you hold yourself to. But don’t inflict yourself on them either. I’m a workaholic (thanks Dad!); it’s unfair to put that onto others. I also don’t believe in work-life balance (especially in startups), favouring harmony instead. But what is harmonious for me is likely not the same for someone else.

In order to do that you need to constantly be running defence for your charges, human and software alike. Invite yourself to meetings; be constantly vigilant for conversations that will affect them. Which unfortunately means you miss out on plugging in your headphones and listening to music all day.

Tip: Ensure grief from No* comes back to you, not your people

Tip: People, not resources

Tip: Ask the people who work for you if they feel you have their back. If not, you’re doing something wrong.

You Will Screw Up

I tend not to speak in terms of absolutes, but here is a truth: you will screw up, potentially largely, in this role. You are making decisions that require a crazy amount of information to be assimilated quickly, and if that is not done perfectly, or you are missing some of it [maliciously or innocently], then you are hooped. And that’s ok. Pick yourself up and go forward. That is the only way you can go; we no longer have the luxury of going back. Remember, tough calls are your job.

Bending to go forward is not a new thing. I’m sure I heard it a couple of times before it really stuck, but I credit Brian Marick’s talk at Agile 2008 for the sticking. I can’t find a video of it [though I didn’t try hard], but the text of it can be found at http://www.exampler.com/blog/2008/11/14/agile-development-practices-keynote-text.

Tip: Be careful though; screw up too much and Impostor Syndrome can set in. And it sucks. A lot. Get help. See Open Sourcing Mental Illness and Mental Health First Aid

Tip: Make sure your boss is onboard with the ‘go forward’ approach

Tip: Confidence is infectious, be patient zero

Know and be true to yourself
One of the biggest things I’ve learned in the last little bit is about how /I/ function. Some people find the MBTI hand-wavy and hokey, but I think it’s useful, not in terms of how I choose to interact with people, but in understanding how I am. I’m ENTP. Hilariously so. That’s not going to jive well with organizations that are ‘typed’ differently. That’s been a huge insight for me.

Tip: For a lark, take an MBTI test. It’s a heuristic, but still interesting

Being a geek, I also think of things in terms of the classic AD&D alignment scale. I lean towards Chaotic Good. We have a goal; there are no rules here. Especially ‘stupid’, ‘artificial’ ones.

And that has got me into trouble more than once. I don’t doubt that it will again in the future.

But I also have a strongly defined set of ethics and philosophy around how things should be done. Entrepreneurs don’t necessarily make good employees…

Putting a bow on it
Being a ‘Delivery Manager’ is great fun. Challenging as heck, but great fun and super rewarding. As someone who cares deeply about quality and the customer experience, and who has experience-backed opinions on how to achieve them, I don’t see myself going back to a ‘Just X’ role.

(P.S. I’m now available for hire if your organization needs a Delivery Manager)

Continuous Delivery in a .NET World

Here is the other talk I did at Øredev this year. The original pitch was to show a single-character commit and walk it through to production. Which is in itself a pretty bold idea for 40 minutes, but… that pitch was made 7 months ago with the belief we would have Continuous Delivery to production in place by now. We ended up not hitting that goal, so the talk became more of an experience report around things we (I) learned while doing it. I would guess they are still about a year away from achieving it, given what I know about priorities etc.

Below is the video, then the deck, and the original ‘script’ I wrote for the talk. Which, in my usual manner, I deviated from on stage at pretty much every turn. But stories were delivered, mistakes were confessed to, and lots of hallway conversations were generated, so I’m calling it a win.

CONTINUOUS DELIVERY IN A .NET WORLD from Øredev Conference on Vimeo.

Introduction
I’ll admit to having been off the speaking circuit for a while, and the landscape could have changed significantly, but when last I was really paying attention, most, if not all, talks about Continuous Delivery focused on the ‘cool’ stacks such as Rails, Node, etc. Without any data to back up this claim at all, I would hazard a guess that there are more .NET apps out there, especially behind the corporate firewall, than apps on those other stacks. Possibly combined. This means there is a whole lot of people being ignored by the literature. Or at least by the literature not being promoted by a tool vendor… This gap needs to be addressed; companies live and die on these internal applications and there is no reason they should have crappy process around them just because they are internal.

I’ve been working in a .NET shop for the last 19 months and we’re agonizingly close to having Continuous Delivery into production… but still not quite there yet. Frustrating… but great fodder for a talk about actually doing this in an existing application [‘legacy’] context.

Not surprisingly, the high-level bullets are pretty much the same as with other stacks, but there are, of course, variations on the themes at play in some cases.

Have a goal
Saying ‘we want to do Continuous Delivery’ is not an achievable business goal. You need to be able to articulate what success looks like. Previously, success has looked like ‘do an update while the CEO is giving an investor pitch’. What is yours?

Get ‘trunk’ deliverable
Could you drop ‘trunk’ [or whatever your version control setup calls it] into production at a moment’s notice? Likely not. While it seems easy, I think this is actually the hardest part of everything. Why? Simple… it takes discipline. And that is hard. Really hard. Especially when the pressure ramps up, because people fall back on their training in those situations, and if you aren’t training to be disciplined…

So what does disciplined mean to me, right now…

  • feature flags (existence and removal of; see the sketch after this list)
  • externalized configuration
  • no assumptions about installation location
  • stop branching!!
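
To make the first two items concrete, here is a minimal sketch of a feature flag driven by externalized configuration. It is illustrative only; it is in Python rather than .NET, and the config path and flag name are invented for the example:

import json
import os

# externalized configuration: read at runtime, never baked into the build
CONFIG_PATH = os.environ.get('APP_CONFIG', 'config.json')  # invented path

def load_flags(path=CONFIG_PATH):
    """Return the feature-flag map, defaulting to everything off."""
    try:
        with open(path) as f:
            return json.load(f).get('features', {})
    except FileNotFoundError:
        return {}

def invoice_flow(flags):
    # trunk carries both paths; the flag decides which one runs, and the
    # flag (plus the old path) gets deleted once the feature is proven
    if flags.get('new_invoice_flow', False):  # invented flag name
        return 'new flow'
    return 'old flow'

print(invoice_flow(load_flags()))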

Figure out your database
This, I think, is actually the hardest part of a modern application. And it is really kinda related to the previous point. You need to be able to deploy your application with, and without, database updates going out. That means… (there is a small sketch of the idea after the list)

  • your tooling needs to support that
  • your build chains need to support that
  • your application needs to support that (forwards and backwards compatible)
  • your process needs to support that
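
As a small illustration of the ‘migration’ approach mentioned just below (a sketch only, using SQLite and an invented schema_version table; real tooling is much richer), the idea is that the schema version travels with the code, so a deploy can apply pending migrations, or apply none, and the application keeps working either way:

import sqlite3

# ordered, append-only migrations; keep each one additive so the
# previous release still works against the new schema
MIGRATIONS = [
    (1, "CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT)"),
    (2, "ALTER TABLE customer ADD COLUMN email TEXT"),
]

def migrate(conn):
    conn.execute("CREATE TABLE IF NOT EXISTS schema_version (version INTEGER)")
    current = conn.execute("SELECT MAX(version) FROM schema_version").fetchone()[0] or 0
    for version, statement in MIGRATIONS:
        if version > current:
            conn.execute(statement)
            conn.execute("INSERT INTO schema_version VALUES (?)", (version,))
    conn.commit()

conn = sqlite3.connect(':memory:')
migrate(conn)  # safe on every deploy; a no-op when nothing is pending
migrate(conn)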

This is not simple. Personally, I love the ‘migration’ approach. Unfortunately… our DBA didn’t.

Convention over Configuration FTW
I’m quite convinced of two things: this is why RoR and friends ‘won’, and why most talks deal with them rather than .NET. To really win at doing Continuous Delivery [or at least do it without going insane], you need to standardize your projects. The solution file goes here. Images go here. CSS goes here. Yes, the ‘default’ project layout does have some of that stuff already figured out, but it is waaaaay too easy to go off script in the name of ‘configurability’. Stop that! At 360, every single one of our .NET builds is slightly different because of that, which means we have to spend time wiring them up and dealing with their snowflake-ness. I should have been able to ‘just’ apply a [TeamCity] template to the job and give it some variables…

Make things small [and modular]
This is something that has started to affect us more and more. And it is something the RoR community gets by default with its prevalence of gems. If something has utility and is going to be used across multiple projects, make it a NuGet package. The first candidate for this could be your logging infrastructure. Then your notifications infrastructure. I have seen so much duplicated code…

Not all flows are created equal
This is a recent realization, though, having said that, a pretty obvious one as well. Not all projects, not all teams, not all applications have the same process for achieving whatever your Continuous Delivery goal is. Build your chains accordingly.

Automate what should be automated
I get accused of splitting hairs on this one, but Continuous Delivery is not about ‘push a button, magic, production!’. It is all about automating what should be automated, and doing by hand what should be done by hand. But! Also being able to short-circuit gates when necessary.

It is also about automating the right things with the right tools. Are they meant for .NET, or was .NET an afterthought? Is it a flash in the pan or is it going to be around? Do its project assumptions align with yours?

Infrastructure matters
For Continuous Delivery to really work, the management of your infrastructure and environments needs to be fully automated as well, which is why it is so often mentioned in the same breath as DevOps (we’ll ignore the whole problem of ‘if you have a devops team you aren’t doing devops’…). This is very much in the bucket of ‘what should be automated’. Thankfully, the tooling has caught up on Windows, so you should be working on this right from the start. Likely in tandem with getting trunk deliverable.

Powershell
But even still, there are going to be things where you need to drop down to the shell. We made a leap forward towards our goal when we let Octopus start to control IIS. But it doesn’t expose enough hooks for the particular needs of our application, so we use the IIS cmdlets afterwards to do what we need. And there is absolutely nothing wrong with this approach.

It’s all predicated on people
Lastly, and most importantly, you need to have the right people in place. If you don’t, then it doesn’t matter how well you execute on the above items; you /will/ fail.

SaunterPHP and the Browsermob Proxy

At this point, running all your scripts through a proxy should just be an accepted good practice. And if not, go watch Proxy & Executor. Back? Excellent. Now let’s get your scripts going through the proxy.

First, you need to get the proxy. It can be run on any host that the Selenium Server machines can contact, and you’ll only need one for your entire platform setup. Once you have it downloaded, just run the script in the bin directory to get it going.

I’ll point out that the BMP works in a rather interesting manner regarding the ports that need to be open [I lost an entire afternoon to stupid firewall rules]. Let’s say you start it on port 9090. That is the port you will tell Saunter about. When a script starts, it contacts the server on that port and asks for a different port that will be used as the actual proxy. The port returned is the next available one sequentially, so you will need that port open, and a large chunk of ports after it, too.

(The need for all these ports is actually a bug in Saunter that will be fixed one of these days.)
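
You can watch that handshake happen outside of Saunter with a couple of raw calls against the BMP REST API. Here is a sketch in Python using the requests library; the port numbers are whatever your environment hands back:

import requests

BMP = 'http://localhost:9090'  # the port you started the proxy server on

# ask the server to spin up an actual proxy; the response carries the
# sequentially allocated port that the browser traffic will actually use
response = requests.post('%s/proxy' % BMP)
proxy_port = response.json()['port']
print('proxy allocated on port %d' % proxy_port)  # e.g. 9091, then 9092, ...

# tear it down when finished
requests.delete('%s/proxy/%d' % (BMP, proxy_port))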

But now we have the BMP running and able to accept connections. Next we need to tell Saunter about it. And like everything else configuration-wise, it goes in conf/saunter.inc:

$GLOBALS['settings']['proxy'] = "localhost:9090";
$GLOBALS['settings']['proxy.browsermob'] = true;

And that, in theory, is all you need to do in order to get things running through the proxy. To then use any of the proxy functions available to you, $this->client is the object you want.

Going Dependent

Earlier in the year I took my idea for a ‘mindmap based test idea management app’ all the way to the finals of Ignite Durham. And while I didn’t win, one of the judges was the founder and CEO of a local (like, 8 minute walk from the house local) tech startup, 360incentives.com, so I followed him on Twitter and then promptly forgot I had.

Until, that is, he advertised a ‘QA’ role [now ‘testing’], and while I didn’t really want to give up consulting, we did come to an arrangement of three days a week. It turns out I suck at working somewhere part-time and ‘punching out’ at the allotted hours. Or at least somewhere where there are fun challenges to solve. [Heh, and oh boy are there ever!]

There are a few more twists and turns to the tale, but the end of it is that as of this past Monday I am the ‘Software Delivery Manager’ at 360incentives.com. Which is kinda a made-up title we came up with to encompass the various things I was doing. The job description focuses on;

  • Manage delivery of software products from development into production
  • Manage a team of Software Testers testing all product changes and new features, including development of automated test suites
  • Champion software delivery best practices, such as continuous delivery, automated testing and operations automation, and work to continuously improve the team’s software delivery capabilities
  • Work with Product Management, Development and Operations teams to identify requirements, design new features, estimate development efforts and deliver on product roadmap
  • Work with Operations team to deploy and support production systems

Essentially, Element 34’s consulting practice — but for a single entity.

Which brings us to some business related FAQ-y stuff.

  • How will this affect existing support contracts? – It likely won’t. I’ll still turn around email responses within a couple of hours, and any larger code samples / upgrades will be done in the evenings or on weekends. Which is when a lot of them were done anyway
  • Are you taking new clients? – Likely not. But the Clarity.fm stuff will still be active if you want to chat around a very specific problem you are experiencing
  • What about Saunter? – Saunter will absolutely continue to exist and I have some interesting things planned for it. We’ll be using it at 360 as well. Though there would be some hilarity in using something else.

It should be an interesting ride as we change a monolithic, hand-deployed application into a nimble, continuously delivered one. The question though is: who really won the Ignite contest now?

(Oh, and if you’re a devops-y minded person who knows both Windows and Linux and lives in the Eastern GTA, please get in touch; we’re hiring!)