Not losing the trees for the forest

One problem with large-scale automation is that legitimate failures start to linger. Whether that is a symptom of the organization not actually caring about the results is a different problem entirely, but what can be solved at the script level is making sure that known failures don’t hide new issues.

Each runner is going to have a slightly different way to do this[1] and call it something different, but let’s just go with Py.Test’s terminology and implementation. Py.Test calls this XFail.

import pytest
import sys

class TestXFail(object):
    # Marked xfail and it does fail (via pytest.fail), so it is reported
    # as an expected failure: a lowercase x in the progress output.
    @pytest.mark.xfail
    def test_monkey(self):
        pytest.fail('okay')

    # Marked xfail with an explicit reason, but the body passes.
    @pytest.mark.xfail(reason="squirrels are a fail")
    def test_squirrel(self):
        pass  # except this one apparently

    # Only expected to fail on Windows, thanks to the condition string.
    @pytest.mark.xfail('sys.platform == "win32"')
    def test_gorilla(self):
        raise Exception

A couple of things. First, if your team uses this sort of pattern, I would highly suggest making it a convention not to use it blindly without something like the reason argument. Now, if you have better-named test methods than I do here that might not be necessary, but it seems like a good idea regardless. Also, you can use a condition to help decide whether the xfail is going to be applied or not. That feels like the wrong use for it, at least in this example; a skipif decorator seems like a better fit. But again, your mileage will vary.
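To make that concrete, here is a minimal sketch of how that last case might look with skipif instead. The class name and the assertion are just placeholders, and the string-condition style simply mirrors the xfail example above rather than anything this post prescribes.

import pytest
import sys

class TestSkipInstead(object):
    # If the behaviour genuinely cannot work on Windows, skip the test
    # there outright; a skipped test never runs, so it can never show up
    # as an unexpected pass.
    @pytest.mark.skipif('sys.platform == "win32"')
    def test_gorilla(self):
        assert sys.platform != "win32"

The practical difference is intent: skipif says the test does not apply there at all, while xfail says it applies but is currently expected to break.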

In any event, when the original example is run, this is the output.

Adam-Gouchers-MacBook:xfail adam$ py.test bar.py 
============================= test session starts ==============================
platform darwin -- Python 2.7.2 -- pytest-2.2.0
collected 3 items 

bar.py xxX

===================== 2 xfailed, 1 xpassed in 0.19 seconds =====================
Adam-Gouchers-MacBook:xfail adam$

That’s it. No stack traces to clutter things up, just two x’s and an X. The X, though, is something to pay attention to. It is the indicator that something we thought should fail has passed. That is new information, which is the whole point of doing automation in the first place. Did something get fixed? Intentionally? Accidentally? Is our script no longer valid as a result of some other change? That’s up to you to determine.
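As an aside that goes beyond the version shown above: later releases of py.test (2.9 and up) added a strict option to xfail, so an unexpected pass is reported as an outright failure instead of a quiet X. A minimal sketch, with a made-up reason string:

import pytest

class TestStrictXFail(object):
    # With strict=True, an xfail-marked test that passes is reported as a
    # failure (XPASS(strict)) rather than just an X, so the surprise
    # cannot be overlooked.
    @pytest.mark.xfail(reason="known defect, ticket number pending", strict=True)
    def test_monkey(self):
        pass  # passes, so with strict=True the whole run fails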

And then of course there is the pesky problem of how long things stay in an xfail-ed state. But that is an organizational and cultural problem.

[1] And if yours does not, it is not too hard to parse the log, gather that information, and compare it to a known-failures list somewhere.
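For what it’s worth, that sort of triage script can be pretty small. The sketch below assumes a made-up log format of one “test name PASS/FAIL” pair per line and a hard-coded known-failures set; neither comes from any particular runner.

# Rough sketch of footnote [1]: compare a run's failures against a
# known-failures list so only new failures (and unexpected passes) stand out.
# The log format and names here are assumptions for illustration only.

KNOWN_FAILURES = {"test_monkey", "test_squirrel"}  # hypothetical list

def triage(log_path):
    failed = set()
    with open(log_path) as log:
        for line in log:
            # assumed format: "<test name> <PASS|FAIL>" per line
            name, status = line.split()
            if status == "FAIL":
                failed.add(name)
    new_failures = failed - KNOWN_FAILURES       # surprises worth a look
    unexpected_passes = KNOWN_FAILURES - failed  # possibly fixed; also worth a look
    return new_failures, unexpected_passes

if __name__ == "__main__":
    new, maybe_fixed = triage("results.log")
    print("new failures: %s" % sorted(new))
    print("unexpected passes: %s" % sorted(maybe_fixed))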

Comments 5

  1. mam-p wrote:

    Hey Adam! This is ULTRA-cool info which I am putting to use RIGHT NOW!

    However, a few pysaunter-specific questions:

    1. When I run your code via pysaunter, I get “XX.” when I rather expected to get “xxX”.

    2. When I added the -v option, the two that fail display a red XPASS. I expected XFAIL messages (preferably in red!).

    3. With the -v option, the one that passed produced the same green PASSED message as with a normal pass, so I can’t realize the xfail=>pass benefit you’re pointing out above.

    4. The output from my asserts disappears for the failures. Since they’re expected failures, this might be a “feature” not a bug–just curious.

    Even with these issues, just having a third category of results is great!

    Posted 01 Feb 2012 at 11:24 pm
  2. adam wrote:

In your conftest.py, did you remove some of the code per the upgrade note for 0.36? It is here. Things act a bit funky with it in there (and it is no longer necessary due to some bug fixes in py.test).

    Posted 01 Feb 2012 at 11:27 pm
  3. mam-p wrote:

    Thanks for the quick response! I definitely had an out-of-date conftest.py. I’m now seeing a bright red XPASS and a yellow-green xfail.

    However, I’m still not seeing the output from my asserts on xfail tests. Is that expected behavior? I can see that if the test is expected to fail, one has already seen the assert output before, so why clutter things up with the output? BUT a test *could* fail in a new way, which one *might* catch if the assert output was present.

    Again–a very useful post!

    Posted 02 Feb 2012 at 3:38 pm
  4. adam wrote:

    Yes, the expected behaviour in this case is that the exception is not displayed. xfail basically says I know it failed, you don’t need to tell me about that failure. Which can lead to…

    Correct; a new failing cause going undetected is one potential pitfall of using xfail.

Though ideally one would have a responsive development group and they would fix whatever is causing you to want an xfail line in there.

    Posted 02 Feb 2012 at 4:08 pm
  5. mam-p wrote:

    I’m with you on that “ideally” bit! I’m going to first see if the dev manager will agree that any automated test with xfail status should be fixed in the very next release. And Plan B is to modify my bar-chart graph of total Selenium tests to show a “stacked bar” format, with xfails displayed as a separate segment. And of course, whenever I’m reporting how the Selenium tests did, the xfails will be listed separately. I can’t imagine this much xfail publicity failing to elicit the desired response. ;-)

    Posted 02 Feb 2012 at 6:09 pm
