UI Automation on the iPhone

As a summer intern I could have worried about just being given tasks such as making the tea, but here at Airsource, among other challenges, I was given the chance to work on improving some of the QA infrastructure through automated testing. I study Engineering and have learnt some theory regarding testing practices in software development, so it was great to put that to practical use!

With iOS 4, Apple announced the ability to do automated UI testing on the iPhone as part of the Instruments development tool. Such testing is done with the help of a number of JavaScript libraries which allow you to access an application on a device and tap buttons, scroll views and verify text as if a human, not a script, were interacting with the hardware.
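
For a flavour of what these scripts look like, here is a minimal sketch; the "Login" button and "Welcome" text are purely illustrative accessibility names, not taken from any real app.

var target = UIATarget.localTarget();
var app = target.frontMostApp();

UIALogger.logStart("Login");

// Tap a button in the app's main window by its accessibility name
app.mainWindow().buttons()["Login"].tap();

// Verify that the expected text is now on screen
if (app.mainWindow().staticTexts()["Welcome"].isValid())
{
    UIALogger.logPass("Login");
}
else
{
    UIALogger.logFail("Login");
}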

Why is this cool, other than the fact that it's something from Apple? It essentially means that it is possible for us as app developers and testers to automate some of the tests we always run to make sure we have not broken anything in our latest development cycle. There is nothing worse than working hard on adding a new feature to an app, testing it and releasing it, only to discover that your testing missed a regression in previous functionality. This forces you to make a brown paper bag release to fix it, for which you have to test everything again in addition to the things not fully tested the first time. That takes up time and effort which could be used to add more features, and more importantly it can leave your users with a broken app.

Automating the tests does not mean that they are better tests than if they were run manually. What it does mean is that tester time is saved on the mundane day-to-day tests, so testers can spend more of their time on edge cases and new features, where the harder-to-find bugs are more likely to be, and less likely to be caught by the developers as they write code.

What Airsource is doing with it

At Airsource we've been busy automating tests for our app, Optiscan. In doing so we have found that, despite what Apple have tried to do with timeouts, it is sometimes necessary to add delays to the tests. The main shortcoming in this area is that UI Automation swallows taps on elements which are not visible, meaning that your test script may expect a changed state and fail because the tap that should have changed that state was never registered. Timeouts do not help here because the element you are attempting to tap exists and is valid, so it can be tapped, but it may not yet be visible (because of an animation sequence, for example), and so the tap is never registered by the app. This is a big shortcoming in the timeout system; it would be great if Apple fixed it by retrying the tap until it lands on a visible element or the timeout is reached.

When you are writing a test you do not want to have to worry about whether the button is ready to be tapped or not, you just want to tap the button! So as part of the testing effort we have written a bunch of useful little functions which encapsulate all the delay logic and mean that we don't have to worry about delays when we're writing the tests.

The code below allows us to scroll to an element with a particular name in a scroll view, wait for it to become visible and then tap it. This simple encapsulation of a common task means that the test writer does not need to worry about whether the button has become visible before tapping it, only about how to actually test the app.

// Allows you to scroll to an element with a particular name and tap it.
function scrollToElementWithNameAndTap(scrollView, name)
{
    if (!(scrollView instanceof UIAScrollView))
    {
        throw ("Expected a UIAScrollView");
    }

    var e = scrollView.scrollToElementWithName(name);
    waitForVisible(e, 5, 0.25);
    e.tap();
}

// Poll until the element becomes visible, up to a specified timeout
function waitForVisible(element, timeout, step)
{
    if (step == null)
    {
        step = 0.5;
    }

    var stop = timeout / step;

    for (var i = 0; i < stop; i++)
    {
        target.delay(step); // for the animation
        if (element.isVisible())
        {
            return;
        }
    }
    element.logElement();
    throw ("Not visible");
}

Using the prototype property of JavaScript objects it is even possible to make scrollToElementWithNameAndTap(name) a method of every instance of UIAScrollView. So in your test set-up you can use

UIAScrollView.prototype.scrollToElementWithNameAndTap = function(name)
{
    scrollToElementWithNameAndTap(this, name);
};

and now you can call the method on all your UIAScrollViews just as if it were a native UI Automation method.
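
For example, assuming the first scroll view on screen contains an element named "Settings" (an illustrative name rather than one from Optiscan), a test simply reads:

var window = UIATarget.localTarget().frontMostApp().mainWindow();

// Scrolls to the "Settings" element, waits for it to become visible, then taps it
window.scrollViews()[0].scrollToElementWithNameAndTap("Settings");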

Doing more than it says on the tin

An unfortunate aspect of the UI Automation kit is that there is no reset button or method. The result is that if a test fails, your app is left in an unknown state, and it is very likely that all subsequent tests will also fail. Ideally you do not want it to appear as if 20 tests have failed when in fact only one has. Having a target.reset() method to put the app back into a known state would be very useful, and it is a pain that Apple have not implemented one in some form.

To remedy this we've had to add a small amount of code to our apps which makes it possible for us to interact with them at the programmatic level, not just the UI level. UI Automation lets us change the volume on the device using methods in UIATarget, so we've added a listener for the device volume to our app, which means we can run code when we detect a volume change. Code for a volume listener is shown below.

// Set up a listener to act on volume changes
// (requires linking against AudioToolbox and #import <AudioToolbox/AudioToolbox.h>)
AudioSessionInitialize(nil, nil, nil, nil);
AudioSessionSetActive(true);
AudioSessionAddPropertyListener(
                         kAudioSessionProperty_CurrentHardwareOutputVolume,
                         applicationVolumeDidChange,
                         self);

// Note that this callback will only be called if the mute button is off.
void applicationVolumeDidChange(void *inClientData,
                         AudioSessionPropertyID inID,
                         UInt32 inDataSize, const void *inData)
{
        NSLog(@"Volume changed");

        // Do something like reset the system
}

This way, calling target.clickVolumeUp(); just after UIALogger.logFail(...) in your test script means that you can reset the app before the next test is run. Note that it is important that the hardware mute switch is off while you are testing, otherwise no notification that the volume has changed will be sent!
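
As a sketch of how this fits together, a small wrapper along the lines below keeps one failure from cascading into the rest of the run; runTest and the one second delay are our own illustrative choices, not part of UI Automation.

function runTest(name, testFunction)
{
    UIALogger.logStart(name);
    try
    {
        testFunction();
        UIALogger.logPass(name);
    }
    catch (exception)
    {
        UIALogger.logFail(name + ": " + exception);

        // The volume click is picked up by the listener in the app,
        // which puts the app back into a known state
        UIATarget.localTarget().clickVolumeUp();
        UIATarget.localTarget().delay(1); // give the app a moment to reset
    }
}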

Shortcomings

Other than the swallowed taps and the lack of a reset method mentioned above, there are a couple of other shortcomings with UI Automation. The most obvious is the lack of screenshots when running the tests in the simulator; this makes it hard to verify the UI and to record what the app's state was when a test failed. To be honest it feels rather bizarre that you can take remote screenshots on actual iPhone hardware, but not in the simulator. I hope that Apple fix this in future releases.
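
On a device it is a one-liner, something like the call below, which can be dropped in next to the failure logging; the name is just a label for the image that Instruments saves.

// Only produces an image when the script is run against a real device
UIATarget.localTarget().captureScreenWithName("loginTest-failure");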

The less obvious, but rather annoying, shortcoming (especially if you are building a large library of test scripts) is the lack of a decent preprocessor. JavaScript has no native support for preprocessing or any kind of #import mechanism, so it is good that UI Automation scripts support #import, but the lack of definitions and conditional logic such as #define and #ifdef is a pain when you only want a certain initialisation script to run once, yet it is pulled in by every test you run. Emulating this behaviour with JavaScript variables is not helped by the fact that variables can remain defined between different runs in Instruments.
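
One way to emulate a run-once guard is a plain JavaScript variable in the shared file, roughly as sketched below (common.js and initialisationDone are illustrative names); as noted above it is not completely reliable, because the variable can survive between runs in Instruments.

// common.js - a shared file that every test pulls in via #import
if (typeof initialisationDone == "undefined")
{
    var initialisationDone = true;

    UIALogger.logMessage("Running one-off initialisation");
    // ... one-off set-up goes here ...
}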

Conclusions

Automated UI testing does not mean bug-free code. No testing of any non-trivial piece of software can guarantee bug-free code, and the testing is only as good as the tests themselves. Automated testing makes it easier to stop regressions, by having a test in place for each previous bug that has cropped up, and it means that the tester's time can be used more effectively. Despite its shortcomings, Apple's UI Automation ticks many boxes for good automated testing, and we hope it will prove an invaluable tool in our development cycle.

Iestyn Pryce
in iOS

Airsource design and develop apps for ourselves and for our clients. We help people like you to turn concepts into reality, taking ideas through initial design, development and ongoing maintenance and support.

Contact us today to find out how we can help you build and maintain your app.