Get help with testing, discuss unit testing strategies etc.


Post by chauncey-garrett »

How can I run the Selenium binding's
driver.quit()
at the end of every test? This is needed so that Sauce Labs sees each test as a separate run and uses a new VM per test.

Related, is there a way to run raw selenium code?

Post by nickolay »

It's not possible to use raw Selenium code in a test. You can achieve the same effect with the "--chunk-size" config option - set it to 1. A new VM will then be created for every test file. It will be significantly slower, though.

Post by chauncey-garrett »

This worked, thanks!

Though the test setup is slower, we are in the process of parallelizing our test runs. Sauce Labs recommends that tests be parallelized and that the equivalent of driver.quit() be run at the end of every test.

Is there a way that I can set
--cap "name=myTestname"
on a per-test basis? Since I'm running the tests in parallel, one test per chunk, it'd be useful to have each job named after the test it runs. As of now, every job has the same name, and that's (obviously) not particularly useful.

Post by nickolay »

No easy way to do this, unfortunately. The page (Selenium session) is currently created before the task (the tests to run) is assigned to it.

Post by chauncey-garrett »

Hmmm.... ok it looks like this could also be accomplished via Sauce Labs' REST API:

https://wiki.saucelabs.com/display/DOCS ... s+REST+API

using
PUT /rest/v1/USERNAME/jobs/JOB_ID
https://wiki.saucelabs.com/display/DOCS/Job+Methods

I'd need the JOB_ID and it looks like Siesta already uses it to update the pass/fail status of a job with the Sauce Labs API. Would that be something you could provide access to?

Ideally this would be something that Siesta could handle on its own ;)

Another idea along this line: we've built the ability to filter our tests using tags placed on each test. The tags allow filtering by inclusion and exclusion, and this functionality has been quite helpful in segregating unit/functional tests, features, sanity/smoke/regression suites, and tests in quarantine. It'd be great if Siesta had this sort of functionality built in! The reason I bring it up here, though, is that I'd also like to annotate our tests with these tags so I can have a one-to-one match between tags locally and tags in Sauce Labs. That will help us make good use of their analytics feature, which will be greatly expanded this year.
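For reference, our tag filtering boils down to something like this (a simplified sketch - `tags` is just an array we attach to each test descriptor ourselves, not an existing Siesta feature):

```javascript
// Keep a test when it carries at least one included tag (or no
// include filter is given) and none of the excluded tags.
function matchesTags (testTags, include, exclude) {
    var hasIncluded = !include.length || include.some(function (tag) {
        return testTags.indexOf(tag) !== -1
    })

    var hasExcluded = exclude.some(function (tag) {
        return testTags.indexOf(tag) !== -1
    })

    return hasIncluded && !hasExcluded
}

function filterTests (tests, include, exclude) {
    return tests.filter(function (test) {
        return matchesTags(test.tags || [], include, exclude)
    })
}

// Example:
// filterTests(
//     [ { url : 'login.t.js', tags : ['smoke'] },
//       { url : 'flaky.t.js', tags : ['quarantine'] } ],
//     ['smoke'], ['quarantine']
// ) // keeps only login.t.js
```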

In summary, I'm asking for 2 things really (3 that would be useful):

1. That the information needed to use Sauce Labs' REST API (like the JOB_ID and USERNAME) be made available to us, since Siesta already knows it (this would solve all my current needs in terms of annotating tests and allow future interaction with their API)
2. If possible, natively annotate the Sauce Labs job name with the name of the test in Siesta's harness
3. Add built-in support for tagging tests and groups of tests for filtering and annotation.

Thanks again!

Post by nickolay »

Oh, good point about the job id, I completely forgot about it.

So right now what you can do is:
- find this code in "bin/siesta-launcher-all.js":
                me.resolveBeforeDestroy.push(new Promise(function (resolve) {

                    var request     = require('https').request({
                        method      : 'PUT',
                        host        : 'saucelabs.com',
                        path        : '/rest/v1/' + sl.userName + '/jobs/' + jobId,
                        auth        : sl.userName + ':' + sl.key,
                        timeout     : 10000
                    }, function (response) {
                        response.setEncoding('utf8')

                        var rawData     = '';

                        response.on('data', function (chunk) { return rawData += chunk })

                        response.on('end', function () {
                            resolve()
                        });
                    })

                    request.on('error', resolve)

                    request.end(JSON.stringify({
                        passed      : passed
                    }))
                }).then(function () {
                    me.debug('Job [' + jobId + '] updated')

                    return Promise.resolve()
                }))
- copy the whole section and insert it right below, so there will be 2 "pushes" to "me.resolveBeforeDestroy"
- adapt it to call the SauceLabs API method you mentioned
- that should give you the name assignment
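Spelled out as a standalone function, the copied-and-adapted block would look roughly like this (a sketch only - "me", "sl" and "jobId" come from the surrounding launcher code, so they are parameters here, and "requestFn" is injectable just to make it testable, defaulting to the real https.request):

```javascript
// A standalone sketch of the duplicated "resolveBeforeDestroy" block,
// adapted to set the job name instead of the pass/fail status.
// "sl" is expected to carry { userName, key }, as in the launcher code.
function pushJobNameUpdate (resolveBeforeDestroy, sl, jobId, name, requestFn) {
    requestFn = requestFn || require('https').request

    resolveBeforeDestroy.push(new Promise(function (resolve) {
        var request = requestFn({
            method  : 'PUT',
            host    : 'saucelabs.com',
            path    : '/rest/v1/' + sl.userName + '/jobs/' + jobId,
            auth    : sl.userName + ':' + sl.key,
            timeout : 10000
        }, function (response) {
            response.on('data', function () {})
            response.on('end', resolve)
        })

        request.on('error', resolve)

        // The only real change from the original block: send "name"
        // instead of "passed"
        request.end(JSON.stringify({ name : name }))
    }))
}
```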

Addressing your questions:
1. How should this information be provided? The thing is, there's no user-visible launcher API. And the harness and the launcher live in different processes (the harness possibly running in some remote VM). They can of course communicate - let's say the launcher could be calling some harness methods or something - but the exact details of this are currently unclear.
2. Probably related to 1. In general you are asking for some sort of "Launcher API" with more precise control over the automation process.
Note, however, that running 1 test per chunk is not a primary use case (and with multiple tests per chunk there would be multiple names to assign to the job).
3. This is a good idea, ticket created: https://app.assembla.com/spaces/bryntum ... ts/details We'll try to do this in one of the following releases.

Post by chauncey-garrett »

1. My initial thought is that if the information required to use the Sauce Labs REST API were made available from the launcher to the harness, then all the API requests could be made from within the test run (either while the test is running or in a setup/teardown step).

So you might have a
test.sauce.getJobId()
method that would return the current JOB_ID or a similar one that would return an object with metadata related to the current test run.
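To make that concrete, the accessor could be as simple as this (entirely hypothetical - neither test.sauce nor these methods exist in Siesta today; the launcher would have to feed this metadata to the harness somehow):

```javascript
// Hypothetical accessor the harness could expose once the launcher
// passes Sauce Labs metadata across the process boundary.
function createSauceAccessor (meta) {
    return {
        getJobId    : function () { return meta.jobId },
        getUserName : function () { return meta.userName },

        // Everything at once, for custom REST calls:
        getMetadata : function () {
            return { jobId : meta.jobId, userName : meta.userName }
        }
    }
}

// Usage inside a test might then look like:
// var sauce = test.sauce
// sauce.getJobId() // the current Sauce Labs JOB_ID
```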

PS There are other reasons you should add a
test.sauce
class: https://wiki.saucelabs.com/display/DOCS ... Sauce+Labs

---

2. As far as running one test per chunk goes, I think this should only be done together with parallelization (and in random order). There are a couple of main benefits from doing so:

- A chunk size of one prevents and surfaces many problems (non-determinism, leaks, etc.) by forcing isolation between tests
- Parallelization offsets the additional startup time cost and reduces overall time to get back results

Those are well-known testing best practices but there are additional benefits from doing so when using Sauce Labs:

- Each test has its own video/screenshots/logs. I think the default time between chunks is something like 30 minutes - trying to find a specific test that broke within that window is time-consuming, especially given that there are no bookmarks of test runs. I can't tell you how useful this is when debugging!
- Each test (assuming it has a unique name--and this is why I want to add this capability) will now have analytics around it (number of failures, on which browser/platform/build, test runtime, etc.)

---

3. Thanks! Having this capability has been very useful for us.

Post by nickolay »

Ok, item 1 is something big and probably for the next major release. It has to be done in some generic way, because Sauce Labs is only one of the cloud testing providers, and the API should work with all of them. We'll figure something out.

Regarding 2 - did the patch for siesta-launcher-all.js work? As far as I can see, you only need to change this part:
request.end(JSON.stringify({
    passed      : passed
}))
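So the duplicated block would end the request with something like this ("name" is the field the Sauce Labs job-update endpoint expects when renaming a job; the value itself is whatever per-test name you want):

```javascript
// In the copied block, replace the pass/fail payload with the job
// name - "name" is the field the Sauce Labs job-update endpoint
// uses for the job title:
var payload = JSON.stringify({
    name : 'myTestName'     // your per-test name, as it should appear in Sauce Labs
})

// request.end(payload)
```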

Post by chauncey-garrett »

I'll have to give it a shot tomorrow. I don't think I'll have any issues using the Sauce API, but I'm not sure how I'd know which test(s) are run during a particular chunk. I can give them a naming convention like test1, test2, test3, so they'll at least have unique names. However, from our discussion it sounds like it's not currently possible to know the name of the test(s) run in a particular chunk. Is that correct?
