RDM Responder Testing



As part of the Open Lighting Project, a suite of tests for RDM responders has been developed. This enables manufacturers to check how well an RDM device conforms to the E1.20 specification. The test cases are written in Python and use the Open Lighting Architecture (OLA) to communicate with devices.

Set Up the Test Rig

The following controller devices are supported:

Connect the device under test to the controller device and start olad. Patch the output port on the controller device to a universe (UNIVERSE_NUMBER below). Then run ola_rdm_discover; you should see the responder's UID appear:

 $ ola_rdm_discover -u UNIVERSE_NUMBER
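
If you'd rather confirm discovery from Python than from the command line, something like the sketch below can be used. It assumes the OLA Python bindings are installed and olad is running; the FetchUIDList call and its callback signature are based on the OLA Python client but may differ between versions, so treat them as assumptions to verify against your installation.

 from ola.ClientWrapper import ClientWrapper
 UNIVERSE = 1  # the universe the controller's output port is patched to
 def uids_received(state, uids):
   # Print each discovered UID, then stop the event loop.
   if state.Succeeded():
     for uid in uids:
       print(uid)
   wrapper.Stop()
 wrapper = ClientWrapper()
 client = wrapper.Client()
 # Ask olad for the UIDs it currently knows about on this universe.
 client.FetchUIDList(UNIVERSE, uids_received)
 wrapper.Run()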

Running the Tests

The tests are written in Python and run from the command line. Below is the output from a typical test run:

 ./ --universe 1  --pid_file ../../python/pids.config  00a1:00010003
 Starting tests, universe 3, UID 00a1:00010003
 SetManufacturerLabel: Passed
 SetSoftwareVersionLabel: Passed
 GetManufacturerLabel: Passed
 GetSoftwareVersionLabelWithData: Failed
 ------------- Warnings --------------
 ------------ By Category ------------
   Product Information:  7 /  7   100%
       RDM Information:  1 /  1   100%
    Core Functionality:  2 /  2   100%
      Error Conditions: 10 / 16   62%
          DMX512 Setup:  3 /  3   100%
 29 / 30 tests run, 23 passed, 6 failed, 0 broken

Useful Options

The test script has a number of options which can assist in debugging failures. For a full list of options, run it with -h.

-d, --debug
Show all debugging output, including actual & expected responses.
-l, --log
Log the output of the tests to a file. The UID and timestamp are appended to the filename.
-t Test1,Test2, --tests=Test1,Test2
Run only a subset of the tests; only the tests listed (and their dependencies) will be run.

Information on Tests

Some tests have dependencies, which are other tests that need to be completed before the test can be run. Dependencies can be used to check for supported parameters and other conditions that may affect responder behavior.
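
As an illustration of the idea only (the class and attribute names below are hypothetical, not the suite's actual API), a dependency-aware runner can be sketched like this: each test lists the tests that must complete before it, and the runner executes those dependencies first.

 # Illustration only: a minimal dependency-aware runner. The class and
 # attribute names here are hypothetical, not the suite's actual API.
 class Test(object):
   deps = []  # tests that must complete before this one
   def run(self):
     print('running %s' % type(self).__name__)
     return 'Passed'
 class GetSupportedParameters(Test):
   pass
 class GetManufacturerLabel(Test):
   deps = [GetSupportedParameters]
 def run_with_deps(test_cls, results):
   # Run test_cls, first running any of its dependencies that haven't run yet.
   if test_cls in results:
     return results[test_cls]
   for dep in test_cls.deps:
     run_with_deps(dep, results)
   results[test_cls] = test_cls().run()
   return results[test_cls]
 run_with_deps(GetManufacturerLabel, {})
 # GetSupportedParameters runs first, then GetManufacturerLabel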

There are 4 result states for a test:

Passed
The responder replied with the expected result.
Failed
The responder failed to reply, or replied with an unexpected result.
Not Run
This test wasn't run because the responder doesn't support the required functionality.
Broken
An internal error occurred; this indicates a programming error or a problem with the test rig.
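
The summary line at the end of a run tallies tests in each of these states; tests in the Not Run state don't count towards the number run. A small illustration of that tally, using made-up results rather than output from a real run:

 from collections import Counter
 # Made-up result states for five tests, for illustration only.
 results = ['Passed', 'Passed', 'Failed', 'Not Run', 'Broken']
 counts = Counter(results)
 tests_run = len(results) - counts['Not Run']
 print('%d / %d tests run, %d passed, %d failed, %d broken' %
       (tests_run, len(results), counts['Passed'], counts['Failed'], counts['Broken']))
 # -> 4 / 5 tests run, 2 passed, 1 failed, 1 broken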