Writing RDM Responder Tests
This page describes how to author new RDM Responder tests. Read OLA RDM Responder Testing for information on running the tests and some general information about the testing framework.
Basic Test Structure
The tests are defined in tools/rdm/TestDefinitions.py. Each test subclasses the ResponderTest class and provides a Test() method, which sends the RDM request.
Tests can have dependencies on other tests, which allows a test to run conditionally based on earlier results. Dependencies are listed in the DEPS variable.
Each test belongs to a category (defined in tools/rdm/ResponderTest.py). Categories follow those in the RDM Categories/Parameter ID Defines table in the E1.20 document, but there are some extra categories for specific behavior, such as TestCategory.ERROR_CONDITIONS and TestCategory.SUB_DEVICES. The category a test belongs to is set in the CATEGORY variable.
The PID variable defines which PID this test will exercise (tests can exercise multiple PIDs, but that's more complicated). The PID variable should be set to a string that exists in the PidStore data file.
The following test checks that a GET request for the DMX Start Address PID behaves correctly.
class GetStartAddress(ResponderTest):
  """GET the DMX start address."""
  CATEGORY = TestCategory.DMX_SETUP
  PID = 'dmx_start_address'
  DEPS = [GetDeviceInfo]

  def Test(self):
    result = ExpectedResult.NackResponse(self.pid.value,
                                         RDMNack.NR_UNKNOWN_PID)
    if self.Deps(GetDeviceInfo).GetField('dmx_footprint') > 0:
      result = ExpectedResult.AckResponse(self.pid.value, ['dmx_address'])
    self.AddExpectedResults(result)
    self.SendGet(PidStore.ROOT_DEVICE, self.pid)
Things to note:
- The GetStartAddress test depends on the GetDeviceInfo test to ensure that the device reports a dmx footprint > 0. This is because devices with a footprint of 0 are not required to implement the DMX Start Address PID.
Warnings can be recorded when we detect behavior which, while not serious enough to cause a failure, should still be corrected. Warnings are printed in the summary section of the test output. To record a warning, use the AddWarning() method:
if footprint > MAX_DMX_ADDRESS:
  self.AddWarning('DMX Footprint of %d, was more than 512' % footprint)
A test may not want to run if certain conditions aren't satisfied. The PreCondition() method allows a test to prevent itself from running. For example, we only want to run the GetParamDescription test if we find manufacturer-specific PIDs:
def PreCondition(self):
  params = self.Deps(GetSupportedParameters).supported_parameters
  self.params = [p for p in params if p >= 0x8000 and p < 0xffe0]
  return len(self.params) > 0
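The range in the list comprehension above selects the manufacturer-specific PID space. The filter can be tried standalone; the sample PID list below is made up for illustration:

```python
# Manufacturer-specific PIDs occupy 0x8000 (inclusive) to 0xffe0 (exclusive).
supported_parameters = [0x0060, 0x0082, 0x8001, 0x8002, 0xffe5]  # sample data
manufacturer_pids = [p for p in supported_parameters
                     if p >= 0x8000 and p < 0xffe0]
print(manufacturer_pids)  # only 0x8001 and 0x8002 survive the filter
```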
Sometimes it's not enough to check the presence of fields, or to use simple equality matching. The VerifyResult() method is passed the full RDM response and can be used to implement complex inter-field checking.
def VerifyResult(self, unused_status, fields):
  """Check the footprint, personalities & sub devices."""
  footprint = fields['dmx_footprint']
  if footprint > MAX_DMX_ADDRESS:
    self.AddWarning('DMX Footprint of %d, was more than 512' % footprint)
  if footprint > 0:
    personality_count = fields['personality_count']
    current_personality = fields['current_personality']
    if personality_count == 0:
      self.AddWarning('DMX Footprint non 0, but no personalities listed')
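The same inter-field logic can be sketched as a standalone function, which makes it easy to see which field combinations trigger warnings (these are toy stand-ins, not the framework API; the real method records warnings via self.AddWarning()):

```python
MAX_DMX_ADDRESS = 512

def check_device_info(fields):
  """Return a list of warning strings for inconsistent DEVICE_INFO fields."""
  warnings = []
  footprint = fields['dmx_footprint']
  if footprint > MAX_DMX_ADDRESS:
    warnings.append('DMX Footprint of %d, was more than 512' % footprint)
  if footprint > 0 and fields['personality_count'] == 0:
    warnings.append('DMX Footprint non 0, but no personalities listed')
  return warnings

# A footprint of 600 with no personalities triggers both warnings.
print(check_device_info({'dmx_footprint': 600, 'personality_count': 0}))
```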
Mixins are classes which abstract away common functionality to make it easier to author tests. Mixins are defined in tools/rdm/TestMixins.py.
class GetDeviceLabel(TestMixins.GetLabelMixin, ResponderTest):
  """GET the device label."""
  CATEGORY = TestCategory.PRODUCT_INFORMATION
  PID = 'device_label'
This test didn't need any code at all. We simply inherit from GetLabelMixin, which provides its own Test() method. When using Mixins, remember to inherit from ResponderTest last; otherwise the Test() method in ResponderTest will be used and the test will be marked as BROKEN.
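The inheritance-order rule is a consequence of Python's method resolution order, which can be demonstrated with toy classes (these are simplified stand-ins, not the real framework classes):

```python
class ResponderTest:
  def Test(self):
    return 'base Test(), test marked BROKEN'

class GetLabelMixin:
  def Test(self):
    return 'mixin Test() ran'

# Mixin listed first, ResponderTest last: the mixin's Test() is found first.
class GoodTest(GetLabelMixin, ResponderTest):
  pass

# ResponderTest listed first: its Test() shadows the mixin's.
class BadTest(ResponderTest, GetLabelMixin):
  pass

print(GoodTest().Test())  # mixin Test() ran
print(BadTest().Test())   # base Test(), test marked BROKEN
```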
- Avoid the use of multiple expected responses. With good use of test dependencies, a test should know what to expect before we send the request.
- Always include a docstring. Docstrings are used in the debugging output to describe the test.
- Try to keep the number of dependencies for each test to a minimum. Additional dependencies increase the chance the test won't be run because a dependency failed.
- Use the IsSupportedMixin wherever possible.