
Writing RDM Responder Tests


This page describes how to author new RDM Responder tests. Read OLA RDM Responder Testing for information on running the tests and some general information about the testing framework.

Basic Test Structure

The tests are defined in tools/rdm/. Each test subclasses the ResponderTest class and provides a Test() method, which is used to send the RDM request.


Tests can have dependencies, which enables tests to run conditionally when certain conditions are met. Dependencies are declared in the DEPS variable.


Each test belongs to a category (defined in tools/rdm/). Categories follow those in the RDM Categories/Parameter ID Defines table in the E1.20 document, but there are some extra categories for specific behavior, like TestCategory.ERROR_CONDITIONS and TestCategory.SUB_DEVICES. The category a test belongs to is set in the CATEGORY variable.


The PID variable defines which PID this test will exercise (tests can exercise multiple PIDs, but that's more complicated). The PID variable should be set to a string that exists in the PidStore data file.
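As a rough illustration, the PID string acts as a lookup key into the PID store. The dict below is a hypothetical stand-in, not the real PidStore format (the actual store holds far more than a name-to-value mapping), although the numeric values shown are the E1.20 parameter IDs:

```python
# Toy stand-in for the PidStore data file (hypothetical structure).
PID_STORE = {
  'dmx_start_address': 0x00F0,
  'device_label': 0x0082,
  'factory_defaults': 0x0090,
}

class GetStartAddress:
  PID = 'dmx_start_address'  # must match a name in the PID store

def resolve_pid(test_class):
  """Look up a test's PID string; fail loudly if it's unknown."""
  try:
    return PID_STORE[test_class.PID]
  except KeyError:
    raise ValueError('Unknown PID %r' % test_class.PID)

print(hex(resolve_pid(GetStartAddress)))  # -> 0xf0
```

A test whose PID string isn't present in the store can't be resolved, so misspelling the name fails before any request is sent.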

Example 1

This shows a test which checks that a GET request for the DMX Start Address PID behaves correctly.

class GetStartAddress(ResponderTest):
  """GET the DMX start address."""
  PID = 'dmx_start_address'
  DEPS = [GetDeviceInfo]

  def Test(self):
    # Devices with a footprint of 0 aren't required to support this
    # PID, so expect a NACK by default.
    result = ExpectedResult.NackResponse(self.pid.value,
                                         RDMNack.NR_UNKNOWN_PID)
    if self.Deps(GetDeviceInfo).GetField('dmx_footprint') > 0:
      result = ExpectedResult.AckResponse(self.pid.value, ['dmx_address'])
    self.AddExpectedResults(result)
    self.SendGet(PidStore.ROOT_DEVICE, self.pid)

Things to note:

  • The GetStartAddress test depends on the GetDeviceInfo test to ensure that the device reports a dmx footprint > 0. This is because devices with a footprint of 0 are not required to implement the DMX Start Address PID.
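The dependency mechanism can be sketched with a toy runner. This is a simplified model, not the OLA scheduler: the FakeTest, Runner, and stand-in test classes below are invented for illustration. The point is that dependencies run first and their results are exposed through Deps():

```python
# Toy model of test dependencies (not the OLA implementation).
class FakeTest:
  DEPS = []
  def __init__(self, runner):
    self._runner = runner
  def Deps(self, dep_class):
    # Fetch the result of a dependency that has already run.
    return self._runner.results[dep_class]

class GetDeviceInfo(FakeTest):
  def Test(self):
    return {'dmx_footprint': 6}

class GetStartAddress(FakeTest):
  DEPS = [GetDeviceInfo]
  def Test(self):
    if self.Deps(GetDeviceInfo)['dmx_footprint'] > 0:
      return 'expect ACK'
    return 'expect NACK'

class Runner:
  def __init__(self):
    self.results = {}
  def Run(self, test_class):
    for dep in test_class.DEPS:
      if dep not in self.results:
        self.Run(dep)  # run dependencies first, memoized
    self.results[test_class] = test_class(self).Test()
    return self.results[test_class]

runner = Runner()
print(runner.Run(GetStartAddress))  # -> expect ACK
```

If a dependency fails, a real runner would skip the dependent test, which is why keeping DEPS small matters.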

Advanced Functionality


Warnings

Warnings can be recorded when we detect behavior which, while not serious enough to cause a failure, should still be brought to the user's attention. Warnings are printed in the summary section of the test output. To record a warning, use the AddWarning() method:

if footprint > MAX_DMX_ADDRESS:
  self.AddWarning('DMX Footprint of %d, was more than 512' % footprint) 

Advisory Messages

Advisory messages are similar to warnings, but they indicate issues that are not covered by the standard yet are likely to cause problems, e.g. a sensor temperature outside of the stated scale range.

if sensor.value > sensor.max:
  self.AddAdvisory('Sensor value %d greater than max range %d' % (sensor.value, sensor.max)) 

Skipping Tests

A test may not want to run if certain conditions aren't satisfied. Calling SetNotRun() marks a test as skipped. For example, we only want to run the GetParamDescription test if we find manufacturer-specific PIDs:

  def Test(self):
    self.params = self.Deps(GetSupportedParameters).manufacturer_parameters[:]
    if len(self.params) == 0:
      self.SetNotRun(' No manufacturer params found')

   # test continues here

Verification Methods

Sometimes it's not enough to check the presence of fields, or to use simple equality matching. The VerifyResult() method is passed the full RDM response and can be used to implement complex inter-field checks.

def VerifyResult(self, unused_status, fields):
  """Check the footprint, personalities & sub devices."""
  footprint = fields['dmx_footprint']
  if footprint > MAX_DMX_ADDRESS:
    self.AddWarning('DMX Footprint of %d, was more than 512' % footprint)
  if footprint > 0:
    personality_count = fields['personality_count']
    current_personality = fields['current_personality']
    if personality_count == 0:
      self.AddWarning('DMX Footprint non 0, but no personalities listed')


Mixins

Mixins are classes which abstract away common functionality to make it easier to author tests. Mixins are defined in tools/rdm/

class GetDeviceLabel(TestMixins.GetLabelMixin, ResponderTest):
  """GET the device label."""
  PID = 'device_label'

This test doesn't need any code at all. We simply inherit from GetLabelMixin, which provides its own Test() method. When using mixins, remember to inherit from ResponderTest last; otherwise the Test() method in ResponderTest will be used and the test will be marked as BROKEN.
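The ordering rule comes from Python's method resolution order, which searches base classes left to right. A toy example (stand-in classes, not the real OLA ones) shows both orderings:

```python
# Stand-in classes illustrating why base-class order matters.
class ResponderTest:
  def Test(self):
    return 'ResponderTest.Test'  # placeholder; test would be BROKEN

class GetLabelMixin:
  def Test(self):
    return 'GetLabelMixin.Test'  # the implementation we want

class RightOrder(GetLabelMixin, ResponderTest):
  pass

class WrongOrder(ResponderTest, GetLabelMixin):
  pass

print(RightOrder().Test())  # -> GetLabelMixin.Test
print(WrongOrder().Test())  # -> ResponderTest.Test
```

With ResponderTest listed first, its Test() shadows the mixin's, which is exactly the BROKEN case described above.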

The IsSupportedMixin allows for easy testing based on whether support for the parameter has been declared. From the mixin code:

class IsSupportedMixin(object):
  """A Mixin that changes the result if the pid isn't in the supported list."""
  DEPS = [GetSupportedParameters]

  def PidSupported(self):
    return self.Deps(GetSupportedParameters).SupportsPid(self.pid.value)

  def AddIfSupported(self, result):
    if not self.PidSupported():
      result = ExpectedResult.NackResponse(self.pid.value,
                                           RDMNack.NR_UNKNOWN_PID)
    self.AddExpectedResults(result)

Tests can use this like so:

class GetFactoryDefaults(IsSupportedMixin, ResponderTest):
  """GET the factory defaults pid."""
  PID = 'factory_defaults'

  def Test(self):
    self.AddIfSupported(
        ExpectedResult.AckResponse(self.pid.value, ['using_defaults']))
    self.SendGet(PidStore.ROOT_DEVICE, self.pid)

This test will send a GET request for the factory_defaults pid. If this pid was listed in the supported parameters, the test will expect an ACK response. If it wasn't listed, a NR_UNKNOWN_PID response will be expected.
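That branching can be sketched as a plain function. This is assumed behavior based on the description above, not the framework's actual code, and the tuple representation of results is invented for illustration:

```python
# Sketch of the AddIfSupported decision (assumed behavior, not OLA code).
def add_if_supported(supported_pids, pid, ack_result):
  # NACK with NR_UNKNOWN_PID when the PID wasn't declared as supported;
  # otherwise keep the caller's ACK expectation.
  if pid not in supported_pids:
    return ('NACK', 'NR_UNKNOWN_PID')
  return ack_result

supported = {'device_label', 'factory_defaults'}
print(add_if_supported(supported, 'factory_defaults',
                       ('ACK', ['using_defaults'])))
print(add_if_supported(set(), 'factory_defaults',
                       ('ACK', ['using_defaults'])))
```

Either way the test knows exactly one expected outcome before the request is sent, which is the design goal behind the "avoid multiple expected responses" guideline below.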

Guidelines
  • Avoid the use of multiple expected responses. With good use of test dependencies, a test should know what to expect before we send the request.
  • Always include a doc string. These are used in the debugging output to describe the test.
  • Try to keep the number of dependencies for each test to a minimum. Additional dependencies increase the chance the test won't be run because a dependency failed.
  • Use the IsSupportedMixin wherever possible.