Writing RDM Responder Tests

From wiki.openlighting.org

Jump to: navigation, search
 
(One intermediate revision by one other user not shown)
Line 54: Line 54:
 
=== Advisory Messages ===
 
=== Advisory Messages ===
  
Advisory messages are similar to warnings but they indicate issues that are not covered by the standard but are likley to cause problems i.e a sensor temperature out side of the stated scale range.
+
Advisory messages are similar to warnings but they indicate issues that are not covered by the standard but are likely to cause problems i.e a sensor temperature out side of the stated scale range.
  
 
<pre>
 
<pre>
Line 61: Line 61:
 
</pre>
 
</pre>
  
=== Pre Conditions ===
+
=== Skipping Tests ===
  
A test may not want to run if certain conditions aren't satisfied. The PreCondition() method allows a test to prevent itself from running. For example, we only want to run the GetParamDescription test if we find manufacturer specific PIDS:
+
A test may not want to run if certain conditions aren't satisfied. Calling SetNotRun() means that a test will be marked as skipped. For example, we only want to run the GetParamDescription test if we find manufacturer specific PIDS:
  
 
<pre>
 
<pre>
def PreCondition(self):                                                                                                                                                  
+
  def Test(self):
  params = self.Deps(GetSupportedParameters).supported_parameters
+
    self.params = self.Deps(GetSupportedParameters).manufacturer_parameters[:]
  self.params = [p for p in params if p >= 0x8000 and p < 0xffe0]
+
    if len(self.params) == 0:
  return len(self.params) > 0
+
      self.SetNotRun(' No manufacturer params found')
 +
      self.Stop()
 +
      return
 +
 
 +
  # test continues here
 
</pre>
 
</pre>
  

Latest revision as of 08:57, 1 February 2011

This page describes how to author new RDM Responder tests. Read OLA RDM Responder Testing for information on running the tests and some general information about the testing framework.

Basic Test Structure

The tests are defined in tools/rdm/TestDefinitions.py. Each test subclasses the ResponderTest class and provides the Test() method, which is used to send the RDM request.
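
A minimal sketch of that structure (the class name, category, PID and field name below are placeholders for illustration, not real entries from TestDefinitions.py):

class GetSomePid(ResponderTest):
  """One line describing the test; this appears in the test output."""
  CATEGORY = TestCategory.PRODUCT_INFORMATION  # see Categories below
  PID = 'some_pid'                             # see PID below

  def Test(self):
    # State what we expect back, then send the GET request.
    self.AddExpectedResults(
      ExpectedResult.AckResponse(self.pid.value, ['some_field']))
    self.SendGet(PidStore.ROOT_DEVICE, self.pid)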

Dependencies

Tests can have dependencies, which allows a test to run conditionally based on the results of other tests. Dependencies are defined in the DEPS variable.
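
For example, a test that needs the device's DMX footprint declares a dependency on the GetDeviceInfo test (used in Example 1 below) and reads the value back through self.Deps():

  DEPS = [GetDeviceInfo]  # GetDeviceInfo must have run, and passed, first

  def Test(self):
    # Data gathered by the dependency is available via self.Deps()
    footprint = self.Deps(GetDeviceInfo).GetField('dmx_footprint')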

Categories

Each test belongs to a category (defined in tools/rdm/ResponderTest.py). Categories follow those in the RDM Categories/Parameter ID Defines table in the E1.20 document, but there are some extra categories for specific behavior such as TestCategory.ERROR_CONDITIONS and TestCategory.SUB_DEVICES. The category a test belongs to is set in the CATEGORY variable.

PID

The PID variable defines which PID this test will exercise (tests can exercise multiple PIDs, but that's more complicated). It should be set to a string that exists in the PidStore data file.

Example 1

This shows a test which checks that a GET request for the DMX Start Address PID behaves correctly.

class GetStartAddress(ResponderTest):
  """GET the DMX start address."""
  CATEGORY = TestCategory.DMX_SETUP
  PID = 'dmx_start_address'
  DEPS = [GetDeviceInfo] 
  
  def Test(self):
    # Devices with a footprint of 0 aren't required to implement this
    # PID, so in that case a NACK with NR_UNKNOWN_PID is acceptable.
    result = ExpectedResult.NackResponse(self.pid.value,
                                         RDMNack.NR_UNKNOWN_PID)
    if self.Deps(GetDeviceInfo).GetField('dmx_footprint') > 0:
      result = ExpectedResult.AckResponse(self.pid.value, ['dmx_address'])
    self.AddExpectedResults(result)
    self.SendGet(PidStore.ROOT_DEVICE, self.pid)

Things to note:

  • The GetStartAddress test depends on the GetDeviceInfo test so it can check whether the device reports a DMX footprint > 0. This is because devices with a footprint of 0 are not required to implement the DMX Start Address PID.

Advanced Functionality

Warnings

Warnings can be recorded when we detect behavior which, while not serious enough to cause a failure, should still be reported. Warnings are printed in the summary section of the test output. To record a warning, use the AddWarning() method:

if footprint > MAX_DMX_ADDRESS:
  self.AddWarning('DMX Footprint of %d, was more than 512' % footprint) 

Advisory Messages

Advisory messages are similar to warnings, but they indicate issues that are not covered by the standard yet are likely to cause problems, e.g. a sensor temperature outside the stated scale range. To record one, use the AddAdvisory() method:

if sensor.value > sensor.max:
  self.AddAdvisory('Sensor value %d greater than max range %d' % (sensor.value, sensor.max)) 

Skipping Tests

A test may not want to run if certain conditions aren't satisfied. Calling SetNotRun() means that a test will be marked as skipped. For example, we only want to run the GetParamDescription test if we find manufacturer-specific PIDs:

  def Test(self):
    self.params = self.Deps(GetSupportedParameters).manufacturer_parameters[:]
    if len(self.params) == 0:
      self.SetNotRun(' No manufacturer params found')
      self.Stop()
      return

    # test continues here

Verification Methods

Sometimes it's not enough to check the presence of fields, or use simple equality matching. The VerifyResult() method is passed the full RDM response and can be used to implement complex inter-field checking.

def VerifyResult(self, unused_status, fields):
  """Check the footprint, personalities & sub devices."""
  footprint = fields['dmx_footprint']
  if footprint > MAX_DMX_ADDRESS:
    self.AddWarning('DMX Footprint of %d, was more than 512' % footprint)
  if footprint > 0:
    personality_count = fields['personality_count']
    current_personality = fields['current_personality']
    if personality_count == 0:
      self.AddWarning('DMX Footprint non 0, but no personalities listed')

Mixins

Mixins are classes which abstract away common functionality to make it easier to author tests. Mixins are defined in tools/rdm/TestMixins.py.

class GetDeviceLabel(TestMixins.GetLabelMixin, ResponderTest):
  """GET the device label."""
  CATEGORY = TestCategory.PRODUCT_INFORMATION
  PID = 'device_label'

This test didn't need any code at all. We simply inherit from the GetLabelMixin, which provides its own Test() method. When using mixins, remember to list ResponderTest last; otherwise the Test() method in ResponderTest will be used and the test will be marked as BROKEN.
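
A sketch of the two orderings, using the GetDeviceLabel test above (the second class name is made up for illustration):

# Correct: the mixin is listed first, so its Test() method is found
# before ResponderTest.Test() in the method resolution order.
class GetDeviceLabel(TestMixins.GetLabelMixin, ResponderTest):
  PID = 'device_label'

# Wrong: ResponderTest.Test() is found first, the mixin's Test() never
# runs, and the test is marked as BROKEN.
class GetDeviceLabelBroken(ResponderTest, TestMixins.GetLabelMixin):
  PID = 'device_label'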


The IsSupportedMixin allows for easy testing based on whether support for the parameter has been declared. From the mixin code:


class IsSupportedMixin(object):
  """A Mixin that changes the result if the pid isn't in the supported list."""
  DEPS = [GetSupportedParameters]

  def PidSupported(self):
    return self.Deps(GetSupportedParameters).SupportsPid(self.pid)

  def AddIfSupported(self, result):
    if not self.PidSupported():
      result = ExpectedResult.NackResponse(self.pid.value,
                                           RDMNack.NR_UNKNOWN_PID)
    self.AddExpectedResults(result)

Tests can use this like so:


class GetFactoryDefaults(IsSupportedMixin, ResponderTest):
  """GET the factory defaults pid."""
  CATEGORY = TestCategory.PRODUCT_INFORMATION
  PID = 'factory_defaults'

  def Test(self):
    self.AddIfSupported(
      ExpectedResult.AckResponse(self.pid.value, ['using_defaults']))
    self.SendGet(PidStore.ROOT_DEVICE, self.pid)

This test will send a GET request for the factory_defaults pid. If this pid was listed in the supported parameters, the test will expect an ACK response. If it wasn't listed, a NACK with NR_UNKNOWN_PID will be expected.

Guidelines

  • Avoid the use of multiple expected responses. With good use of test dependencies, a test should know what to expect before we send the request.
  • Always include a docstring. These are used in the debugging output to describe the test.
  • Try to keep the number of dependencies for each test to a minimum. Additional dependencies increase the chance the test won't be run because a dependency failed.
  • Use the IsSupportedMixin wherever possible.