Goal: Reason explicitly about the undependability of engineered artifacts as it relates to the artifact's operational envelope.

Subgoal-1: capture and model (using UMD) the operational envelope of the artifact, including the cost/benefit reasoning that led to it.

Subgoal-2: capture and model whatever information is available about the artifact's behavior outside of the envelope.

Subgoal-3: provide runtime information to operational staff on the artifact's dependability in terms of its position within the computed envelope.

Subgoal-4: provide runtime support to operational staff when the artifact moves out of its envelope.  
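To make the subgoals concrete, an operational envelope can be captured, at minimum, as a set of dependable ranges over monitored parameters. The sketch below is a hypothetical representation (the parameter names and class layout are illustrative, not the project's actual UMD notation):

```python
# Minimal sketch of an operational-envelope model (hypothetical
# representation; the project's UMD models capture far more, such as
# the cost/benefit reasoning behind each bound).
from dataclasses import dataclass

@dataclass(frozen=True)
class ParameterRange:
    name: str    # e.g. "battery_voltage" (illustrative parameter name)
    low: float   # lower bound considered dependable
    high: float  # upper bound considered dependable

class OperationalEnvelope:
    def __init__(self, ranges):
        self.ranges = {r.name: r for r in ranges}

    def contains(self, state):
        """True when every monitored parameter lies inside its range.

        A parameter missing from the telemetry is treated as a
        violation (NaN fails every comparison).
        """
        return all(
            r.low <= state.get(name, float("nan")) <= r.high
            for name, r in self.ranges.items()
        )
```

For example, an envelope built from a single `battery_voltage` range of 11.0-14.0 V would report a state of 12.5 V as inside and 15.0 V as outside.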

Questions that will drive our experiments:

1. Does knowing the "operational envelope" improve the dependability of the artifact?

2. If so, is there a generalizable method for finding that envelope?

Experimental infrastructure, part a: we have developed a testbed with three major components:

  1. a computer running MDS applications,
  2. a robot working in a physical environment, being directed by MDS and sending sensor data back to MDS (over a wireless link), and
  3. a simulator running in a simulated environment, being directed by MDS and sending (simulated) sensor data back to MDS (over a wireless link). We are currently working with the SCRover application.

Question-a: how can we generate a wide set of environments for evaluation and risk assessment?

Metric-a.1: using the experimental setup (physical robot in combination with simulator), how many environments can be generated and at what cost?

Metric-a.2: How often do the environments generated lead to useful data in determining cost/benefit tradeoffs on dependability factors?

Question-b: The experimental set-up relies at least partly on generating environments using the simulator. How can we trust the simulator?

Metric-b.1: determine the fidelity of the simulator when compared to the physical robot. Record the comparisons of robot behavior and simulated behavior over an identical set of environments.
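One simple way to score such comparisons is a mean absolute deviation between paired robot and simulator readings over the same environments. This is a sketch of one possible fidelity score, not the project's defined metric:

```python
# Hypothetical fidelity score for Metric-b.1: mean absolute deviation
# between paired robot and simulator observations taken over an
# identical set of environments (lower is higher fidelity).
def fidelity(robot_runs, sim_runs):
    """robot_runs, sim_runs: lists of equal-length reading sequences,
    one sequence per environment, aligned observation-by-observation."""
    total, count = 0.0, 0
    for robot, sim in zip(robot_runs, sim_runs):
        for r, s in zip(robot, sim):
            total += abs(r - s)
            count += 1
    return total / count if count else 0.0
```

A scalar score like this makes it easy to track whether simulator fidelity degrades as new environment classes are added.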

Experimental infrastructure, part b: we are working on a means of capturing descriptions of the operational envelope that can be reasoned about at runtime. A runtime monitor gathers what data it can from the artifact (e.g., telemetry data), and compares it with the known envelope.
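The comparison step of such a monitor might classify each telemetry parameter against its envelope bounds, flagging values that are outside the envelope or approaching its edge. The function below is a minimal sketch; the `margin` threshold and the three-way classification are assumptions, not the project's design:

```python
# Sketch of the per-parameter check inside a runtime envelope monitor.
# `margin` (fraction of the range treated as "near-edge") is an
# illustrative tuning knob, not a value from the project.
def classify(value, low, high, margin=0.1):
    """Return 'outside', 'near-edge', or 'inside' for one parameter."""
    if not (low <= value <= high):
        return "outside"
    span = high - low
    if value - low < margin * span or high - value < margin * span:
        return "near-edge"
    return "inside"
```

A 'near-edge' result gives operational staff early warning before the artifact actually leaves the envelope, which is what Question-d below probes.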

Question-c: Given that an engineer marks certain environments as inside the operational envelope and others as outside the envelope, how can this information be used at runtime to compute dependability?

Metric-c.1: using a combination of physical robot and simulator, how often is monitoring successful in detecting compliance with and deviance from the operational envelope?

Question-d: Can the monitor detect an approaching undependable state (i.e., the edge of the envelope) in time to prevent it?

Metric-d.1: what is the lead time between the monitor's detection and the artifact's actual entry into the undependable state?

Metric-d.2: in cases where some lead time is given, in what percentage of cases can the operational staff avoid entering the undependable state? (This can be tested at least somewhat independently from the actual monitor. Simulated monitoring warnings can be employed to gather this data.)
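Metrics d.1 and d.2 could be tallied from a log of (warning time, actual-entry time) pairs, with a missing entry time meaning the staff avoided the undependable state. This is a hypothetical bookkeeping sketch; the event representation is an assumption:

```python
# Hypothetical tally for Metrics d.1/d.2: lead times between warning
# and envelope exit, plus the fraction of warned cases that were
# avoided (exit_t is None when staff prevented entry).
def lead_time_stats(events):
    """events: list of (warn_t, exit_t) pairs; exit_t None = avoided."""
    leads, avoided = [], 0
    for warn_t, exit_t in events:
        if exit_t is None:
            avoided += 1
        else:
            leads.append(exit_t - warn_t)
    avoid_rate = avoided / len(events) if events else 0.0
    return leads, avoid_rate
```

As the metric notes, the same tally works with simulated monitoring warnings, so the avoidance rate can be gathered independently of the actual monitor.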
