Andrew Singletary

Why is controlling cyber-physical systems safely so hard?

Updated: Jun 14, 2023


Computers have been around for a while now. What it takes to develop reliable software is fairly well understood. The same cannot be said for developing reliable robots. Why is that? Isn’t robotics just AI?


Yes and no. AI is often just a fancy umbrella term for the various components of a robot autonomy stack, mainly Perception and Control (see and act). Perception is nowadays dominated by machine-learning approaches, but control is very much dominated by first-principles approaches. Did you know that Boston Dynamics’ Atlas didn’t use any “Machine Learning” in its autonomy stack for a long time?


Human beings and robots alike grapple with similar issues when attempting to understand the world around them. And as with humans, for robots the core challenge comes down to control, not perception, of the physical environment. Read on for our take on how to think about the safety of cyber-physical systems, and how to approach defining a “fundamentally safe” set.



Why the Main Challenge is Control (not Perception)


We argue that the main challenge behind CPS safety is related to control. One may interject that before figuring out what to do, one must first detect what’s happening, and that therefore perception is, if not the main, at least the first challenge associated with CPS safety.


By arguing the latter point, one treats safety (perhaps without realizing it) as a soft constraint. Not seeing something is only a problem because, in such safety frameworks, the robot avoids only what it sees. System safety then becomes contingent on seeing everything, which is a fundamentally flawed approach: it amounts to trying to plug an infinite number of holes and edge cases in the perception stack.


Such an approach is unsatisfactory in practice, and is recognized as such by experts like Phil Koopman. Customers consider a lack of safety a deal breaker – see how many robotics startups fail! – and safety must therefore be handled as a hard constraint, in which case we have to look at the problem differently.

In particular, we have to consider the uncertainty and incompleteness of perception as a fundamental input to the problem we’re solving; not a temporary gap in our current technology that we’ll eventually fill.


The Math of Cyber-physical System Safety


The challenge behind CPS safety is therefore answering the following question: “How should the AI controlling the robot act in light of the incomplete and uncertain knowledge of the state of the robot, its environment, and the physical capabilities of the robot?”


In this context, effort made at the perception level only increases the performance of the system – which tasks it can perform and how fast – but not how safely it performs them.


In order to answer that question rigorously, we need to abstract the problem into the world of math (just as Mobileye has done for its automated vehicles). First, we need to define the set of safe behaviors. This set is defined over the state space of the system, the space of all possible states the system can be in. It is typically constructed by taking the complement of the unsafe set, so that all bad behaviors are excluded.
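As a sketch (the notation here is ours, not taken from the references), with X the state space and X_unsafe the set of unsafe states, the safe set is the complement, which is often encoded as the zero-superlevel set of a scalar margin function h:

    \mathcal{S} \;=\; \mathcal{X} \setminus \mathcal{X}_{\mathrm{unsafe}} \;=\; \{\, x \in \mathcal{X} : h(x) \ge 0 \,\}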


Then, we must ensure that the CPS stays inside that set. The earliest mathematical result addressing this was proven by Mitio Nagumo in 1942, and is known as the sub-tangentiality condition, see [2]. The broader mathematical framework is called “Set Invariance” [0, 1], which simply means that the system must stay in that set for all time.
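Informally, Nagumo’s condition says that on the boundary of the safe set the system must not be moving out of it. In the notation above, for a sufficiently smooth margin function h, one way to write it is:

    \dot{h}(x) \;=\; \nabla h(x) \cdot \dot{x} \;\ge\; 0 \quad \text{whenever } h(x) = 0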


To satisfy this sub-tangentiality condition, we must influence the behavior of the system through its control inputs, which allow us to directly (or indirectly) steer the system. We need to know how these inputs affect the behavior of our system, commonly known as the system’s dynamics: ẋ = f(x, u) (see [0] for more detail on models of systems).
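To make this concrete, here is a minimal Python sketch of one possible f(x, u): a longitudinal car-following model in which the state is the gap to the lead vehicle and the ego velocity, and the input is the ego acceleration. The model and all its parameters are illustrative assumptions, not something taken from the references above.

    import numpy as np

    def f(x, u, v_lead=20.0):
        """Illustrative longitudinal dynamics x_dot = f(x, u).

        State x = [gap, v]:
            gap : distance to the lead vehicle (m)
            v   : ego velocity (m/s)
        Input u: commanded ego acceleration (m/s^2).
        v_lead is an assumed constant lead-vehicle speed.
        """
        gap, v = x
        gap_dot = v_lead - v   # the gap shrinks when we are faster than the lead
        v_dot = u              # acceleration is commanded directly
        return np.array([gap_dot, v_dot])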


Very quickly, one will realize that there are many parts of the defined “safe set” that are not truly safe. More formally, the “safe set” as defined is almost never an invariant set, which means it cannot directly be used with the sub-tangentiality condition.


This is illustrated in the example below. The truck is safe if it maintains a certain follow distance from the van in front of it. But distance alone is not a sufficient safety metric: if the truck is traveling at 70 mph only 10 feet behind the van, it will not be able to stop in time.


How do you compute a safe follow distance and velocity?


The safe follow distance must be a function of the velocity and braking power of the vehicles.
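As a rough sketch, with assumed illustrative numbers for braking deceleration and reaction time (not values from this post), the safe follow distance can be computed from the ego velocity and braking capability, and safety checked as the actual gap minus that bound:

    def safe_follow_distance(v, a_brake=6.0, t_react=0.5):
        """Minimum safe gap (m) at ego velocity v (m/s).

        Conservative worst case assumed here: the lead vehicle stops
        instantly, so we need reaction distance plus full braking distance.
        a_brake: assumed achievable braking deceleration (m/s^2).
        t_react: assumed reaction/processing delay (s).
        """
        return v * t_react + v**2 / (2.0 * a_brake)

    def h(gap, v):
        """Safety margin: h >= 0 means the (gap, velocity) pair is acceptable."""
        return gap - safe_follow_distance(v)

    # 70 mph ≈ 31.3 m/s, 10 ft ≈ 3 m: clearly unsafe
    print(h(3.0, 31.3))   # ≈ -94, i.e. far outside the truly safe set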


Defining a “Fundamentally Safe” Set


Stated differently, an arbitrary safety set for physical systems is fundamentally unsafe. Therefore, we need to take into account what the system is capable of to define a fundamentally safe set: a control invariant set. This is not surprising either; how could we figure out what to have the system do if we don't account for what the system is capable of doing?


We need to figure out what subset of the safe set is control invariant. These subsets are notoriously difficult to compute [2], as computing them requires searching over all possible trajectories over an infinite time horizon. However, there are several approaches to this problem with varying degrees of conservatism and practicality.
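One widely used construction from the set-invariance literature (not spelled out in this post, so take this as our sketch rather than a description of 3Laws’ actual method) is a control-barrier-function-style filter: pick an input that keeps the time derivative of the margin h above a threshold that shrinks as the system approaches the boundary, i.e. ḣ ≥ -α·h. For the illustrative car-following model above, with assumed gains and input bounds:

    def safety_filter(x, u_des, alpha=1.0, a_brake=6.0, t_react=0.5,
                      v_lead=20.0, u_min=-6.0, u_max=2.0):
        """Minimally modify a desired acceleration so that h_dot >= -alpha * h.

        All parameters are assumed, illustrative values; x = [gap, v].
        """
        gap, v = x
        margin = gap - (v * t_react + v**2 / (2.0 * a_brake))
        # For this model, h_dot = (v_lead - v) - (t_react + v / a_brake) * u,
        # so h_dot >= -alpha * margin gives an upper bound on the acceleration u.
        u_bound = (alpha * margin + (v_lead - v)) / (t_react + v / a_brake)
        u_safe = min(u_des, u_bound)
        return max(u_min, min(u_safe, u_max))

The filter leaves the desired input untouched deep inside the safe set and only intervenes (by braking) near the boundary. If u_bound ever falls below the physical braking limit u_min, the condition cannot be met – which is exactly why the original safe set must first be shrunk to a control-invariant subset.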


In such a framework, any claim we make on the behavior of the system is only as good as the correctness of our uncertainty characterizations. The main sources of uncertainty to consider come from dynamics modeling, state estimation, safety set perception, input approximation, digital discretization of signals, and communication delays. What’s hard is to:

  1. Characterize these uncertainties rigorously

  2. Propagate them rigorously through our mathematical safety conditions

Both of these present many challenges. Take the truck example from before: as soon as we add uncertain road conditions, it becomes even more difficult to ensure the safety of the truck, because the uncertainty in road conditions must be propagated through the formulation of the safe set.
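As a sketch of what “propagating” the uncertainty means here (again with illustrative, assumed numbers), one crude but rigorous option is to evaluate the safe follow distance at the worst-case tire-road friction the road might offer:

    G = 9.81  # gravitational acceleration, m/s^2

    def robust_safe_follow_distance(v, mu_range=(0.3, 0.9), t_react=0.5):
        """Safe gap (m) when the friction coefficient is only known to lie in
        mu_range (assumed interval spanning wet to dry asphalt)."""
        mu_min, _ = mu_range
        a_brake_worst = mu_min * G   # worst-case achievable deceleration
        return v * t_react + v**2 / (2.0 * a_brake_worst)

    # At 31.3 m/s (~70 mph):
    #   safe_follow_distance(31.3)        ≈  97 m  (assumed dry-road a_brake = 6 m/s^2)
    #   robust_safe_follow_distance(31.3) ≈ 182 m  (worst-case mu = 0.3)

More refined uncertainty characterizations tighten this gap, but they all have to flow through the safety condition in the same way.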


How does this change with the road conditions?


A 3Laws Future


While these problems remain unsolved in general, 3Laws is working to develop the least conservative, most general approach to uncertainty estimation. In future blog posts, we will discuss how each of these challenges is tackled.

