Optimal Control

My research focus lies in developing numerical algorithms for optimal control problems. The control input to a dynamical system can be used to steer the system in order to accomplish specific tasks, e.g. a transition from point A to point B. If such a task is to be performed optimally with respect to a given cost functional or criterion, one is faced with an optimal control problem.
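In generic form, such a problem reads as follows (the symbols are generic placeholders, not tied to any particular system):

```latex
\min_{x(\cdot),\,u(\cdot)} \int_0^T C\bigl(x(t),u(t)\bigr)\,\mathrm{d}t
\quad \text{subject to} \quad
\dot{x}(t) = f\bigl(x(t),u(t)\bigr), \qquad x(0)=x_A, \quad x(T)=x_B .
```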

Direct methods first discretize the problem and then solve the resulting constrained nonlinear optimization problem in order to approximate an optimal solution. A structure-preserving discretization for mechanical systems can be obtained by applying variational integrators to the controlled dynamics, as is done in DMOC (Discrete Mechanics and Optimal Control).
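As an illustration of the discretize-then-optimize idea (not of the DMOC discretization itself), here is a minimal direct-transcription sketch for an assumed toy system, a 1D double integrator steered from rest to rest, using a midpoint rule and a generic NLP solver:

```python
# Hypothetical illustration: direct transcription of a simple optimal control
# problem. The system, horizon, and cost are made-up assumptions; the
# discretization is a generic midpoint rule, not a variational integrator.
import numpy as np
from scipy.optimize import minimize

N, T = 20, 2.0            # number of intervals and final time (assumed)
h = T / N                 # time step
nx, nu = 2, 1             # state (position, velocity) and control dimensions

def unpack(z):
    """Split the flat decision vector into state and control trajectories."""
    x = z[: (N + 1) * nx].reshape(N + 1, nx)
    u = z[(N + 1) * nx:].reshape(N, nu)
    return x, u

def dynamics(x, u):
    """Double integrator: position' = velocity, velocity' = control."""
    return np.array([x[1], u[0]])

def cost(z):
    """Discretized control effort: sum of h * u_k^2."""
    _, u = unpack(z)
    return h * np.sum(u ** 2)

def defects(z):
    """Midpoint-rule defect constraints, one per interval (must vanish)."""
    x, u = unpack(z)
    d = []
    for k in range(N):
        xm = 0.5 * (x[k] + x[k + 1])
        d.append(x[k + 1] - x[k] - h * dynamics(xm, u[k]))
    return np.concatenate(d)

x_A, x_B = np.array([0.0, 0.0]), np.array([1.0, 0.0])   # rest-to-rest transfer

def boundary(z):
    x, _ = unpack(z)
    return np.concatenate([x[0] - x_A, x[-1] - x_B])

z0 = np.zeros((N + 1) * nx + N * nu)    # trivial initial guess
res = minimize(
    cost, z0, method="SLSQP",
    constraints=[{"type": "eq", "fun": defects},
                 {"type": "eq", "fun": boundary}],
)
x_opt, u_opt = unpack(res.x)
print("converged:", res.success, " cost:", res.fun)
```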

[Figure: motion primitives of a spherical pendulum]

Solution methods for high-dimensional constrained nonlinear optimization problems typically work locally. Therefore, good initial guesses are required to ensure convergence to a "good" local optimum. Such an initial guess can be obtained from a sequence of motion primitives. The figure above shows examples of motion primitives for a spherical pendulum: purely horizontal rotations (trim primitives, or relative equilibria, which are generated by the system's symmetry properties), vertical swing-ups, which are uncontrolled motions on the (un)stable manifolds, and a number of control maneuvers which connect the vertical and horizontal primitives. The manifold primitives play an important role for energy-efficient solutions, since they show how to exploit the natural system dynamics.
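A minimal sketch of how such a primitive sequence could be assembled into an initial guess; the class names, fields, and the simplification of holding the trim state at a representative point are illustrative assumptions, not the actual planning code:

```python
# Hypothetical sketch: trims are parameterized by their (free) duration,
# maneuvers carry precomputed state/control trajectories, and the sequence is
# stacked into one (t, x, u) trajectory that seeds the optimizer.
from dataclasses import dataclass
import numpy as np

@dataclass
class Trim:
    """Steady motion generated by a symmetry (e.g. a horizontal rotation)."""
    x0: np.ndarray        # representative state on the trim
    u_const: np.ndarray   # constant control that sustains it
    duration: float       # free parameter chosen by the planner

@dataclass
class Maneuver:
    """Precomputed controlled motion connecting two trims."""
    t: np.ndarray         # time grid
    x: np.ndarray         # state trajectory, shape (len(t), nx)
    u: np.ndarray         # control trajectory, shape (len(t), nu)

def concatenate(sequence, dt=0.05):
    """Stack a trim/maneuver sequence into one (t, x, u) initial guess."""
    ts, xs, us = [], [], []
    t_offset = 0.0
    for p in sequence:
        if isinstance(p, Trim):
            t = np.arange(0.0, p.duration, dt)
            # Simplification: hold the representative state; a real trim
            # evolves along the group orbit of the symmetry.
            x = np.tile(p.x0, (len(t), 1))
            u = np.tile(p.u_const, (len(t), 1))
        else:  # Maneuver
            t, x, u = p.t, p.x, p.u
        ts.append(t + t_offset)
        xs.append(x)
        us.append(u)
        t_offset = ts[-1][-1] + dt
    return np.concatenate(ts), np.vstack(xs), np.vstack(us)
```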

In my method of motion planning with motion primitives, the primitive sequences form hybrid admissible solutions to the control problem: while the state trajectories remain smooth, the control strategy switches between uncontrolled, constantly controlled, and fully controlled phases. The method can also be extended to the optimal control of hybrid systems.
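A rough sketch of what such a switched control law might look like, with made-up phase boundaries and a scalar control; all names and numbers are placeholders:

```python
# Hypothetical illustration of a hybrid control strategy: the control law
# switches between uncontrolled, constantly controlled (trim), and fully
# controlled (maneuver) phases, while the state trajectory stays smooth.
import numpy as np

def hybrid_control(t, phases):
    """Evaluate the switched (scalar) control law at time t.

    phases: list of (t_start, t_end, kind, data) with kind in
    {"uncontrolled", "trim", "maneuver"}.
    """
    for t0, t1, kind, data in phases:
        if t0 <= t < t1:
            if kind == "uncontrolled":   # swing-up on the (un)stable manifold
                return 0.0
            if kind == "trim":           # constant control sustaining the trim
                return float(data)
            tau, u = data                # maneuver: interpolate stored profile
            return float(np.interp(t, tau, u))
    return 0.0                           # outside the planned horizon

# Example schedule: uncontrolled swing-up, a connecting maneuver, then a
# constantly controlled horizontal trim (all numbers are invented).
phases = [
    (0.0, 2.0, "uncontrolled", None),
    (2.0, 3.0, "maneuver",
     (np.linspace(2.0, 3.0, 11), np.sin(np.linspace(0.0, np.pi, 11)))),
    (3.0, 6.0, "trim", 0.3),
]
print([hybrid_control(t, phases) for t in (1.0, 2.5, 4.0)])
```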