Research

Cyber-Physical Systems Integration Lab
The Cyber-Physical Systems Integration Lab studies cyber-physical systems built around robotics and computing. A cyber-physical system tightly couples physical devices, such as robots and sensors, with intelligent software that interprets their data, makes decisions, and sends commands back to the devices. Because the hardware and software act as one, the system can perceive what is happening around it and adapt its behavior immediately.
Cyber-physical systems sit at the crossroads of computer engineering, electronics, and artificial intelligence, so research in this area naturally sparks new ideas that none of the fields could create alone. One well-known example is a self-driving robot that collects live information, uses AI to choose the safest route and speed, and then moves without human help. In smart factories, industrial robot arms watch production data, rearrange tasks on their own, and cut down defect rates. The same core technology supports modern defense tools, letting drones and unmanned vehicles carry out missions more safely and efficiently. At CPSI Lab, we dig into these topics to open fresh paths for connected, intelligent machines that can sense, think, and act in the real world.
Architecture of a Cyber-Physical System
The heart of a cyber-physical system (CPS) is the seamless, optimized union of independently developed pieces (sensors, control logic, AI modules) and a layered communication stack on a single platform. As Physical AI spreads, that union has grown even more intricate: one faulty module or design flaw can ripple through the whole system, leading to delays, lower safety, or unexpected breakdowns. AI must therefore deliver performance that outclasses older designs and arrive in a form that is safe and ready for field use.
Our lab tackles both sides of the problem. We run theory-driven integration studies while applying Physical AI on real robot platforms, producing CPS designs that stay reliable. Building on this foundation, we aim to create intelligent robotic systems that satisfy strict real-time demands in many application domains. The specific research topics that follow explain how we do it.
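The sense-decide-act loop at the core of a CPS can be sketched in plain Python. This is an illustrative toy, not code from any real platform: the robot, sensor, and controller names below are our own, and the scenario (a robot that must stop before an obstacle) is chosen only to show how a safety constraint dominates the control decision.

```python
# Toy sense -> decide -> act loop: a robot slows down as it nears an obstacle.
# All names (Robot, read_distance, etc.) are illustrative, not a real API.

def decide(distance_m: float, max_speed: float = 1.0, stop_at: float = 0.5) -> float:
    """Map the sensed obstacle distance to a commanded speed.

    Full speed when far away, proportional slowdown as the robot
    approaches, and a hard stop inside the safety margin -- the
    "guarantee that bad things never happen" side of a CPS.
    """
    if distance_m <= stop_at:
        return 0.0                      # safety constraint dominates
    return min(max_speed, max_speed * (distance_m - stop_at))

class Robot:
    """Minimal physical plant: position advances by the commanded speed."""
    def __init__(self, obstacle_at: float = 5.0):
        self.position = 0.0
        self.obstacle_at = obstacle_at

    def read_distance(self) -> float:   # sensing
        return self.obstacle_at - self.position

    def apply_speed(self, speed: float, dt: float = 0.1) -> None:  # actuation
        self.position += speed * dt

robot = Robot()
for _ in range(200):                    # closed loop: sense -> decide -> act
    robot.apply_speed(decide(robot.read_distance()))

# The robot converges to the safety margin instead of hitting the obstacle.
```

The point of the sketch is the loop structure, not the controller itself: in a deployed CPS each of the three stages is a separately developed component, and the integration between them is where faults propagate.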
Cyber-Physical Artificial Intelligence (Cyber-Physical AI)
We were the first to propose the idea of Cyber-Physical AI (CPAI). Classic AI tries to “maximize good outcomes”, while a cyber-physical system must “guarantee that bad things never happen”. To merge these two worlds, we need AI that can control its own uncertainties. On top of that, the robots and sensors inside a CPS have tight limits on power, memory, and computing, so they cannot give the AI unlimited data or processing time. CPAI offers a clear framework for handling every risk and constraint, from saving energy to preventing fatal errors, so that physical AI can join a CPS safely and efficiently.
Building on this concept, our lab designs physical AI that works reliably in the real world. During training, we tackle skewed, scarce, or weakly labeled data. During inference, we deal with shifting data patterns, prediction uncertainty, and missing inputs. We also study the hard barriers to deploying physical AI, such as limited compute, bandwidth, and security gaps, and craft lightweight models plus robust safety mechanisms to overcome them. Through this work, we aim to turn physical AI into an everyday, trusted technology.
Concept of Cyber-Physical Artificial Intelligence
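One concrete pattern behind "AI that can control its own uncertainties" is gating a prediction on its own confidence: the system acts on the model only when confidence clears a threshold, and otherwise falls back to a conservative default. The sketch below is our illustration of that pattern only; the stand-in model, the threshold value, and all names are hypothetical, not a fixed CPAI interface.

```python
# Confidence-gated inference: act on the model's output only when it is
# sure enough, otherwise fall back to a safe default action.
# model_predict is a stand-in for a learned policy, not a real model.

SAFE_ACTION = "stop"

def model_predict(features: dict) -> tuple:
    """Stand-in for a learned policy: returns (action, confidence).

    A real system would run a neural network here; we fake two regimes
    so the gate below has something to react to.
    """
    if features.get("visibility", 1.0) < 0.3:
        return ("go", 0.40)   # degraded input -> low confidence
    return ("go", 0.95)

def gated_action(features: dict, threshold: float = 0.8) -> str:
    """Accept the model's action only above the confidence threshold."""
    action, confidence = model_predict(features)
    if confidence < threshold:
        return SAFE_ACTION    # "bad things never happen" wins over "good outcomes"
    return action

clear = gated_action({"visibility": 0.9})   # high confidence -> model acts
foggy = gated_action({"visibility": 0.1})   # low confidence -> safe fallback
```

The threshold encodes the CPAI trade-off directly: raising it buys safety at the cost of more fallbacks, which in turn costs energy and time, exactly the kind of constraint budget the framework is meant to manage.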
Robot Operating System 2 (ROS 2)
ROS 2 is a robot-specific software layer that has become the de facto standard platform in robotics and industry. By uniting sensing, control, and AI modules under one roof, it lets mobile robots, drones, and collaborative arms share data and awareness in real time. Thanks to this common language, machines with very different shapes can work together naturally in the same environment.
Our lab goes beyond simply using ROS 2. We study its inner workings with mathematical tools and model it so that it stays stable even in crowded, multi-robot settings. By mapping its complex layers, we fix the performance drops that often appear in real projects and create practical guidelines for building robot networks tailored to Physical AI. The result is a next-generation, ROS 2-based system that keeps consistent speed and high trustworthiness on factory floors and other industrial sites.
Robot Operating System Architecture Diagram
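The "common language" of ROS 2 is its topic-based publish/subscribe model: nodes publish messages to named topics and any number of other nodes subscribe to them. The real API is rclpy/rclcpp running over a DDS middleware; as a dependency-free sketch of the communication pattern only (the class and method names below are ours, not ROS 2 identifiers), a minimal in-process topic bus looks like this:

```python
# Minimal in-process publish/subscribe bus illustrating the ROS 2 topic
# model. This is NOT the rclpy API -- real ROS 2 nodes use rclpy/rclcpp
# over DDS -- it only shows the pattern in stdlib Python.

from collections import defaultdict
from typing import Any, Callable

class TopicBus:
    """Routes each published message to every subscriber of its topic."""
    def __init__(self) -> None:
        self._subscribers: dict = defaultdict(list)

    def subscribe(self, topic: str, callback: Callable[[Any], None]) -> None:
        self._subscribers[topic].append(callback)

    def publish(self, topic: str, message: Any) -> None:
        for callback in self._subscribers[topic]:
            callback(message)

bus = TopicBus()
received = []

# A "sensor node" publishes laser scans; a "planner node" subscribes.
bus.subscribe("/scan", lambda msg: received.append(msg))
bus.publish("/scan", {"ranges": [1.2, 0.8, 2.5]})
```

The decoupling is the point: the publisher never names its consumers, which is what lets robots of very different shapes plug into the same data streams. What the sketch omits, and what our modeling work targets, is everything DDS adds underneath: quality-of-service policies, discovery, and the network behavior that degrades in crowded multi-robot settings.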
Autonomous Mobile Robot (AMR)
An autonomous mobile robot is a next-generation machine that uses high-grade sensors and AI to see its surroundings in real time, plan a path on the spot, and finish its mission with no human help. Because it can dodge complex obstacles and reach its goal safely, industry is adopting it for logistics automation and process optimization, while defense groups use it for supply runs, reconnaissance, and exploring hazardous zones.
Our lab works to raise the performance and reliability of ROS 2-based autonomous mobile robots by applying Physical AI across the entire robot pipeline: task allocation → mapping → localization → navigation → control. Concretely, we are developing (1) a reinforcement-learning scheduler that shares jobs among multiple robots, (2) a lifelong-learning map that stays accurate as the workspace changes, (3) an adaptive localization method that keeps position error tiny even in harsh settings, (4) real-time path planning that accounts for moving obstacles, and (5) ultra-low-latency, high-reliability remote control. Through these efforts, we lead the way toward autonomous mobile robots that factories and defense teams can deploy with confidence today.
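Of the pipeline stages above, path planning is the easiest to make concrete. As a toy sketch only (production AMR planners, such as ROS 2's Nav2, plan on costmaps with robot kinematics and the moving obstacles mentioned above), breadth-first search on a small occupancy grid already shows the core idea of finding a shortest obstacle-free route:

```python
# Toy grid path planner: breadth-first search over a 4-connected occupancy
# grid, where 0 = free cell and 1 = obstacle. This only illustrates the
# search idea; real AMR stacks handle costmaps, kinematics, and replanning.

from collections import deque

def plan_path(grid, start, goal):
    """Return a shortest list of (row, col) cells from start to goal,
    or None if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}           # also serves as the visited set
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            path = []
            while cell is not None:     # walk parent links back to start
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from):
                came_from[(nr, nc)] = (r, c)
                frontier.append((nr, nc))
    return None                         # goal unreachable

grid = [[0, 0, 0],
        [1, 1, 0],   # a wall forces a detour through the right column
        [0, 0, 0]]
path = plan_path(grid, (0, 0), (2, 0))
```

Because BFS expands cells in order of distance from the start, the first time it reaches the goal the reconstructed path is guaranteed shortest; here that means routing around the wall in seven cells rather than failing on the blocked direct route.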