Chapter 1.9: ROS 2 Tooling Ecosystem
Learning Objectives
By the end of this chapter, you will be able to:
- Use core ROS 2 command-line tools for system inspection and debugging
- Navigate and visualize complex robotic systems using RViz2 and RQT
- Employ debugging and profiling tools for performance analysis
- Utilize development tools for package management, testing, and documentation
- Leverage diagnostic tools for system health monitoring
- Apply visualization tools for sensor data, robot state, and AI behavior
- Understand the role of each tool in the Physical AI development workflow
- Create custom visualization and debugging tools for specific applications
Introduction
ROS 2 provides a comprehensive ecosystem of development, debugging, and visualization tools that are essential for building, testing, and maintaining complex Physical AI systems. These tools serve as the interface between the developer and the complex robotic system, making it possible to understand, debug, and optimize the robot's behavior.
In a Physical AI context, these tools take on particular importance. When developing a humanoid robot, you need to visualize sensor data from multiple cameras and LiDAR units, monitor the robot's internal state (joint angles, balance control), inspect AI decision-making processes, and verify that the robot behaves correctly in complex scenarios. The tooling ecosystem provides windows into the robot's "mind" and "body", making transparent what would otherwise be invisible internal processes.
The ROS 2 tooling ecosystem can be categorized into several areas:
- Command-line tools: Core ROS 2 commands for introspection and control
- Visualization tools: RViz2 for 3D visualization, RQT for Qt-based interfaces
- Debugging and analysis: Tools for monitoring performance and diagnosing issues
- Development utilities: Tools for package creation, testing, and documentation
- Diagnostic tools: Specialized tools for system health monitoring
As we explore these tools, we'll see how they enable rapid iteration and development of Physical AI systems. For example, RViz2 allows us to visualize in real-time how our AI perception system detects objects and how our planning system computes paths. Debugging tools help us identify bottlenecks in our AI reasoning system. The command-line tools let us introspect the complex message flows in our system.
In this chapter, we'll explore each category of tools with practical examples relevant to Physical AI applications, demonstrating how to use them effectively in the context of humanoid robotics development.
1. Command-Line Tools
Core ROS 2 Commands
The ROS 2 command-line interface (CLI) provides powerful tools for inspecting and controlling your robotic system. These commands are essential for understanding what's happening in your robot without needing to create custom interfaces.
Node Inspection:
# List all running nodes
ros2 node list
# Get information about a specific node
ros2 node info /physical_ai_robot/camera_driver
# List the parameters of a running node
ros2 param list /physical_ai_robot/camera_driver
Topic Inspection:
# List all topics
ros2 topic list
# Get information about a topic
ros2 topic info /front_cam/image_raw
# Echo messages from a topic (like watching a live feed);
# --no-arr suppresses large array fields such as image data
ros2 topic echo --no-arr /front_cam/image_raw
# Echo with custom QoS (for sensor data)
ros2 topic echo --qos-profile sensor_data /front_cam/image_raw
# Measure the publish rate of a topic (messages per second)
ros2 topic hz /front_cam/image_raw
# Show message type details
ros2 interface show sensor_msgs/msg/Image
Service Inspection:
# List all services
ros2 service list
# Find all services of a given type
ros2 service find nav_msgs/srv/GetPlan
# Call a service directly from the command line
ros2 service call /physical_ai_robot/navigation/get_path nav_msgs/srv/GetPlan "{start: {header: {frame_id: 'map'}, pose: {position: {x: 0.0, y: 0.0, z: 0.0}, orientation: {x: 0.0, y: 0.0, z: 0.0, w: 1.0}}}, goal: {header: {frame_id: 'map'}, pose: {position: {x: 1.0, y: 1.0, z: 0.0}, orientation: {x: 0.0, y: 0.0, z: 0.0, w: 1.0}}}}"
# Get service type
ros2 service type /physical_ai_robot/navigation/get_path
Parameter Management:
# Get all parameters for a node
ros2 param list /physical_ai_robot/camera_driver
# Get value of a specific parameter
ros2 param get /physical_ai_robot/camera_driver camera_name
# Set a parameter value (can be changed at runtime)
ros2 param set /physical_ai_robot/camera_driver frame_rate 60.0
# Dump all parameters to a file (for reproduction)
ros2 param dump /physical_ai_robot/camera_driver > camera_params.yaml
Action Inspection:
# List all actions (for long-running tasks)
ros2 action list
# Get information about an action
ros2 action info /physical_ai_robot/navigation/navigate_to_pose
# Send a goal to an action server (NavigateToPose is defined in nav2_msgs)
ros2 action send_goal /physical_ai_robot/navigation/navigate_to_pose nav2_msgs/action/NavigateToPose "{pose: {header: {frame_id: 'map'}, pose: {position: {x: 1.0, y: 1.0, z: 0.0}, orientation: {x: 0.0, y: 0.0, z: 0.0, w: 1.0}}}, behavior_tree: ''}"
Advanced Command-Line Operations
System Monitoring:
# Check system health and connectivity
ros2 doctor
# Record bag files for later playback and analysis
ros2 bag record /front_cam/image_raw /imu/data /physical_ai_robot/joint_states --output my_session
# Play back recorded bag files for testing and development
ros2 bag play my_session
# Launch diagnostic monitor
ros2 run diagnostic_aggregator aggregator_node
Package Management:
# List all installed packages
ros2 pkg list
# Show a package's manifest (description, dependencies)
ros2 pkg xml physical_ai_perception
# Find executables in a package
ros2 pkg executables physical_ai_perception
# Find launch files in an installed package's share directory
find "$(ros2 pkg prefix physical_ai_perception)/share/physical_ai_perception" -name "*.launch.py"
Message and Service Exploration:
# Get detailed message definition
ros2 interface show std_msgs/msg/String
# Show all available message types
ros2 interface list
# Show the interfaces (messages, services, actions) defined by a specific package
ros2 interface package sensor_msgs
Command-Line Tools for Physical AI Applications
When working with Physical AI systems, you'll frequently use these commands:
Real-time Monitoring:
- `ros2 topic hz` to monitor sensor data rates
- `ros2 run tf2_tools view_frames` to visualize the robot's coordinate frame tree
- `ros2 topic delay` and `ros2 topic bw` to analyze network performance
AI System Debugging:
- `ros2 topic echo` to inspect AI system inputs and outputs
- `ros2 action list` to see active planning and manipulation tasks
- `ros2 param list` to verify AI model configurations
Performance Analysis:
- `ros2 doctor` to check system health
- `ros2 bag record` to capture data for offline analysis
- `ros2 lifecycle` to manage node lifecycles in complex systems
The command-line tools can be combined with Unix utilities for powerful analysis. For example, to count how many messages arrive on a topic in a ten-second window (`---` separates messages in echo output): `timeout 10 ros2 topic echo /topic_name | grep -c -- '---'`
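Under the hood, `ros2 topic hz` keeps a sliding window of message arrival times and reports statistics over it. The following is a minimal pure-Python sketch of that computation, useful when you want the same numbers inside your own monitoring code; the function name and window size are illustrative, not part of any ROS API:

```python
from collections import deque

def publish_rate_stats(arrival_times, window=100):
    """Compute rate statistics from message arrival timestamps (seconds),
    mirroring what `ros2 topic hz` reports for its sliding window."""
    recent = deque(arrival_times, maxlen=window)
    if len(recent) < 2:
        return None  # need at least two arrivals to form an interval
    ordered = list(recent)
    intervals = [b - a for a, b in zip(ordered, ordered[1:])]
    mean = sum(intervals) / len(intervals)
    return {
        "rate_hz": 1.0 / mean,           # average publish rate
        "min_interval_s": min(intervals),
        "max_interval_s": max(intervals),
    }

# A steady 30 Hz camera stream: timestamps 1/30 s apart
stamps = [i / 30.0 for i in range(31)]
stats = publish_rate_stats(stamps)  # rate_hz is approximately 30.0
```

The same sliding-window idea generalizes to the latency and bandwidth measurements discussed later in this chapter.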
2. Visualization Tools
RViz2: The 3D Visualization Tool
RViz2 is the primary visualization tool for ROS 2, providing a 3D environment where you can visualize robot models, sensor data, AI planning results, and more. For Physical AI systems, RViz2 is invaluable for understanding what your robot perceives and how its AI systems interpret that information.
Setting Up RViz2 for Physical AI Systems:
RViz2 configurations are stored in .rviz files, which define the display panels, visualization elements, and topic subscriptions. Here's how to configure RViz2 for a Physical AI humanoid robot:
- Install RViz2:
sudo apt install ros-humble-rviz2 # On Ubuntu
# Or include in your workspace dependencies
- Basic RViz2 Setup:
# Launch RViz2
rviz2
# Or launch with a specific configuration
rviz2 -d /path/to/physical_ai_config.rviz
- Essential Displays for Physical AI:
- RobotModel: Shows the robot's URDF model with current joint angles
- TF: Visualizes coordinate frames and transforms
- Image: Displays camera images from the robot
- PointCloud2: Shows LiDAR and depth camera data
- LaserScan: Displays 2D laser scanner data
- MarkerArray: Visualizes AI-generated objects, paths, waypoints
- Path: Shows planned and executed paths
- Odometry: Tracks robot position and orientation
- Camera: 3D visualization of camera views
RViz2 Configuration for Humanoid Robot:
# Example .rviz configuration snippet
Panels:
  - Class: rviz_common/Displays
    Help Height: 78
    Name: Displays
    Property Tree Widget:
      Expanded:
        - /Global Options1
        - /Status1
        - /RobotModel1
        - /TF1
        - /Camera1
        - /Path1
        - /MarkerArray1
      Splitter Ratio: 0.5
    Tree Height: 855
  - Class: rviz_common/Selection
    Name: Selection
Visualization Manager:
  Class: ""
  Displays:
    - Alpha: 0.5
      Cell Size: 1
      Class: rviz_default_plugins/Grid
      Color: 160; 160; 164
      Enabled: true
      Line Style:
        Line Width: 0.029999999329447746
        Value: Lines
      Name: Grid
      Normal Cell Count: 0
      Offset:
        X: 0
        Y: 0
        Z: 0
      Plane: XY
      Plane Cell Count: 10
      Reference Frame: <Fixed Frame>
      Value: true
    - Alpha: 1
      Class: rviz_default_plugins/RobotModel
      Collision Enabled: false
      Description File: ""
      Description Source: Topic
      Description Topic:
        Depth: 5
        Durability Policy: Volatile
        History Policy: Keep Last
        Reliability Policy: Reliable
        Value: /physical_ai_robot/robot_description
      Enabled: true
      Links:
        All Links Enabled: true
        Expand Joint Details: false
        Expand Link Details: false
        Expand Tree: false
        Link Tree Style: Links in Alphabetic Order
      Name: RobotModel
      TF Prefix: ""
      Update Interval: 0
      Value: true
    - Class: rviz_default_plugins/TF
      Enabled: true
      Frame Timeout: 15
      Frames:
        All Enabled: true
      Marker Scale: 1
      Name: TF
      Show Arrows: true
      Show Axes: true
      Show Names: false
      Tree:
        {}
      Update Interval: 0
      Value: true
    - Class: rviz_default_plugins/Path
      Alpha: 1
      Buffer Length: 1
      Color: 25; 255; 0
      Enabled: true
      Head Diameter: 0.30000001192092896
      Head Length: 0.20000000298023224
      Length: 0.30000001192092896
      Line Style: Lines
      Line Width: 0.029999999329447746
      Name: Path
      Offset:
        X: 0
        Y: 0
        Z: 0
      Pose Color: 255; 85; 255
      Pose Style: None
      Radius: 0.029999999329447746
      Shaft Diameter: 0.10000000149011612
      Shaft Length: 0.10000000149011612
      Topic:
        Depth: 5
        Durability Policy: Volatile
        Filter size: 10
        History Policy: Keep Last
        Reliability Policy: Reliable
        Value: /physical_ai_robot/planned_path
      Value: true
RQT: The Qt-Based Tool Suite
RQT is a Qt-based framework that provides various GUI tools for monitoring and debugging ROS 2 systems. Unlike RViz2 which focuses on visualization, RQT tools focus on monitoring, plotting, and debugging specific aspects of your system.
Essential RQT Plugins:
- rqt_graph: Shows the node graph - which nodes are publishing/subscribing to which topics. This is crucial for understanding the data flow in your Physical AI system.
# Launch the node graph
rqt_graph
- rqt_plot: Plots numeric values from topics over time. Useful for monitoring sensor values, control signals, or any time-series data.
# Plot joint positions over time
rqt_plot /physical_ai_robot/joint_states/position[0] /physical_ai_robot/joint_states/position[1]
- rqt_console: Monitors ROS 2 log messages from all nodes. Critical for debugging errors and warnings.
# Monitor all log messages
rqt_console
- rqt_bag: Records and plays back bag files with a graphical interface, making it easier to work with recorded data.
- rqt_topic: Shows all published topics and their values in real-time, similar to `ros2 topic echo` but with a GUI.
- rqt_service_caller: Allows you to call services through a GUI instead of the command line.
- rqt_reconfigure: Provides a GUI for dynamically reconfiguring parameters during runtime.
Creating Custom RQT Views:
You can save RQT layouts to capture the exact arrangement of plugins you need for your Physical AI work:
# Save the current layout from within RQT via the Perspectives → Export... menu
# (there is no command-line flag for saving a perspective)
# Load a saved layout
rqt --force-discover --perspective-file ./physical_ai_debug_view.perspective
RQT for Physical AI Workflows:
- Use `rqt_graph` to verify that your AI perception nodes are properly connected to your planning nodes
- Use `rqt_plot` to monitor sensor values during robot operation
- Use `rqt_console` to track AI system logs and debugging information
- Use `rqt_reconfigure` to tune AI model parameters during live operation
3. Debugging and Analysis Tools
Performance Analysis Tools
Performance is critical in Physical AI systems where real-time responses are often required for safety and functionality.
ros2 doctor: Performs basic system health checks.
ros2 doctor
ros2 run tf2_tools view_frames: Creates a PDF showing all coordinate frames in your system.
# Visualize the transform tree
ros2 run tf2_tools view_frames
evince frames.pdf # View the generated PDF (newer versions timestamp the filename)
ros2 topic delay and bandwidth: Analyze communication performance.
# Monitor delay of messages on a topic
ros2 topic delay /front_cam/image_raw
# Monitor bandwidth usage of a topic
ros2 topic bw /physical_ai_robot/joint_states
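`ros2 topic bw` derives its numbers from message sizes and arrival times. Here is a small pure-Python sketch of that calculation, handy when you want to estimate bandwidth from recorded data; it illustrates the arithmetic only and is not the tool's actual implementation:

```python
def bandwidth_stats(samples):
    """Estimate topic bandwidth from (arrival_time_s, message_bytes) samples,
    the same inputs `ros2 topic bw` works from."""
    if len(samples) < 2:
        return None  # need a time window to divide by
    elapsed = samples[-1][0] - samples[0][0]
    if elapsed <= 0:
        return None
    total_bytes = sum(size for _, size in samples)
    return {
        "bytes_per_sec": total_bytes / elapsed,   # average over the window
        "mean_msg_bytes": total_bytes / len(samples),
    }

# Ten 921,600-byte images (640x480x3, rgb8) arriving at 30 Hz
samples = [(i / 30.0, 640 * 480 * 3) for i in range(10)]
stats = bandwidth_stats(samples)  # roughly 3.07e7 bytes/s over this window
```

Numbers like these make it obvious why raw camera topics dominate network load and why compressed image transports matter on a real robot.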
Profiling with ROS 2 Tracing: For detailed performance analysis, use the ros2_tracing tools:
# Install the tracing command (pulls in the tracetools instrumentation)
sudo apt install ros-humble-ros2trace
# Record traces of system execution
ros2 trace --session-name my_trace_session
# Analyze traces with Trace Compass
# Or use command line tools to extract information
Debugging with GDB and Valgrind
For low-level debugging of C++ nodes:
# Attach GDB to a running node
ros2 run --prefix 'gdb -ex run --args' package_name node_name
# Debug with Valgrind for memory issues
ros2 run --prefix 'valgrind --tool=memcheck' package_name node_name
Memory and Resource Monitoring
Monitor resource usage of your Physical AI system:
# Use system tools with ROS 2
htop
# Look for processes related to your robot nodes
# There is no dedicated ROS 2 resource monitor; key standard Linux tools
# to a node's process instead, e.g.:
top -p $(pgrep -f camera_driver)
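A node can also self-report its resource usage with Python's standard library and expose the values as diagnostics. A minimal ROS-independent sketch using `resource.getrusage` (the function and field names here are illustrative):

```python
import resource
import sys

def process_usage():
    """Return CPU time and peak memory of the current process, suitable
    for embedding in a DiagnosticStatus key/value list."""
    usage = resource.getrusage(resource.RUSAGE_SELF)
    # ru_maxrss is reported in kilobytes on Linux but bytes on macOS
    scale = 1 if sys.platform == "darwin" else 1024
    return {
        "cpu_time_s": usage.ru_utime + usage.ru_stime,  # user + system time
        "peak_rss_bytes": usage.ru_maxrss * scale,
    }

stats = process_usage()
```

Publishing these numbers once a second from a timer callback gives you a per-node resource history you can plot in rqt_plot or record with ros2 bag.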
4. Development Tools
Package Development Tools
ROS 2 provides several tools for creating, building, and managing packages:
Creating Packages:
# Create a new Python package
ros2 pkg create --build-type ament_python my_physical_ai_package --dependencies rclpy sensor_msgs geometry_msgs
# Create a new C++ package
ros2 pkg create --build-type ament_cmake my_cpp_package --dependencies rclcpp sensor_msgs geometry_msgs
# Create a launch-only package
ros2 pkg create --build-type ament_cmake my_launch_package
Building Packages:
# Build a specific package
colcon build --packages-select my_physical_ai_package
# Build with symlinks to avoid copying
colcon build --symlink-install --packages-select my_physical_ai_package
# Build with verbose output for debugging
colcon build --packages-select my_physical_ai_package --event-handlers console_direct+
# Build all packages in workspace
colcon build
Testing Tools
ROS 2 integrates with standard testing frameworks and provides additional robot-specific testing capabilities:
Running Tests:
# Run tests for a specific package
colcon test --packages-select my_physical_ai_package
# Run specific test files
colcon test --packages-select my_physical_ai_package --ctest-args -R test_my_node
# View test results
colcon test-result --verbose
Creating Tests: For Python packages, use pytest:
# test/test_my_node.py
import pytest
import rclpy
from rclpy.executors import SingleThreadedExecutor

from my_physical_ai_package.my_node import MyPhysicalAINode


@pytest.fixture(scope='module')
def node_with_executor():
    """Create a node wrapped in an executor."""
    rclpy.init()
    node = MyPhysicalAINode()
    executor = SingleThreadedExecutor()
    executor.add_node(node)
    yield node, executor
    node.destroy_node()
    rclpy.shutdown()


def test_node_initialization(node_with_executor):
    """Test that the node initializes properly."""
    node, executor = node_with_executor
    assert node is not None
    assert node.get_parameter('robot_name').value == 'physical_ai_robot'


def test_sensor_data_processing(node_with_executor):
    """Test sensor data processing functionality."""
    node, executor = node_with_executor
    # Create a test sensor message
    from sensor_msgs.msg import Image
    test_image = Image()
    test_image.width = 640
    test_image.height = 480
    test_image.encoding = 'rgb8'
    test_image.step = 640 * 3
    test_image.data = bytes(640 * 480 * 3)  # zero-filled RGB data
    # Process the image (would typically happen in a subscription callback)
    result = node.process_image(test_image)
    # Assertions
    assert result is not None
    assert isinstance(result, dict)  # expect some kind of processed data
For C++ packages, use Google Test:
// test/test_my_node.cpp
#include <gtest/gtest.h>
#include <rclcpp/rclcpp.hpp>
#include <sensor_msgs/msg/joint_state.hpp>
#include "my_cpp_package/my_node.hpp"

TEST(MyPhysicalAINodeTest, NodeInitialization) {
  auto node = std::make_shared<MyNode>();
  ASSERT_NE(nullptr, node);
  EXPECT_EQ("physical_ai_robot", node->get_parameter("robot_name").as_string());
}

TEST(MyPhysicalAINodeTest, ProcessSensorData) {
  auto node = std::make_shared<MyNode>();
  // Create test sensor data
  sensor_msgs::msg::JointState test_joints;
  test_joints.position = {0.1, 0.2, 0.3};
  // Process the data
  auto result = node->process_joint_data(test_joints);
  // Verify the result
  ASSERT_TRUE(result.success);
}

// rclcpp must be initialized before any node is constructed
int main(int argc, char ** argv) {
  rclcpp::init(argc, argv);
  ::testing::InitGoogleTest(&argc, argv);
  const int result = RUN_ALL_TESTS();
  rclcpp::shutdown();
  return result;
}
5. Diagnostic Tools
Built-in Diagnostics
ROS 2 includes comprehensive diagnostic tools that monitor system health:
Diagnostic Aggregator:
# Launch the diagnostic aggregator
ros2 run diagnostic_aggregator aggregator_node
Creating Diagnostic Messages in Your Nodes:
import rclpy
from rclpy.node import Node
from diagnostic_msgs.msg import DiagnosticArray, DiagnosticStatus, KeyValue


class PhysicalAIDiagnosticNode(Node):
    def __init__(self):
        super().__init__('physical_ai_diagnostic_node')
        # Publisher for diagnostic messages
        self.diag_pub = self.create_publisher(DiagnosticArray, '/diagnostics', 10)
        # Timer to periodically publish diagnostics
        self.diag_timer = self.create_timer(1.0, self.publish_diagnostics)
        # Initialize diagnostic values
        self.system_load = 0.0
        self.memory_usage = 0.0
        self.processing_latency = 0.0

    def publish_diagnostics(self):
        """Publish diagnostic information about the system."""
        diag_array = DiagnosticArray()
        diag_array.header.stamp = self.get_clock().now().to_msg()
        # System health status
        system_status = DiagnosticStatus()
        system_status.name = "Physical AI System Health"
        system_status.hardware_id = "physical_ai_robot_001"
        # Determine status level based on values
        if self.system_load > 0.8 or self.memory_usage > 0.9:
            system_status.level = DiagnosticStatus.ERROR
            system_status.message = "High resource utilization"
        elif self.system_load > 0.6 or self.memory_usage > 0.7:
            system_status.level = DiagnosticStatus.WARN
            system_status.message = "Moderate resource utilization"
        else:
            system_status.level = DiagnosticStatus.OK
            system_status.message = "System operating normally"
        # Add key-value pairs for detailed information
        system_status.values.extend([
            KeyValue(key="CPU Load", value=f"{self.system_load:.2f}"),
            KeyValue(key="Memory Usage", value=f"{self.memory_usage:.2f}"),
            KeyValue(key="Processing Latency", value=f"{self.processing_latency:.3f}s"),
            KeyValue(key="AI Model Accuracy", value="0.92"),
        ])
        diag_array.status.append(system_status)
        self.diag_pub.publish(diag_array)
Custom Diagnostic Development
For Physical AI applications, develop custom diagnostic tools that monitor specific aspects of your system:
AI Performance Diagnostics:
- Monitor AI model response times
- Track accuracy metrics over time
- Alert if AI system becomes unresponsive
Sensor Health Diagnostics:
- Monitor sensor data rates
- Detect sensor failures or drift
- Verify sensor calibration status
Actuator Health Diagnostics:
- Track joint position errors
- Monitor motor temperatures
- Detect actuator failures
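The actuator checks above reduce to thresholding over recent measurements. A ROS-independent sketch of that logic follows; the 0/1/2 levels mirror DiagnosticStatus.OK/WARN/ERROR, but the function name and threshold values are illustrative assumptions, not standard limits:

```python
OK, WARN, ERROR = 0, 1, 2  # mirror DiagnosticStatus level constants

def actuator_health(position_error_rad, motor_temp_c,
                    error_limit=0.05, temp_warn=70.0, temp_max=90.0):
    """Classify one actuator from its tracking error and motor temperature.
    Hard failures are checked first so an ERROR is never masked by a WARN."""
    if motor_temp_c >= temp_max:
        return ERROR, f"Motor overheating: {motor_temp_c:.1f}C"
    if abs(position_error_rad) > error_limit:
        return ERROR, f"Tracking error too large: {position_error_rad:.3f} rad"
    if motor_temp_c >= temp_warn:
        return WARN, f"Motor running hot: {motor_temp_c:.1f}C"
    return OK, "Actuator nominal"

level, msg = actuator_health(position_error_rad=0.01, motor_temp_c=45.0)
# → level == OK
```

In a real node, one such (level, message) pair per joint would populate a DiagnosticStatus in the array that the aggregator summarizes.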
Example of Custom Diagnostic for AI Model:
def check_ai_model_performance(self):
    """Check the performance of the AI perception model."""
    last_response_time = self.ai_response_times[-1] if self.ai_response_times else 0
    recent = self.ai_response_times[-10:]
    avg_response_time = sum(recent) / len(recent) if recent else 0

    ai_status = DiagnosticStatus()
    ai_status.name = "AI Perception Model Performance"
    ai_status.hardware_id = self.get_parameter('model_path').value
    # Check the hard timeout first so an ERROR is never masked by a WARN
    if last_response_time > 3.0:  # last response took more than 3 seconds
        ai_status.level = DiagnosticStatus.ERROR
        ai_status.message = f"AI timeout: {last_response_time:.2f}s response"
    elif avg_response_time > 1.0:  # average response is over 1 second
        ai_status.level = DiagnosticStatus.WARN
        ai_status.message = f"Slow AI response: avg {avg_response_time:.2f}s"
    else:
        ai_status.level = DiagnosticStatus.OK
        ai_status.message = f"AI responding normally: avg {avg_response_time:.2f}s"

    ai_status.values.extend([
        KeyValue(key="Avg Response Time", value=f"{avg_response_time:.3f}s"),
        KeyValue(key="Last Response Time", value=f"{last_response_time:.3f}s"),
        KeyValue(key="Recent Calls", value=f"{len(self.ai_response_times)}"),
        KeyValue(key="Model Accuracy", value=f"{self.perception_accuracy:.3f}"),
    ])
    return ai_status
6. Hands-On Exercise
Exercise: Physical AI System Debugging and Visualization
Objective: Use ROS 2 tools to analyze and visualize a Physical AI system, identifying potential issues and improving the system's performance.
Prerequisites:
- ROS 2 Humble installed
- Basic Physical AI system running (simulated or real)
- Understanding of basic ROS 2 concepts
Setup:
# Make sure you have a simple Physical AI system running
# This could be a simulated robot with basic perception and control nodes
source ~/ros2_ws/install/setup.bash
# Launch a simple example system
ros2 launch physical_ai_examples simple_system.launch.py
Step 1: System Inspection Using Command-Line Tools
Task 1.1: Node and Topic Discovery
# 1. List all running nodes in your system
ros2 node list
# 2. Get detailed information about the camera driver node
ros2 node info /physical_ai_robot/camera_driver
# 3. List all topics in the system
ros2 topic list
# 4. Find all topics related to perception
ros2 topic list | grep -E "(camera|image|perception|detection)"
# 5. Monitor the frame rate of camera images
ros2 topic hz /front_cam/image_raw
Task 1.2: Parameter Inspection and Adjustment
# 1. Check current camera parameters
ros2 param list /physical_ai_robot/camera_driver
# 2. Get a specific parameter value
ros2 param get /physical_ai_robot/camera_driver frame_rate
# 3. Try adjusting a parameter (if the node supports it)
ros2 param set /physical_ai_robot/camera_driver frame_rate 15.0
Step 2: Visualization with RViz2
Task 2.1: Basic RViz2 Setup
# 1. Launch RViz2
rviz2 &
2. Configure RViz2 for your system:
- Add a RobotModel display and point it to /physical_ai_robot/robot_description
- Add a TF display to visualize coordinate frames
- Add an Image display and point it to /front_cam/image_raw (or your camera topic)
- Add a PointCloud2 display for any LiDAR data
- Add a Path display for navigation routes
Task 2.2: Analyze System State
- Verify that the robot model updates with real joint states
- Check that camera images appear correctly
- Ensure coordinate frames are properly connected
- Verify that any path planning or AI outputs are visible
Step 3: Real-Time Monitoring with RQT
Task 3.1: Node Graph Analysis
# Launch RQT with the node graph
rqt
# In RQT, go to Plugins → Introspection → Node Graph
Task 3.2: Parameter Tuning
- In RQT, go to Plugins → Configuration → Parameter Reconfigure (the rqt_reconfigure plugin)
- Find any nodes that support dynamic parameters
- Adjust parameters and observe the effects in real-time
Task 3.3: Data Plotting
- In RQT, go to Plugins → Visualization → Plot
- Add topics to plot (e.g., joint positions, sensor values)
- Observe how values change over time
Step 4: Performance Analysis
Task 4.1: Communication Analysis
# Monitor bandwidth usage of critical topics
ros2 topic bw /physical_ai_robot/joint_states
# Check message delay on time-sensitive topics
ros2 topic delay /physical_ai_robot/robot_state
# Monitor the TF tree
ros2 run tf2_tools view_frames
Task 4.2: Create a Diagnostic Monitor
Create a simple diagnostic node that monitors system performance:
#!/usr/bin/env python3
"""Diagnostic node to monitor Physical AI system performance."""
import rclpy
from rclpy.node import Node
from diagnostic_msgs.msg import DiagnosticArray, DiagnosticStatus, KeyValue
from sensor_msgs.msg import Image
from std_msgs.msg import Float64


class PhysicalAIDiagnosticNode(Node):
    """Monitors Physical AI system performance and publishes diagnostics."""

    def __init__(self):
        super().__init__('physical_ai_diagnostic_node')
        # Publishers
        self.diag_pub = self.create_publisher(DiagnosticArray, '/diagnostics', 10)
        self.latency_pub = self.create_publisher(Float64, '/diagnostics/latency', 10)
        # Subscriptions
        self.image_sub = self.create_subscription(
            Image, '/front_cam/image_raw', self.image_callback, 10
        )
        # Timers
        self.diag_timer = self.create_timer(1.0, self.publish_diagnostics)
        # Statistics
        self.image_count = 0
        self.start_time = self.get_clock().now().seconds_nanoseconds()
        self.last_image_time = None
        self.latency_samples = []

    def image_callback(self, msg):
        """Measure the interval between consecutive camera images."""
        current_time = self.get_clock().now()
        if self.last_image_time:
            interval = current_time - self.last_image_time
            interval_ms = interval.nanoseconds / 1_000_000.0
            self.latency_samples.append(interval_ms)
            # Publish the latest inter-frame interval
            latency_msg = Float64()
            latency_msg.data = interval_ms
            self.latency_pub.publish(latency_msg)
        self.last_image_time = current_time
        self.image_count += 1

    def publish_diagnostics(self):
        """Publish diagnostic information about the system."""
        diag_array = DiagnosticArray()
        diag_array.header.stamp = self.get_clock().now().to_msg()
        # Calculate statistics
        now = self.get_clock().now().seconds_nanoseconds()[0]
        elapsed_time = now - self.start_time[0]
        avg_rate = self.image_count / elapsed_time if elapsed_time > 0 else 0
        recent = self.latency_samples[-10:]
        avg_latency = sum(recent) / len(recent) if recent else 0
        # Camera health status
        camera_status = DiagnosticStatus()
        camera_status.name = "Front Camera System"
        camera_status.hardware_id = "physical_ai_front_cam_001"
        if 25.0 < avg_rate < 35.0:  # within expected range for a 30 fps camera
            camera_status.level = DiagnosticStatus.OK
            camera_status.message = f"Operating normally: avg {avg_rate:.1f}Hz"
        elif avg_rate < 10.0:  # well below expected
            camera_status.level = DiagnosticStatus.ERROR
            camera_status.message = f"Low frame rate: avg {avg_rate:.1f}Hz"
        else:
            camera_status.level = DiagnosticStatus.WARN
            camera_status.message = f"Frame rate outside normal range: avg {avg_rate:.1f}Hz"
        camera_status.values.extend([
            KeyValue(key="Average Rate", value=f"{avg_rate:.2f}Hz"),
            KeyValue(key="Average Interval", value=f"{avg_latency:.2f}ms"),
            KeyValue(key="Total Images", value=str(self.image_count)),
            KeyValue(key="Elapsed Time", value=f"{elapsed_time:.1f}s"),
        ])
        diag_array.status.append(camera_status)
        self.diag_pub.publish(diag_array)


def main(args=None):
    rclpy.init(args=args)
    node = PhysicalAIDiagnosticNode()
    try:
        rclpy.spin(node)
    except KeyboardInterrupt:
        node.get_logger().info('Shutting down diagnostic node')
    finally:
        node.destroy_node()
        rclpy.shutdown()


if __name__ == '__main__':
    main()
Step 5: Analysis and Reporting
Task 5.1: Performance Analysis
- Document the actual frame rate achieved vs. expected rate
- Note any latency issues detected
- Identify potential bottlenecks in the system
Task 5.2: Visualization Effectiveness
- Evaluate how well RViz2 displays helped understand the system
- Identify any additional visualization capabilities needed
- Consider creating custom displays for specific Physical AI functions
Expected Results: You should be able to use ROS 2 tools to comprehensively analyze your Physical AI system, identify any performance issues, and visualize key system parameters in real-time.
Troubleshooting:
- If RViz2 doesn't show robot models, check that the robot_description topic is being published
- If topics aren't showing up in RQT, verify that nodes are running and publishing on the right topics
- If parameter changes don't work, make sure the node supports dynamic reconfiguration
Extension Challenge (Optional)
Create a custom RQT plugin that provides a Physical AI-specific dashboard, showing real-time information about:
- AI model accuracy and response times
- Sensor health and data rates
- Robot balance and stability metrics
- Battery level and power consumption (if applicable)
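A dashboard plugin's backend ultimately just aggregates per-subsystem metrics into one overall status. A pure-Python sketch of that aggregation (everything here, including the subsystem and metric names, is a hypothetical starting point, not an RQT API):

```python
OK, WARN, ERROR = 0, 1, 2  # mirror DiagnosticStatus level constants

def dashboard_summary(subsystems):
    """Aggregate {name: (level, detail)} entries into an overall status:
    the dashboard shows the worst level any subsystem reports, plus the
    list of subsystems that need attention."""
    worst = max(level for level, _ in subsystems.values())
    problems = [name for name, (level, _) in subsystems.items() if level > OK]
    return {"overall": worst, "attention": problems}

summary = dashboard_summary({
    "ai_model": (OK, "avg response 0.12s"),
    "sensors": (WARN, "front_cam rate low"),
    "balance": (OK, "stable"),
    "battery": (OK, "82%"),
})
# → overall is WARN, attention lists only "sensors"
```

A worst-of aggregation matches how diagnostic_aggregator summarizes groups, so the dashboard and the standard diagnostics view will agree on overall health.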
7. Assessment Questions
Multiple Choice
Question 1: What is the purpose of ros2 topic hz command?
a) To list all available topics
b) To measure the frequency of messages on a topic
c) To send messages to a topic
d) To get the type of a topic
Answer: b. Explanation: The `ros2 topic hz` command measures and displays the frequency (rate) of messages being published to a specific topic, which is useful for understanding data flow rates.

Question 2: Which RViz2 display plugin would be most appropriate for visualizing LiDAR sensor data?
a) RobotModel
b) Image
c) PointCloud2
d) Path

Answer: c. Explanation: The PointCloud2 display in RViz2 is specifically designed for visualizing LiDAR and other 3D point cloud data.

Question 3: What does the --symlink-install option do when building with colcon?
a) Creates symbolic links instead of copying files during development
b) Ensures secure installation
c) Speeds up the build process
d) Checks for system dependencies

Answer: a. Explanation: The --symlink-install option creates symbolic links instead of copying files, which speeds up iterative development by avoiding file duplication while maintaining proper linking.

Question 4: Which RQT tool would be most useful for checking all parameter values of a running node?
a) rqt_graph
b) rqt_console
c) rqt_topic
d) Use ros2 param list command instead

Answer: d. Explanation: None of the listed RQT plugins displays a node's parameters; the command-line tool ros2 param list <node_name> is the appropriate choice here (rqt_reconfigure can edit parameters, but it is not among the options).

Question 5: What is the main purpose of the diagnostic_aggregator in ROS 2?
a) To aggregate multiple robot models in RViz2
b) To collect and summarize system health information
c) To combine multiple sensor inputs
d) To aggregate different map sources

Answer: b. Explanation: The diagnostic_aggregator collects diagnostic status messages from multiple nodes and provides a summary view of system health.

Short Answer
Question 6: Explain when you would use RViz2 versus RQT for debugging a Physical AI system, and provide specific examples.
Sample answer: Use RViz2 for 3D visualization tasks like viewing robot models, point clouds, camera images overlaid in 3D space, paths, and coordinate transforms. Use RQT for monitoring system parameters, plotting numeric values over time, viewing log messages, and examining node graphs. For example, use RViz2 to see where a robot thinks it is in a map and how its sensors perceive the environment, but use RQT to monitor joint angles over time or track system performance metrics.

Question 7: Describe how you would diagnose a situation where a robot's AI perception system is not detecting objects correctly.

Sample answer: I would: 1) use ros2 topic echo to verify sensor data is being published correctly; 2) use RViz2 to visualize the actual sensor inputs; 3) check AI node logs with rqt_console; 4) monitor AI output topics to see what it is detecting; 5) use ros2 param list to check configuration parameters like confidence thresholds; 6) potentially record a bag file to replay data offline for debugging.

Question 8: What are the advantages of using ros2 bag for debugging Physical AI systems compared to real-time inspection?

Sample answer: Bag files allow you to: 1) replay data to reproduce issues consistently; 2) share problematic scenarios with team members; 3) perform offline analysis without affecting the robot's operation; 4) process data with different algorithms to compare results; 5) train AI models on real-world data; 6) analyze performance over extended periods after the fact.

Practical Exercises
Question 9: System Performance Analysis Exercise

Analyze a running Physical AI system using ROS 2 tools and create a performance report that includes:
- Node topology diagram using rqt_graph
- Message rates for critical topics (sensors, AI outputs, control commands)
- Average latency between related topics (e.g., sensor input to AI output)
- Resource usage of critical nodes
- Recommendations for performance improvements
Document your analysis process and the specific commands/tools used for each measurement.
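The latency measurement in this exercise can also be prototyped offline, without a running robot: dump header stamps for two related topics (for example with `ros2 topic echo` or from a bag file) and analyze them with plain Python. This is a minimal sketch under that assumption; the topic names and timestamp values below are illustrative placeholders, not real robot data.

```python
"""Offline rate/latency analysis for exported message timestamps."""
from statistics import mean


def message_rate(stamps: list[float]) -> float:
    """Average publish rate in Hz over the recorded window."""
    if len(stamps) < 2:
        return 0.0
    return (len(stamps) - 1) / (stamps[-1] - stamps[0])


def pairwise_latency(inputs: list[float], outputs: list[float]) -> list[float]:
    """For each output stamp, latency relative to the most recent input stamp."""
    latencies, i = [], 0
    for out in outputs:
        # Advance to the last input published at or before this output.
        while i + 1 < len(inputs) and inputs[i + 1] <= out:
            i += 1
        if inputs[i] <= out:
            latencies.append(out - inputs[i])
    return latencies


if __name__ == "__main__":
    camera = [0.00, 0.10, 0.20, 0.30, 0.40]   # hypothetical 10 Hz sensor stamps
    detections = [0.07, 0.17, 0.27, 0.37]     # hypothetical AI output stamps
    print(f"camera rate: {message_rate(camera):.1f} Hz")
    print(f"mean sensor-to-AI latency: {mean(pairwise_latency(camera, detections)):.3f} s")
```

For live measurements, `ros2 topic hz` and `ros2 topic delay` give the same kinds of numbers directly on the running system.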
Question 10: Visualization Enhancement Project

Create an enhanced RViz2 configuration file specifically for your Physical AI system that includes:
- Properly configured displays for all sensor types on your robot
- Visualization of AI planning outputs (paths, waypoints, detected objects)
- Robot state visualization (joint positions, center of mass)
- Coordinate frame visualization to understand robot perception
- Custom MarkerArray displays for AI-generated annotations
Provide an explanation of how each visualization element helps understand the Physical AI system's behavior.
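As a starting point, recall that an RViz2 configuration is a YAML file you can save from the GUI (File → Save Config As) and then edit by hand. The fragment below sketches the kind of display list this exercise asks for; the display classes follow the `rviz_default_plugins` naming, but the topic and frame names (`/lidar/points`, `/ai/markers`, `base_link`, and so on) are placeholders for your robot's actual interfaces, and the exact key layout varies between RViz2 versions.

```yaml
# Illustrative excerpt of an RViz2 config (.rviz) -- not a complete file.
Visualization Manager:
  Global Options:
    Fixed Frame: base_link          # placeholder frame
  Displays:
    - Class: rviz_default_plugins/RobotModel   # robot state from the URDF
      Name: RobotModel
      Enabled: true
    - Class: rviz_default_plugins/TF           # coordinate frame visualization
      Name: TF
      Enabled: true
    - Class: rviz_default_plugins/PointCloud2  # LiDAR input
      Name: LiDAR
      Topic:
        Value: /lidar/points
    - Class: rviz_default_plugins/Path         # planner output
      Name: PlannedPath
      Topic:
        Value: /plan
    - Class: rviz_default_plugins/MarkerArray  # AI-generated annotations
      Name: AIAnnotations
      Topic:
        Value: /ai/markers
```

Saving a config from a configured RViz2 session and diffing it against a fragment like this is usually the fastest way to learn the exact keys your RViz2 version expects.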
8. Further Reading
- ROS 2 Tools Documentation. Why read: official documentation of the ROS 2 command-line and GUI tools. Link: https://docs.ros.org/en/rolling/How-To-Guides/Getting-started-with-roslaunch.html
- RViz User Guide. Why read: comprehensive guide to configuring and using RViz2. Link: https://docs.ros.org/en/rolling/Tutorials/Beginner-Client-Libraries/Visualization/Using-RViz2.html
- RQT User Manual. Why read: complete reference for using the RQT tools for development and debugging. Link: https://rqt-robot-plugins.readthedocs.io/en/latest/
- "Mastering ROS 2" by Lentin Joseph. Why read: in-depth coverage of ROS 2 tools and development workflows. Link: https://www.packtpub.com/product/mastering-ros-for-robotics-programming-second-edition/9781789531594
- ROS 2 Performance Analysis (ros2/tracetools). Why read: specialized tools and techniques for analyzing ROS 2 system performance. Link: https://github.com/ros2/tracetools
- Diagnostics in ROS 2. Why read: understanding and implementing diagnostic systems for robot health monitoring. Link: https://index.ros.org/p/diagnostic_aggregator/
- ROS 2 Visualization Tools Tutorial. Why read: practical tutorials on building custom visualization displays for robotics. Link: https://docs.ros.org/en/rolling/Tutorials/Intermediate/RViz/Custom-Displays.html
Recommended Order:
- Start with the official ROS 2 tools documentation to learn the basics
- Practice using RViz2 with the user guide
- Learn RQT tools for detailed system monitoring
- Study the performance analysis tools for optimizing complex systems
- Study diagnostics for building reliable systems
- Advance to custom visualization development for specialized applications
9. Hardware/Software Requirements
Software Requirements:
- ROS 2 Humble Hawksbill (or latest LTS)
- RViz2 visualization suite
- RQT tools collection
- Qt5 development libraries (for custom RQT plugins)
- Python 3.8+ and appropriate ROS 2 packages
Hardware Requirements:
- Computer with a graphics card supporting OpenGL 3.3 or later for RViz2
- 4+ GB RAM (8+ GB recommended for complex visualizations)
- Monitor with high resolution for effective use of multiple panes in RQT
- Network access for documentation and updates
10. Chapter Summary & Next Steps
Chapter Summary
In this chapter, you learned:
- How to use essential ROS 2 command-line tools for system inspection
- How to configure and use RViz2 for 3D visualization of robot systems
- How to leverage RQT tools for monitoring and debugging
- How to analyze system performance and resource usage
- How to create and use diagnostic systems for robot health monitoring
- How to apply these tools specifically to Physical AI system development
Next Steps
In Chapter 2.1, we'll begin exploring ROS 2 with Python, learning how to create nodes, handle topics and services, and build foundational robot applications. This will provide the practical programming skills needed to implement the systems we've been discussing at a conceptual level.
Estimated Time to Complete: 2 hours
Difficulty Level: Intermediate
Prerequisites: Chapters 1.1-1.8