Projects & Initiatives
Building a visualizer for a small league robot football team.
By Quentin Golsteyn • React/Redux, TypeScript • September 2018 - August 2019
I joined UBC Thunderbots over the summer of 2018 as they undertook a rewrite of their entire codebase. UBC Thunderbots is a student engineering design team that competes annually in the small league robot football competition at RoboCup. The team is made up entirely of student volunteers, divided into three subteams: mechanical, electrical, and software. My role was to upgrade their aging visualizer, a tool they used to test the validity of their AI in a virtual environment. We designed a web app capable of rendering 10,000 visual items at 60fps.
The visualizer was to be a critical tool for the team, as it would provide a visual interface for verifying the correctness of the AI controlling the robots. Two additional team members stepped up to help me on this project.
The visualizer would be responsible for three primary tasks:
- **Visualize the game environment.** Using data from the AI, represent the game environment as a 2D world. This environment should include the game field, the robots, and the ball.
- **Control AI parameters.** Allow developers to control simple AI states in game, such as starting/pausing the game, switching team sides, etc.
- **Display current play information.** Display information regarding robot status and the current play tactic.
This article will focus primarily on the first task as it provided a significant challenge given the requirements of the project.
The team had recently switched to ROS for their architecture. ROS was desirable as it allowed for the creation of standalone modules that communicate with one another over an event bus; the visualizer would be a module of its own, able to send and receive messages on the ROS bus. While the team developed the AI in C++, I decided to implement the visualizer as a web app in React.js and communicate with the rest of the AI using the [ROSBridge](http://wiki.ros.org/rosbridge_suite) library. ROSBridge converts binary ROS messages into JSON, which can then be consumed by the web app.
It was important to allow the AI team to visualize any part of the AI architecture. This meant the visualizer could not make assumptions about the visual objects it could receive (such as robots or the game field). Rather, we decided to expose a series of draw functions on the AI side of the project. These functions would send draw objects to the visualizer, which would render them on screen.
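As a sketch, a draw object on the visualizer side might look like the following. The field names and types here are illustrative assumptions, not the team's actual schema; the article later notes that the real structure had 8 fields.

```typescript
// A hypothetical draw object as received by the visualizer.
// Field names are assumptions for illustration; the real
// structure had 8 fields, each eventually limited to 16 bits.
interface DrawObject {
  textureId: number; // index into the preloaded spritesheet
  layer: number;     // which visualizer layer to draw on
  x: number;         // position on the canvas
  y: number;
  width: number;
  height: number;
  rotation: number;  // radians
  tint: number;      // RGB colour tint, e.g. 0xff0000
}

// Example: a draw object representing a robot sprite.
const robot: DrawObject = {
  textureId: 1,
  layer: 0,
  x: 120,
  y: 80,
  width: 18,
  height: 18,
  rotation: Math.PI / 2,
  tint: 0x00ff00,
};
```

Keeping the schema this generic is what lets the visualizer stay agnostic about what it is drawing: a robot and a field line are just two draw objects with different textures.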
The visualizer was designed as a React/Redux web application, with Redux Saga to perform effectful operations. All application operations were driven by Redux actions, which would either update the Redux state, or trigger various Redux Saga routines.
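The core state transition can be sketched as a plain reducer. The action and state shapes below are assumptions for illustration, not the project's actual Redux code:

```typescript
// Minimal sketch of the Redux state transition for an incoming
// batch of draw objects. Shapes are illustrative assumptions.
interface VisualizerState {
  drawObjects: number[][]; // one flat record per visual object
}

interface Action {
  type: string;
  payload?: number[][];
}

function visualizerReducer(
  state: VisualizerState = { drawObjects: [] },
  action: Action
): VisualizerState {
  switch (action.type) {
    case "NEW_FRAME":
      // Each frame replaces the previous batch rather than
      // accumulating it, so the store only ever holds the
      // latest state of the AI.
      return { ...state, drawObjects: action.payload ?? [] };
    default:
      return state;
  }
}
```

Side effects such as websocket handling would live in Redux Saga routines triggered by these same actions, keeping the reducer itself pure.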
Rendering was done with the Pixi.js library, a 2D WebGL library.
The visualizer exposed a series of C++ draw functions on the AI side of the project. These functions would be called by the AI at every tick (roughly 200 times per second). These draw functions would generate draw objects, containing information about the primitive to draw and the position to place it on the visualizer canvas. These draw objects would be pushed to the ROS bus, and sent over a websocket to the visualizer.
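Since the AI produced draw objects roughly 200 times per second but the visualizer rendered at 60fps, incoming batches outpace render frames. One plausible way to bridge the two rates (the article does not specify the exact mechanism) is to keep only the most recent batch per frame:

```typescript
// Coalesce AI ticks (~200/s) down to render frames (~60/s) by
// keeping only the newest batch of draw objects. This class and
// its names are an illustrative sketch, not the team's code.
class FrameBuffer<T> {
  private latest: T | null = null;

  // Called whenever a batch arrives over the websocket.
  push(batch: T): void {
    this.latest = batch;
  }

  // Called once per render frame; returns the newest batch,
  // or null if nothing new arrived since the last frame.
  take(): T | null {
    const batch = this.latest;
    this.latest = null;
    return batch;
  }
}

const buffer = new FrameBuffer<number[]>();
buffer.push([1]);
buffer.push([2]); // overwrites [1]: intermediate ticks are dropped
```

Dropping intermediate ticks is acceptable here because each batch describes the complete current state of the AI, not a delta.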
The visualizer allowed the AI to draw on multiple "layers", which the user could enable or disable to focus on a particular part of the AI's state.
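Layer toggling can be modelled as a simple filter over the incoming draw objects. This is a minimal sketch under assumed shapes, not the project's implementation:

```typescript
// Sketch: keep only the draw objects on layers the user has
// enabled. The layer field and Set-based toggle are assumptions.
interface LayeredObject {
  layer: number;
}

function visibleObjects<T extends LayeredObject>(
  objects: T[],
  enabledLayers: Set<number>
): T[] {
  return objects.filter((o) => enabledLayers.has(o.layer));
}

// Example: only layer 0 (say, field geometry) is enabled.
const shown = visibleObjects(
  [{ layer: 0 }, { layer: 1 }, { layer: 0 }],
  new Set([0])
);
```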
Sixty times per second, the visualizer would receive a new batch of draw objects to render. Each draw object referred to a particular sprite texture on the visualizer's spritesheet; additional attributes specified the placement, rotation, and tint to apply to the sprite.
Our requirements demanded that the visualizer be able to render up to 10,000 visual objects at 60 frames per second. This constraint was by far the most challenging part of the project.
In addition, special consideration had to be given to the limitations of WebGL and the Pixi.js library. To maintain good performance, we had to ensure that every visual item we wanted to render was already in GPU memory. Sprites in Pixi.js have the benefit that their textures are stored in GPU memory, making additional sprites rather inexpensive to render. This meant using Pixi.js sprites was necessary.
Our initial implementation gave the AI the ability to draw a number of different primitives (lines, rectangles, circles, arcs) on the visualizer canvas. This approach was not scalable, as the visualizer could not use sprites to improve performance.
We instead preloaded a spritesheet, placing a set of textures in GPU memory that the AI could reference at runtime. The AI could then specify which texture to draw, along with the position and dimensions to give the object. Colour tinting was also implemented.
The resulting data structure to encode a visual object had 8 fields. When transferring 10,000 objects 60 times per second, we quickly realized we would also need to optimize the data representation of a visual object. We settled on transferring data in a binary format, as a JSON encoding would take too much space. Even limiting each field to a 16-bit number, the resulting payload size for a visual object was 16 bytes. This resulted in a data transfer requirement of 9.6 MB/s.
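The 16-byte layout described above (8 fields × 16 bits) can be sketched with a `DataView`. The field order is an assumption; only the sizes follow from the text:

```typescript
// Sketch of the 16-byte binary layout: 8 fields x 16 bits per
// draw object. Field order and units are illustrative assumptions.
const FIELDS = 8;
const BYTES_PER_OBJECT = FIELDS * 2; // 16 bytes

function encodeObjects(objects: number[][]): ArrayBuffer {
  const buffer = new ArrayBuffer(objects.length * BYTES_PER_OBJECT);
  const view = new DataView(buffer);
  objects.forEach((fields, i) => {
    fields.forEach((value, j) => {
      view.setInt16(i * BYTES_PER_OBJECT + j * 2, value);
    });
  });
  return buffer;
}

function decodeObjects(buffer: ArrayBuffer): number[][] {
  const view = new DataView(buffer);
  const count = buffer.byteLength / BYTES_PER_OBJECT;
  const objects: number[][] = [];
  for (let i = 0; i < count; i++) {
    const fields: number[] = [];
    for (let j = 0; j < FIELDS; j++) {
      fields.push(view.getInt16(i * BYTES_PER_OBJECT + j * 2));
    }
    objects.push(fields);
  }
  return objects;
}

// 10,000 objects x 16 bytes x 60 frames/s = 9.6 MB/s, matching
// the bandwidth figure above.
const payload = encodeObjects([[1, 2, 3, 4, 5, 6, 7, 8]]);
```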
Competition and the future of the project
The visualizer was used at the 2019 RoboCup competition in Australia. The team ultimately won first place in the Small Robot Soccer League, Division B; the first Canadian team to win this honour!
Following the competition, UBC Thunderbots team leads decided to reduce the size of their tech stack to ensure the maintainability of the project. As part of this decision, the visualizer was to be rewritten in C++ with Qt.