
Corner Kick

By Quentin Golsteyn • React/Redux, TypeScript • Posted August 8, 2019

I joined UBC Thunderbots in the summer of 2018 as the team undertook a rewrite of their entire codebase. My role was to upgrade their aging visualizer, a tool they used to test the validity of their AI in a virtual environment. We designed a web app capable of visualizing 10,000 visual items at 60fps.

The visualizer was to be a critical tool for the team, as it would provide the visual interface for verifying the correctness of the AI controlling the robots. Two additional team members stepped up to join me on this project.

The visualizer would be responsible for three primary tasks:

  1. Visualize the game environment Using data from the AI, represent the game environment as a 2D world. This environment should include the game field, robots, and ball.
  2. Control AI parameters Allow developers to control simple AI states in game, such as starting/pausing the game, controlling team sides, etc.
  3. Display current play information Show information regarding robot status and the current play tactic.

This article focuses primarily on the first task, as it posed a significant challenge given the requirements of the project.

Approach

The team had recently switched to ROS for their architecture. While the team developed the AI in C++, I decided to implement the visualizer as a web app in React.js, communicating with the rest of the AI using the [ROSBridge](http://wiki.ros.org/rosbridge_suite) library, a framework that converts binary ROS messages into JSON the web app can consume.
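As a rough sketch of this setup, the web app can subscribe to a ROS topic through ROSBridge using the roslib client library. The topic name and message type below are placeholders, not the ones we actually used:

```ts
import * as ROSLIB from "roslib";

// Connect to the rosbridge websocket server (default port 9090).
const ros = new ROSLIB.Ros({ url: "ws://localhost:9090" });

// Subscribe to the topic carrying draw data. The topic name and
// message type here are illustrative placeholders.
const drawTopic = new ROSLIB.Topic({
  ros,
  name: "/visualizer/draw_objects",
  messageType: "thunderbots_msgs/DrawObjects",
});

drawTopic.subscribe((message) => {
  // rosbridge has already converted the binary ROS message to JSON here.
  console.log("received draw objects", message);
});
```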

It was important to allow the AI team to visualize any part of the AI architecture. This meant the visualizer could not make assumptions about the visual objects it would receive (such as robots or the game field). Rather, we decided to expose a series of draw functions on the AI side of the project. These functions would send draw objects to the visualizer, which would render them on screen.

Architecture

The visualizer was designed as a React/Redux web application, with Redux Saga to perform effectful operations. All application operations were driven by Redux actions, which would either update the Redux state or trigger various Redux Saga routines.
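To give a feel for this pattern, here is a minimal sketch (not our exact code; the channel and action names are illustrative) of a saga that pumps incoming draw data into the store:

```ts
import { eventChannel, EventChannel } from "redux-saga";
import { call, put, take } from "redux-saga/effects";

// Wrap a websocket's message stream in a saga event channel.
function createSocketChannel(socket: WebSocket): EventChannel<unknown> {
  return eventChannel((emit) => {
    socket.onmessage = (event) => emit(event.data);
    return () => socket.close();
  });
}

// Saga routine: dispatch a Redux action for every frame of draw data,
// keeping all effectful work out of the React components.
function* watchDrawData(socket: WebSocket) {
  const channel: EventChannel<unknown> = yield call(createSocketChannel, socket);
  while (true) {
    const frame: unknown = yield take(channel);
    yield put({ type: "visualizer/FRAME_RECEIVED", payload: frame });
  }
}
```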

Rendering was handled by Pixi.js, a 2D WebGL library.

Draw library

The visualizer exposed a series of C++ draw functions on the AI side of the project. Called by the AI at every tick (roughly 200 times per second), these functions would generate draw objects containing information about the primitive to draw and the position to place it on the visualizer canvas. The draw objects would then be pushed to the ROS bus and sent over a websocket to the visualizer.
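For illustration, a draw object on the visualizer side might look like the following interface. The field names here are illustrative rather than the exact ones we used, but the count matches the eight 16-bit fields described in the Performance section below:

```ts
// Illustrative shape of a draw object as received by the visualizer.
// Field names are assumptions; the count matches the eight 16-bit
// fields described in the Performance section.
interface DrawObject {
  spriteId: number; // which spritesheet texture to draw
  layer: number;    // which visualizer layer to draw it on
  x: number;        // position on the canvas
  y: number;
  width: number;    // dimensions to give the sprite
  height: number;
  rotation: number; // rotation to apply
  tint: number;     // colour tint to apply
}
```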

Drawing

The visualizer offered the ability for the AI to draw on multiple "layers", which the visualizer user could enable or disable to focus on a particular state of the AI.

60 times per second, the visualizer would receive a new batch of draw objects to render. Each draw object referred to a particular sprite texture on the visualizer's spritesheet, with additional attributes specifying the placement, rotation, and tint to apply to the sprite.
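Rendering a draw object then amounts to copying its attributes onto a Pixi sprite, along the lines of this sketch (reusing the illustrative DrawObject shape from earlier):

```ts
import * as PIXI from "pixi.js";

// Apply one draw object to a Pixi sprite. `textures` maps sprite ids
// to textures from the preloaded spritesheet.
function applyDrawObject(
  sprite: PIXI.Sprite,
  obj: DrawObject,
  textures: PIXI.Texture[],
): void {
  sprite.texture = textures[obj.spriteId];
  sprite.position.set(obj.x, obj.y);
  sprite.width = obj.width;
  sprite.height = obj.height;
  sprite.rotation = obj.rotation;
  sprite.tint = obj.tint;
}
```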

Performance

Our requirements demanded that the visualizer be able to render up to 10,000 visual objects at 60 frames per second. This constraint was by far the most challenging part of the project.

JavaScript is a single-threaded, garbage-collected language. Significant effort was therefore necessary to limit the memory usage and time complexity of the rendering process. We also had to limit the number of objects we allocated to reduce pressure on the garbage collector.

In addition, special consideration had to be given to the limitations of WebGL and the Pixi.js library. To maintain good performance, we had to ensure all visual items we wanted to render were already in GPU memory, which meant using Pixi.js sprites.
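One way to satisfy both constraints, sketched below as an illustration of the approach rather than our exact implementation, is a fixed-size sprite pool: every sprite is allocated up front and reused each frame, so the 60fps render loop itself allocates nothing.

```ts
import * as PIXI from "pixi.js";

// A fixed-size sprite pool: sprites are allocated once at startup and
// reused every frame, putting no pressure on the garbage collector.
class SpritePool {
  private sprites: PIXI.Sprite[] = [];
  private used = 0;

  // Size the pool to the maximum object count (10,000 in our case).
  constructor(container: PIXI.Container, size: number) {
    for (let i = 0; i < size; i++) {
      const sprite = new PIXI.Sprite();
      sprite.visible = false;
      container.addChild(sprite);
      this.sprites.push(sprite);
    }
  }

  beginFrame(): void {
    this.used = 0;
  }

  // Hand out the next pooled sprite for a draw object this frame.
  acquire(): PIXI.Sprite {
    const sprite = this.sprites[this.used++];
    sprite.visible = true;
    return sprite;
  }

  // Hide any sprites left over from the previous frame.
  endFrame(): void {
    for (let i = this.used; i < this.sprites.length; i++) {
      this.sprites[i].visible = false;
    }
  }
}
```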

Our initial implementation gave the AI the ability to draw a number of different primitives (lines, rectangles, circles, arcs) on the visualizer canvas. This approach did not scale: arbitrary primitives had to be regenerated on every frame, so the visualizer could not use GPU-resident sprites to improve performance.

We instead preloaded a spritesheet, placing a fixed set of textures in GPU memory that the AI could reference at runtime. The AI could then specify which texture to draw, along with the position and dimensions to give the object. Colour tinting was also implemented.
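A sketch of the preloading step, assuming the Pixi v5 loader API and a placeholder spritesheet path:

```ts
import * as PIXI from "pixi.js";

// Preload the spritesheet once at startup so every texture the AI can
// reference is uploaded to GPU memory before the first frame renders.
// "spritesheet.json" is a placeholder path.
PIXI.Loader.shared
  .add("sheet", "spritesheet.json")
  .load((_loader, resources) => {
    const sheet = resources["sheet"]?.spritesheet;
    // e.g. sheet?.textures["circle.png"] is now a GPU-resident texture
    // that draw objects can reference by id.
  });
```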

The resulting data structure to encode a visual object had 8 fields. When transferring 10,000 objects 60 times per second, we quickly realized we would also need to optimize the data representation of a visual object. We settled on transferring data in a binary format, as a JSON encoding would take too much space. Even limiting each field to a 16-bit number, the payload size for a visual object would be 16 bytes, for a data transfer requirement of 9.6 MB/s.
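A sketch of the decoding step on the visualizer side, assuming the illustrative field order from earlier and agreement on endianness between the two sides:

```ts
const FIELDS_PER_OBJECT = 8; // 8 fields × 16 bits = 16 bytes per object

// Decode one binary frame of draw objects. Int16Array reads in platform
// byte order, so both sides must agree on endianness; a 16-bit tint
// would need to be packed (e.g. RGB565) or index into a palette.
function decodeFrame(buffer: ArrayBuffer): DrawObject[] {
  const data = new Int16Array(buffer);
  const objects: DrawObject[] = [];
  for (let i = 0; i < data.length; i += FIELDS_PER_OBJECT) {
    objects.push({
      spriteId: data[i],
      layer: data[i + 1],
      x: data[i + 2],
      y: data[i + 3],
      width: data[i + 4],
      height: data[i + 5],
      rotation: data[i + 6],
      tint: data[i + 7],
    });
  }
  return objects;
}
```

In practice, allocating 10,000 fresh objects per frame would itself stress the garbage collector, so the decoded fields can instead be written straight from the typed array onto pooled sprites.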

Competition and the future of the project

The visualizer was used at the 2019 RoboCup competition in Australia. The team ultimately won 1st place in the Small Size League, Division B; the first Canadian team to win this honour!

1st Place Qualification Match, RoboCup 2019

Following the competition, the UBC Thunderbots team leads decided to reduce the size of their tech stack to ensure the maintainability of the project. As part of this decision, the visualizer was to be rewritten in C++ with Qt.
