Download the CARLA 0.9.10.1 binary release and unzip the package into a folder, e.g. `CARLA`. Make sure to download version 0.9.10.1 exactly; this is the version used by the online servers.

In the following commands, change `${CARLA_ROOT}` to your CARLA root folder:

```shell
conda create -n py37 python=3.7
conda activate py37
cd ${CARLA_ROOT}  # change ${CARLA_ROOT} to your CARLA root folder
pip3 install -r PythonAPI/carla/requirements.txt
```
Clone the `leaderboard-1.0` branch from the official CARLA Leaderboard repository:

```shell
git clone -b leaderboard-1.0 --single-branch https://github.com/carla-simulator/leaderboard.git
```

In the following commands, change `${LEADERBOARD_ROOT}` to your Leaderboard root folder:

```shell
cd ${LEADERBOARD_ROOT}  # change ${LEADERBOARD_ROOT} to your Leaderboard root folder
pip3 install -r requirements.txt
```
Clone the `leaderboard-1.0` branch of the CARLA ScenarioRunner:

```shell
git clone -b leaderboard-1.0 --single-branch https://github.com/carla-simulator/scenario_runner.git
```

In the following commands, change `${SCENARIO_RUNNER_ROOT}` to your ScenarioRunner root folder:

```shell
cd ${SCENARIO_RUNNER_ROOT}  # change ${SCENARIO_RUNNER_ROOT} to your ScenarioRunner root folder
pip3 install -r requirements.txt
```
Clone the `rai-leaderboard` repository (or your fork of it) and name it `rai`:

```shell
git clone -b main https://github.com/cognitive-robots/rai-leaderboard.git rai  # or use the URL of your fork
```

If you prefer to track these repositories as submodules of your own repository, use:

```shell
git submodule add -b leaderboard-1.0 https://github.com/carla-simulator/leaderboard.git
git submodule add -b leaderboard-1.0 https://github.com/carla-simulator/scenario_runner.git
git submodule add -b main https://github.com/cognitive-robots/rai-leaderboard.git rai  # or use the URL of your fork
```

Note that in the command for the `rai-leaderboard` repository we name the submodule `rai`. Ensure you use the full command as shown.
We need to make sure that the different modules can find each other. Add the environment variables defined in `rai/scripts/env_var.sh` to your `~/.bashrc` profile, and remember to run `source ~/.bashrc` for these changes to take effect.

When running the test, a series of parameters are set as environment variables. Descriptions of some of them follow:
- `SCENARIOS` (JSON) — The set of scenarios that will be tested in the simulation. A scenario is defined as a traffic situation; agents have to overcome these scenarios in order to pass the test. Participants have access to a set of traffic scenarios that work on the publicly available towns. There are 10 types of scenarios, instantiated using different parameters.
- `ROUTES` (XML) — The set of routes that will be used for the simulation. Every route has a starting point (first waypoint) and an ending point (last waypoint), and can additionally contain a weather profile to set specific weather conditions. An XML file contains many routes, each with an ID. Users can modify, add, and remove routes for training and validation purposes. The Leaderboard ships with a set of routes for debugging, training, and validation. The routes used for the online evaluation are secret.
- `TEAM_AGENT` (Python script) — Path to the Python script that launches the agent. This has to be a class inherited from `leaderboard.autoagents.autonomous_agent.AutonomousAgent`. The steps to create an agent are explained in the next step. Other relevant parameters are described below.
- `TEAM_CONFIG` (defined by the user) — Path to an arbitrary configuration file read by the provided agent. You are responsible for defining and parsing this file within your agent class.
- `DEBUG_CHALLENGE` (int) — Flag indicating whether debug information should be shown during the simulation. By default this variable is unset (`0`), and no debug information is displayed. When set to `1`, the simulator displays the reference route to be followed. When set to anything greater than `1`, the engine prints the complete state of the simulation for debugging purposes.
- `CHECKPOINT_ENDPOINT` (JSON) — The name of the file where the Leaderboard metrics will be recorded.
- `RECORD_PATH` (string) — Path to a folder that will store the CARLA logs. Unset by default.
- `RESUME` — Flag indicating whether the simulation should be resumed from the last route. Unset by default.
As you will see later on, these environment variables are passed to `${LEADERBOARD_ROOT}/leaderboard/leaderboard_evaluator.py`, which serves as the entry point to the simulation. Take a look at `leaderboard_evaluator.py` for more details on how your agent will be executed and evaluated.
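As a rough sketch of how such parameters are consumed (this is our own illustration, not the evaluator's actual code, and the defaults are assumptions), they can be read with `os.environ`:

```python
import os

# Our own illustration of reading the Leaderboard parameters from the
# environment; variable names match the list above, defaults are assumed.
scenarios = os.environ.get("SCENARIOS", "")          # path to the JSON scenario set
routes = os.environ.get("ROUTES", "")                # path to the XML route set
debug = int(os.environ.get("DEBUG_CHALLENGE", "0"))  # 0: silent, 1: show route, >1: full state
resume = os.environ.get("RESUME", "") != ""          # unset by default -> False

if debug == 1:
    print("The simulator will display the reference route")
```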
The definition of a new agent starts by creating a new class that inherits from `rai.autoagents.base_agent.BaseAgent`. Note that this is different from the previous CARLA Leaderboard challenge, where agents inherited directly from `leaderboard.autoagents.autonomous_agent.AutonomousAgent`.

First, define a function called `get_entry_point` that returns the name of your new class. This will be used to automatically instantiate your agent.
```python
from rai.autoagents import base_agent

def get_entry_point():
    return 'MyAgent'

class MyAgent(base_agent.BaseAgent):
    ...
```
Within your agent class, override the `setup` method. This method performs all the initialization and definitions needed by your agent, and is automatically called each time a route is initialized. It can receive an optional argument pointing to a configuration file, which users are expected to parse.
```python
from leaderboard.autoagents.autonomous_agent import Track

...

def setup(self, path_to_conf_file):
    self.track = Track.SENSORS  # At a minimum, this method sets the Leaderboard modality. In this case, SENSORS
```
You will also have to override the `sensors` method, which defines all the sensors required by your agent.
```python
def sensors(self):
    sensors = [
        {'type': 'sensor.camera.rgb', 'id': 'Center',
         'x': 0.7, 'y': 0.0, 'z': 1.60, 'roll': 0.0, 'pitch': 0.0, 'yaw': 0.0,
         'width': 300, 'height': 200, 'fov': 100},
        {'type': 'sensor.lidar.ray_cast', 'id': 'LIDAR',
         'x': 0.7, 'y': -0.4, 'z': 1.60, 'roll': 0.0, 'pitch': 0.0, 'yaw': -45.0},
        {'type': 'sensor.other.radar', 'id': 'RADAR',
         'x': 0.7, 'y': -0.4, 'z': 1.60, 'roll': 0.0, 'pitch': 0.0, 'yaw': -45.0, 'fov': 30},
        {'type': 'sensor.other.gnss', 'id': 'GPS',
         'x': 0.7, 'y': -0.4, 'z': 1.60},
        {'type': 'sensor.other.imu', 'id': 'IMU',
         'x': 0.7, 'y': -0.4, 'z': 1.60, 'roll': 0.0, 'pitch': 0.0, 'yaw': -45.0},
        {'type': 'sensor.opendrive_map', 'id': 'OpenDRIVE', 'reading_frequency': 1},
        {'type': 'sensor.speedometer', 'id': 'Speed'},
    ]
    return sensors
```
Most of the sensor attributes have fixed values, which can be checked in `agent_wrapper.py`. This ensures that all teams compete within a common sensor framework.
Every sensor is represented as a dictionary containing the following attributes:

- `type` — the type of sensor to be added.
- `id` — the label given to the sensor, used to access it later.
- other attributes — sensor dependent, e.g. extrinsics and `fov`.

Users can set both the intrinsic and extrinsic parameters (location and orientation) of each sensor, in coordinates relative to the vehicle. Please note that CARLA uses the Unreal Engine coordinate system: x-front, y-right, z-up.
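To make the convention concrete, here is a small sketch (the helper is our own, not part of the Leaderboard API) interpreting a sensor's extrinsics under the Unreal-style axes:

```python
# Interpret a sensor definition's position under CARLA's Unreal-style
# convention: positive x is forward, positive y is right, positive z is up.
# This helper is illustrative only, not part of the Leaderboard API.
def describe_offset(sensor):
    return {
        'forward': sensor.get('x', 0.0),  # metres ahead of the vehicle origin
        'right': sensor.get('y', 0.0),    # metres to the right (negative = left)
        'up': sensor.get('z', 0.0),       # metres above the vehicle origin
    }

lidar = {'type': 'sensor.lidar.ray_cast', 'id': 'LIDAR',
         'x': 0.7, 'y': -0.4, 'z': 1.60, 'yaw': -45.0}
offset = describe_offset(lidar)  # 0.7 m forward, 0.4 m to the left, 1.6 m up
```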
The available sensors include:

- `sensor.speedometer` — a pseudosensor that provides an approximation of your linear velocity.

Trying to set any other sensor, or misspelling these names, will make the setup fail.
You can use any of these sensors to configure your sensor stack. However, in order to keep a moderate computational load, we have set the following limits on the number of sensors of each type that can be added to an agent:

- `sensor.camera.rgb`: 4
- `sensor.lidar.ray_cast`: 1
- `sensor.other.radar`: 2
- `sensor.other.gnss`: 1
- `sensor.other.imu`: 1
- `sensor.opendrive_map`: 1
- `sensor.speedometer`: 1

Trying to set too many units of a sensor will make the setup fail.
There are also spatial restrictions that limit the placement of your sensors within the volume of your vehicle. If a sensor is located more than 3 meters away from its parent along any axis (e.g. `[3.1, 0.0, 0.0]`), the setup will fail.
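Both restrictions can be checked offline before submitting. The validator below is our own sketch; the authoritative checks live in the Leaderboard code (see `agent_wrapper.py`):

```python
# Our own offline sketch of the Leaderboard's sensor restrictions:
# the per-type count limits and the 3 m placement limit on each axis.
SENSOR_LIMITS = {
    'sensor.camera.rgb': 4,
    'sensor.lidar.ray_cast': 1,
    'sensor.other.radar': 2,
    'sensor.other.gnss': 1,
    'sensor.other.imu': 1,
    'sensor.opendrive_map': 1,
    'sensor.speedometer': 1,
}
MAX_OFFSET_M = 3.0

def validate_sensors(sensors):
    counts = {}
    for s in sensors:
        counts[s['type']] = counts.get(s['type'], 0) + 1
        if counts[s['type']] > SENSOR_LIMITS.get(s['type'], 0):
            return False  # unknown sensor type or too many units of it
        for axis in ('x', 'y', 'z'):
            if abs(s.get(axis, 0.0)) > MAX_OFFSET_M:
                return False  # sensor placed outside the allowed volume
    return True
```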
Next, override the `run_step` method. This method is called once per time step to produce a new action in the form of a `carla.VehicleControl` object. Make sure this function returns the control object, which will be used to update your agent.
```python
def run_step(self, input_data, timestamp):
    control = self._do_something_smart(input_data, timestamp)
    return control
```
- `input_data`: a dictionary containing sensor data for the requested sensors. The data has been preprocessed in `sensor_interface.py` and is provided as numpy arrays. This dictionary is indexed by the ids defined in the `sensors` method.
- `timestamp`: the timestamp of the current simulation instant.
Remember that you also have access to the route that the ego agent should follow to reach its destination. Use the `self._global_plan` member to access the geolocation route and `self._global_plan_world_coord` for its world-location counterpart.
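The exact layout of `input_data` is defined in `sensor_interface.py`; the snippet below only illustrates the indexing pattern with made-up values, and the `(frame, payload)` tuple layout is our assumption, so verify it against `sensor_interface.py`:

```python
# Illustrative only: entries are keyed by the ids returned from sensors(),
# and we assume each value is a (frame_number, payload) tuple with made-up data.
input_data = {
    'Speed': (512, {'speed': 5.2}),     # speedometer payload (assumed shape)
    'GPS': (512, (48.99, 8.00, 35.0)),  # latitude, longitude, altitude (assumed shape)
}

frame, payload = input_data['Speed']
current_speed = payload['speed']  # m/s, for use inside run_step
```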
At the end of each route, the `destroy` method is called. Your agent can override it for cases where you need a cleanup; for example, you can use this method to free any unwanted memory held by a network.
```python
def destroy(self):
    pass
```
To run the CARLA server locally:

```shell
cd <your carla installation path>
./CarlaUE4.sh
```
For detailed instructions on running the Carla server inside Docker, refer to the Carla documentation.
A quick command to get started is shown below:

```shell
docker run --privileged --gpus all --net=host -e DISPLAY=${DISPLAY} -it -e SDL_VIDEODRIVER=x11 -v /tmp/.X11-unix:/tmp/.X11-unix carlasim/carla:0.9.10.1 /bin/bash
# launch carla inside the container
./CarlaUE4.sh
```
To run the evaluation locally without creating a Docker image:

```shell
bash rai/scripts/run_evaluation.sh
```
To run with docker:
team_code
directory is outside the leaderboard. rai/scripts/make_docker.sh
to include agent-specific files you want to copy to the Docker container, such as requirements.txt
, model weights, etc. rai/scripts/Dockerfile.master
to include agent-specific environment variables and install any necessary Python/Conda packages.bash rai/scripts/make_docker.sh -t <image:tag>
docker run --ipc=host --gpus all --net=host -e DISPLAY=$DISPLAY -it -e SDL_VIDEODRIVER=x11 -v /tmp/.X11-unix:/tmp/.X11-unix <image:tag> /bin/bash
bash rai/scripts/run_evaluation.sh
Manually interrupting the Leaderboard will preemptively stop the simulation of the current route and automatically move on to the next one.
CARLA Leaderboard-1.0 provides a set of predefined routes to serve as a starting point. You can use these routes for training and for verifying the performance of your agent. Routes can be found in the folder `${LEADERBOARD_ROOT}/data`:

- `routes_training.xml`: 50 routes intended to be used as training data (112.8 km).
- `routes_testing.xml`: 26 routes intended to be used as verification data (58.9 km).
ROS based agents are not currently supported by the RAI Leaderboard.