Creating an Agent

1. System Setup

1.1 Get CARLA 0.9.10.1

In the following commands, change the ${CARLA_ROOT} variable to correspond to your CARLA root folder.

  • In order to use the CARLA Python API you will need to install some dependencies in your favorite environment. As a reference, for conda, start by creating a new environment:
conda create -n py37 python=3.7
conda activate py37
cd ${CARLA_ROOT}  # Change ${CARLA_ROOT} for your CARLA root folder

pip3 install -r PythonAPI/carla/requirements.txt
  • Feel free to download additional maps to augment the training data available. If you downloaded additional maps, follow the instructions provided here to install the maps.

Make sure to download 0.9.10.1. This is the exact version used by the online servers.

1.2 Clone the Leaderboards and Scenario_Runner

    If you wish to host your agent code online, fork the rai-leaderboard so you can modify it easily for your agent. If you prefer to clone the repositories (leaderboard, scenario_runner, rai-leaderboard) directly, proceed with the steps below; otherwise, skip ahead to the step that adds them as git submodules.
  • Clone the leaderboard-1.0 branch from the official CARLA repository
  • git clone -b leaderboard-1.0 --single-branch https://github.com/carla-simulator/leaderboard.git
      

In the following commands, change the ${LEADERBOARD_ROOT} variable to correspond to your Leaderboard root folder.

  • Install the required Python dependencies.
  • cd ${LEADERBOARD_ROOT} # Change ${LEADERBOARD_ROOT} for your Leaderboard root folder 
        pip3 install -r requirements.txt
        
  • Similarly, clone the leaderboard-1.0 branch of the CARLA ScenarioRunner:
  • git clone -b leaderboard-1.0 --single-branch https://github.com/carla-simulator/scenario_runner.git
        

In the following commands, change the ${SCENARIO_RUNNER_ROOT} to correspond to your Scenario_Runner root folder.

  • Install the required Python dependencies using the same Python environments.
  • cd ${SCENARIO_RUNNER_ROOT} # Change ${SCENARIO_RUNNER_ROOT} for your Scenario_Runner root folder
        pip3 install -r requirements.txt
        
  • Next, clone the rai-leaderboard repository (or your fork of it) and name it rai using the following command:
  • git clone -b main https://github.com/cognitive-robots/rai-leaderboard.git rai # or use the URL of your fork
      
  • If you are adding these repositories to an existing Git repository and wish to include them as submodules, use the following commands:
  • git submodule add -b leaderboard-1.0 https://github.com/carla-simulator/leaderboard.git
    git submodule add -b leaderboard-1.0 https://github.com/carla-simulator/scenario_runner.git
    git submodule add -b main https://github.com/cognitive-robots/rai-leaderboard.git rai # or use the URL of your fork
        

In the command for the rai-leaderboard repository, we are naming the submodule rai. Ensure you use the full command as shown.

1.3 Setup environment variables

We need to make sure that the different modules can find each other.

  • Update environment variables in rai/scripts/env_var.sh
  • Optional: You may instead add the variables directly to your ~/.bashrc profile. Remember to run source ~/.bashrc for the changes to take effect.

1.4 Description of the ENV variables

When running the test, we set a series of parameters as env variables. Here are the descriptions of some of them:

  • SCENARIOS (JSON) — The set of scenarios that will be tested in the simulation. A scenario is defined as a traffic situation. Agents will have to overcome these scenarios in order to pass the test. Participants have access to a set of traffic scenarios that work on the publicly available towns. There are 10 types of scenarios that are instantiated using different parameters. Here is a list of the available scenarios.

  • ROUTES (XML) — The set of routes that will be used for the simulation. Every route has a starting point (first waypoint), and an ending point (last waypoint). Additionally, they can contain a weather profile to set specific weather conditions. A XML contains many routes, each one with an ID. Users can modify, add, and remove routes for training and validation purposes. The Leaderboard ships with a set of routes for debug, training, and validation. The routes used for the online evaluation are secret.

  • TEAM_AGENT (Python script) — Path to the Python script that launches the agent. This has to be a class inherited from leaderboard.autoagents.autonomous_agent.AutonomousAgent. The steps to create an agent are explained in the next step.
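
The ROUTES file described above is plain XML, so it can be inspected with the standard library. A sketch follows; the XML fragment is illustrative, and the real schema is defined by the files in ${LEADERBOARD_ROOT}/data:

```python
import xml.etree.ElementTree as ET

# Illustrative fragment in the spirit of the Leaderboard route files.
ROUTES_XML = """
<routes>
  <route id="0" town="Town01">
    <waypoint x="100.0" y="55.0" z="0.0"/>
    <waypoint x="150.0" y="55.0" z="0.0"/>
  </route>
  <route id="1" town="Town02">
    <waypoint x="10.0" y="20.0" z="0.0"/>
  </route>
</routes>
"""

def list_routes(xml_text):
    """Return (route id, town, number of waypoints) for every route."""
    root = ET.fromstring(xml_text)
    return [(r.get("id"), r.get("town"), len(r.findall("waypoint")))
            for r in root.findall("route")]

print(list_routes(ROUTES_XML))  # [('0', 'Town01', 2), ('1', 'Town02', 1)]
```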

Other relevant parameters are described below.

  • TEAM_CONFIG (defined by the user) — Path to an arbitrary configuration file read by the provided agent. You are responsible to define and parse this file within your agent class.

  • DEBUG_CHALLENGE (int) — Flag that indicates whether debug information should be shown during the simulation. By default this variable is unset (0), so no debug information is displayed. When set to 1, the simulator displays the reference route to be followed. When set to anything greater than 1, the engine prints the complete state of the simulation for debugging purposes.

  • CHECKPOINT_ENDPOINT (JSON) — The name of the file where the Leaderboard metrics will be recorded.

  • RECORD_PATH (string) — Path to a folder that will store the CARLA logs. This is unset by default.

  • RESUME — Flag to indicate whether the simulation should be resumed from the last route. This is unset by default.
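
These parameters are ordinary environment variables, so a launch script can collect them in one place. A hedged sketch — the variable names come from the list above, while the defaults and the resume handling are assumptions:

```python
import os

def read_leaderboard_config(env=None):
    """Gather the Leaderboard parameters described above from the environment."""
    env = os.environ if env is None else env
    return {
        "scenarios": env.get("SCENARIOS"),              # JSON scenario set
        "routes": env.get("ROUTES"),                    # XML route file
        "agent": env.get("TEAM_AGENT"),                 # path to the agent script
        "agent_config": env.get("TEAM_CONFIG", ""),     # user-defined, optional
        "debug": int(env.get("DEBUG_CHALLENGE", "0")),  # unset (0) by default
        "checkpoint": env.get("CHECKPOINT_ENDPOINT"),   # metrics output file
        "record": env.get("RECORD_PATH", ""),           # unset by default
        # Treat any non-empty value as "set"; the real scripts may differ.
        "resume": env.get("RESUME", "") != "",
    }

cfg = read_leaderboard_config({"ROUTES": "data/routes_training.xml",
                               "DEBUG_CHALLENGE": "1"})
```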

These environment variables are passed to ${LEADERBOARD_ROOT}/leaderboard/leaderboard_evaluator.py, which serves as the entry point to the simulation. Take a look at leaderboard_evaluator.py for details on how your agent is executed and evaluated.

2. Creating your own Autonomous Agent

The definition of a new agent starts by creating a new class that inherits from rai.autoagents.base_agent.BaseAgent. Note that this differs from the previous CARLA leaderboard challenge, where agents inherit directly from leaderboard.autoagents.autonomous_agent.AutonomousAgent.

2.1 Create get_entry_point

First, define a function called get_entry_point that returns the name of your new class. This will be used to automatically instantiate your agent.

from rai.autoagents import base_agent

def get_entry_point():
    return 'MyAgent'

class MyAgent(base_agent.BaseAgent):
...
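
To illustrate why the function matters, here is a self-contained sketch of how an evaluator could turn get_entry_point into an instance. The real loading logic lives in leaderboard_evaluator.py; the module below is built by hand purely for illustration:

```python
import types

class MyAgent:           # would normally live in your TEAM_AGENT file
    pass

def get_entry_point():
    return 'MyAgent'

# Stand-in for the module the evaluator imports from TEAM_AGENT.
agent_module = types.ModuleType("my_agent")
agent_module.MyAgent = MyAgent
agent_module.get_entry_point = get_entry_point

# The evaluator asks the module for the class name, then instantiates it.
agent_class = getattr(agent_module, agent_module.get_entry_point())
agent = agent_class()
print(type(agent).__name__)  # MyAgent
```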

2.2 Override the setup method

Within your agent class, override the setup method. This method performs all the initialization and definitions needed by your agent. It is automatically called each time a route is initialized. It can receive an optional argument pointing to a configuration file, which users are expected to parse.

from leaderboard.autoagents.autonomous_agent import Track
...
def setup(self, path_to_conf_file):
    self.track = Track.SENSORS # At a minimum, this method sets the Leaderboard modality. In this case, SENSORS
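
Parsing TEAM_CONFIG is up to you; one hedged possibility is a small JSON file. The keys and defaults below are made up for illustration:

```python
import json

def parse_conf(path_to_conf_file):
    """Read an optional JSON configuration file; keys here are illustrative."""
    conf = {"model_path": None, "target_speed": 6.0}  # assumed defaults
    if path_to_conf_file:
        with open(path_to_conf_file) as f:
            conf.update(json.load(f))
    return conf
```

Inside setup you could then call self.config = parse_conf(path_to_conf_file).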

2.3 Override the sensors method

You will also have to override the sensors method, which defines all the sensors required by your agent.

def sensors(self):
    sensors = [
        {'type': 'sensor.camera.rgb', 'id': 'Center',
         'x': 0.7, 'y': 0.0, 'z': 1.60, 'roll': 0.0, 'pitch': 0.0, 'yaw': 0.0, 'width': 300, 'height': 200, 'fov': 100},
        {'type': 'sensor.lidar.ray_cast', 'id': 'LIDAR',
         'x': 0.7, 'y': -0.4, 'z': 1.60, 'roll': 0.0, 'pitch': 0.0, 'yaw': -45.0},
        {'type': 'sensor.other.radar', 'id': 'RADAR',
         'x': 0.7, 'y': -0.4, 'z': 1.60, 'roll': 0.0, 'pitch': 0.0, 'yaw': -45.0, 'fov': 30},
        {'type': 'sensor.other.gnss', 'id': 'GPS',
         'x': 0.7, 'y': -0.4, 'z': 1.60},
        {'type': 'sensor.other.imu', 'id': 'IMU',
         'x': 0.7, 'y': -0.4, 'z': 1.60, 'roll': 0.0, 'pitch': 0.0, 'yaw': -45.0},
        {'type': 'sensor.opendrive_map', 'id': 'OpenDRIVE', 'reading_frequency': 1},
        {'type': 'sensor.speedometer', 'id': 'Speed'},
    ]
    return sensors

Most of the sensor attributes have fixed values. These can be checked in agent_wrapper.py. This is done so that all the teams compete within a common sensor framework.

Every sensor is represented as a dictionary, containing the following attributes:

  • type: type of the sensor to be added.
  • id: the label that will be given to the sensor to be accessed later.
  • other attributes: these are sensor dependent, e.g.: extrinsics and fov.

Users can set both the intrinsic and extrinsic parameters (location and orientation) of each sensor, in relative coordinates with respect to the vehicle. Note that CARLA uses the Unreal Engine coordinate system: x-front, y-right, z-up.

The available sensors are: sensor.camera.rgb, sensor.lidar.ray_cast, sensor.other.radar, sensor.other.gnss, sensor.other.imu, sensor.opendrive_map, and sensor.speedometer.

Trying to set any other sensor, or misspelling these, will make the setup fail.

You can use any of these sensors to configure your sensor stack. However, in order to keep a moderate computational load we have set the following limits to the number of sensors that can be added to an agent:

  • sensor.camera.rgb: 4
  • sensor.lidar.ray_cast: 1
  • sensor.other.radar: 2
  • sensor.other.gnss: 1
  • sensor.other.imu: 1
  • sensor.opendrive_map: 1
  • sensor.speedometer: 1

Trying to set too many units of a sensor will make the setup fail.

There are also spatial restrictions that limit the placement of your sensors within the volume of your vehicle. If a sensor is located more than 3 meters away from its parent in any axis (e.g. [3.1,0.0,0.0]), the setup will fail.
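
The count and placement limits above can be checked locally before a run. A rough sketch, with the limits copied from this section (the check itself is not part of the official code):

```python
from collections import Counter

SENSOR_LIMITS = {
    'sensor.camera.rgb': 4, 'sensor.lidar.ray_cast': 1, 'sensor.other.radar': 2,
    'sensor.other.gnss': 1, 'sensor.other.imu': 1,
    'sensor.opendrive_map': 1, 'sensor.speedometer': 1,
}

def validate_sensors(sensors, max_offset=3.0):
    """Raise ValueError if sensor counts or placement exceed the stated limits."""
    counts = Counter(s['type'] for s in sensors)
    for sensor_type, n in counts.items():
        if sensor_type not in SENSOR_LIMITS:
            raise ValueError("unknown sensor type: %s" % sensor_type)
        if n > SENSOR_LIMITS[sensor_type]:
            raise ValueError("too many %s: %d" % (sensor_type, n))
    for s in sensors:
        for axis in ('x', 'y', 'z'):
            if abs(s.get(axis, 0.0)) > max_offset:
                raise ValueError("sensor %s too far from vehicle on %s"
                                 % (s['id'], axis))

# A valid single-sensor stack passes silently.
validate_sensors([{'type': 'sensor.other.gnss', 'id': 'GPS',
                   'x': 0.7, 'y': -0.4, 'z': 1.6}])
```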

2.4 Override the run_step method

This method will be called once per time step to produce a new action in the form of a carla.VehicleControl object. Make sure this function returns the control object, which will be used to update your agent.

def run_step(self, input_data, timestamp):
    control = self._do_something_smart(input_data, timestamp)
    return control
  • input_data: A dictionary containing sensor data for the requested sensors. The data has been preprocessed in sensor_interface.py and is provided as numpy arrays. This dictionary is indexed by the ids defined in the sensors method.

  • timestamp: A timestamp of the current simulation instant.

Remember that you also have access to the route that the ego agent should travel to achieve its destination. Use the self._global_plan member to access the geolocation route and self._global_plan_world_coord for its world location counterpart.
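
As a sketch, here is a naive longitudinal controller over the 'Speed' sensor from the earlier example. The gain and target speed are arbitrary, the (frame, data) layout of input_data follows sensor_interface.py but should be treated as an assumption, and carla.VehicleControl is stood in by a plain class so the snippet is self-contained:

```python
class VehicleControl:
    """Stand-in for carla.VehicleControl so the sketch is self-contained."""
    def __init__(self):
        self.throttle = 0.0
        self.brake = 0.0
        self.steer = 0.0

def run_step_sketch(input_data, target_speed=6.0, gain=0.5):
    """Naive longitudinal control: throttle below target speed, brake above."""
    # Each entry of input_data is a (frame, data) pair; the speedometer's
    # data carries the current speed in m/s.
    _, speed_data = input_data['Speed']
    error = target_speed - speed_data['speed']
    control = VehicleControl()
    if error > 0:
        control.throttle = min(1.0, gain * error)
    else:
        control.brake = min(1.0, -gain * error)
    return control

control = run_step_sketch({'Speed': (0, {'speed': 4.0})})
```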

2.5 Override the destroy method

At the end of each route, the destroy method is called. Your agent can override it whenever cleanup is needed; for example, you can use this method to erase any unwanted memory of a network.

def destroy(self):
    pass
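
A hedged example of a destroy that actually cleans something up; self.model is a hypothetical attribute standing in for a loaded network:

```python
import gc

class AgentSketch:
    def __init__(self):
        self.model = object()   # stands in for a loaded network

    def destroy(self):
        # Drop references to large objects so memory is reclaimed between routes.
        self.model = None
        gc.collect()

agent = AgentSketch()
agent.destroy()
```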
  

2.6 Running your agent

Carla Server

  • Start the Carla server before running the agent.
    • Without Docker:
    • cd <your carla installation path> 
      ./CarlaUE4.sh
    • With Docker
    • For detailed instructions on running the Carla server inside Docker, refer to the Carla documentation.

      A quick command to get started is shown below:

      docker run --privileged --gpus all --net=host -e DISPLAY=${DISPLAY} -it -e SDL_VIDEODRIVER=x11 -v /tmp/.X11-unix:/tmp/.X11-unix carlasim/carla:0.9.10.1 /bin/bash 

      # launch carla inside the container
      ./CarlaUE4.sh

Agent

  • Without Docker:
  • To run the evaluation locally without creating a Docker image:

    bash rai/scripts/run_evaluation.sh

  • With Docker:
  • To run with Docker:

    1. First, create a Docker image for your agent. Ensure the team_code directory is outside the leaderboard.
    2. Update rai/scripts/make_docker.sh to include agent-specific files you want to copy to the Docker container, such as requirements.txt, model weights, etc.
    3. Update the USER COMMANDS section in rai/scripts/Dockerfile.master to include agent-specific environment variables and install any necessary Python/Conda packages.
    4. Build the Docker image by running:

       bash rai/scripts/make_docker.sh -t <image:tag>

    5. Once the image is built, run the Docker container with:

       docker run --ipc=host --gpus all --net=host -e DISPLAY=$DISPLAY -it -e SDL_VIDEODRIVER=x11 -v /tmp/.X11-unix:/tmp/.X11-unix <image:tag> /bin/bash

    6. Finally, run the evaluation script within the Docker container:

       bash rai/scripts/run_evaluation.sh

Manually interrupting the Leaderboard will preemptively stop the simulation of the current route and automatically move on to the next one.

3. Training and testing your agent

CARLA Leaderboard-1.0 provides a set of predefined routes to serve as a starting point. You can use these routes for training and verifying the performance of your agent. Routes can be found in the folder ${LEADERBOARD_ROOT}/data:

routes_training.xml: 50 routes intended to be used as training data (112.8 Km).

routes_testing.xml: 26 routes intended to be used as verification data (58.9 Km).
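
If you edit these files for training, a quick sanity check on a route's length is to sum the straight-line distances between consecutive waypoints. This is only an approximation (the figures above are driven distances), and the sample route here is illustrative:

```python
import math
import xml.etree.ElementTree as ET

def route_length_m(route_elem):
    """Sum straight-line distances between consecutive <waypoint> elements."""
    pts = [(float(w.get('x')), float(w.get('y')), float(w.get('z', '0')))
           for w in route_elem.findall('waypoint')]
    return sum(math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))
               for p, q in zip(pts, pts[1:]))

route = ET.fromstring(
    '<route id="0" town="Town01">'
    '<waypoint x="0" y="0" z="0"/><waypoint x="30" y="40" z="0"/>'
    '</route>')
print(route_length_m(route))  # 50.0
```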

3.1 Baselines

We have created a starter kit based on the Neural Attention Fields for End-to-End Autonomous Driving (NEAT) approach. Also look at our paper to see how these baselines perform with respect to RAI.

4. ROS based agents

ROS based agents are not currently supported by the RAI Leaderboard.