ReSim SDK
Introduction
ReSim's standard workflow runs your sim (i.e. build) inside our platform so we can manage execution, collect outputs, and generate metrics. However, if you can't or don't want to dockerize your sim, you can still generate metrics for analysis. ReSim's SDK allows you to run tests outside of ReSim, on your own infrastructure, export the results, and generate metrics for these tests.
There are a few different reasons you may want to run tests on your own infrastructure, outside of ReSim:
- You are just getting started, and want to explore our metrics framework
- You want to run tests on actual hardware, and only use ReSim as a visualization/analysis tool
- Your system is difficult to dockerize, perhaps due to special hardware requirements
This guide walks you through how to use ReSim's Python SDK. You can run your tests wherever you like, emit structured data from each test, then upload the results to ReSim. The tests will appear in the ReSim app exactly like any other batch.
Metrics 2.0 required
This feature uses the Metrics 2.0 framework, so we recommend familiarizing yourself with Metrics 2.0 first. You need at least a valid metrics config file (config.resim.yml) with topics defined before running batches this way. A small, functional example is provided in the GitHub repo below to get you started.
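For orientation, a minimal config.resim.yml with a single topic might look like the sketch below. The topic name matches the "position" topic used in the quickstart, but the field types shown are assumptions; treat the config in the example repo as the authoritative starting point.

```yaml
# Illustrative sketch of a minimal config.resim.yml.
# The "float" type names are assumptions -- see the example repo
# for a known-good config.
topics:
  position:
    schema:
      x: float
      y: float
```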
Installation
Running batches outside of ReSim is supported via our Python SDK:
pip install resim-sdk
# or, if you're using uv
uv add resim-sdk
Note: The resim-sdk package requires Python >= 3.10.
Quickstart
The following repository contains a working example that creates a small batch with a couple of tests and generates some simple metrics:
https://github.com/resim-ai/external-batches-example
API Documentation
Basic usage involves creating a Batch with Tests inside it, and emitting relevant data under each test.
from resim.sdk.batch import Batch
from resim.sdk.test import Test
# `DeviceCodeClient` will trigger an interactive prompt to authenticate.
from resim.sdk.auth.device_code_client import DeviceCodeClient
client = DeviceCodeClient()
# OR, if you want non-interactive authentication, like for a CI system, use `UsernamePasswordClient`
# instead, and request credentials from ReSim. The username/password below are NOT your personal
# credentials.
# from resim.sdk.auth.username_password_client import UsernamePasswordClient
# client = UsernamePasswordClient(username="*****", password="*****")
# Create a batch, and run the "my metrics" metrics set from your metrics config file
with Batch(
    client=client,
    project_name="my-project",
    branch="metrics-test-branch",
    metrics_set_name="my metrics",
    metrics_config_path="resim/config.resim.yml",  # See the example repo for a sample config file
) as batch:
    print(f"Created batch {batch.friendly_name} with id {batch.id}")

    # Create a test, and emit some sample data for the "position" topic
    with Test(client, batch, "test 1") as test:
        for i in range(10):
            test.emit("position", {"x": float(i), "y": float(i)}, i)

# When the Batch context exits, the emitted data is uploaded to ReSim so metrics processing can begin
Branches and versions
The Batch constructor accepts a branch (required) and an optional version string. Common values for version include a commit SHA, a semver tag, or a build number.
Using these fields can enable longitudinal analysis, such as:
- How has metric X changed in my branch versus main?
- How has metric Y changed between release versions 2.0 and 3.0?
branch is required because ReSim performs a backwards-compatibility check when syncing a new config, ensuring you don't make destructive changes to your topics. For example, changing a string field to an integer could break queries against historical data. For this reason, we recommend experimenting on a branch other than main when getting started.
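In CI, a natural pattern is to derive both fields from the environment. The sketch below assumes GitHub Actions (GITHUB_REF_NAME and GITHUB_SHA are its standard variables); the helper name and the fallback branch are our own, not part of the SDK:

```python
import os

# Hypothetical helper (not part of the ReSim SDK): derive the `branch` and
# `version` Batch arguments from GitHub Actions environment variables,
# with a fallback for local runs.
def batch_kwargs_from_env(env=None):
    if env is None:
        env = os.environ
    return {
        # GITHUB_REF_NAME is the branch (or tag) name in GitHub Actions.
        "branch": env.get("GITHUB_REF_NAME", "metrics-test-branch"),
        # GITHUB_SHA is the commit being built; None for local runs.
        "version": env.get("GITHUB_SHA"),
    }
```

You could then construct the batch with Batch(client=client, project_name="my-project", **batch_kwargs_from_env()).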
Batch parameters
| Parameter | Type | Description |
|---|---|---|
| client | AuthenticatedClient | Required. Authenticated API client. |
| project_name | str | Name of the project to associate this batch with. Provide this or project_id, not both. |
| project_id | str | UUID of the project to associate this batch with. Provide this or project_name, not both. |
| branch | str | Required. Branch name to associate this batch with. |
| metrics_set_name | str \| None | Name of the metrics set, from your metrics config.resim.yml file, to run against emissions. |
| name | str \| None | Human-readable name for the batch. If not provided, one will be generated. |
| version | str \| None | A version string (e.g. a git SHA or semver number) to tag this batch with. Useful for tracking longitudinal metrics. |
| metrics_config_path | str \| None | Path to your metrics config file. If provided, the config is synced to ReSim before the batch is created. |
| system | str \| None | Attach this batch to the given system. |
| test_suite | str \| None | Attach this batch to the given test suite. Required if test_suite_revision is provided. |
| test_suite_revision | str \| None | Use this specific revision of the test suite. If not given, the latest revision is used. |
Test parameters
| Parameter | Type | Description |
|---|---|---|
| client | AuthenticatedClient | Required. Authenticated API client. |
| batch | Batch | Required. The parent Batch instance. |
| name | str | Required. Name of the test. |
Note: Each test is tied 1-to-1 with an Experience. The name of a test comes from the experience. This means when you do Test(name="hello world"), the SDK will get-or-create an experience with that name. This is mainly an implementation detail, but useful to know if you want to associate any tests with pre-existing experiences.
Emitting data
test.emit() writes a data point to the emissions log for that test. Topic names and schemas must match your metrics config file. You do not need to manage the emissions file yourself; the SDK creates and uploads it to ReSim automatically. The test object supports all of the emit methods, which the Metrics 2.0 guide covers in greater depth.
test.emit(topic_name, data, timestamp)
test.emit_series("odom_linear_velocity", {
"x": [1.0, 1.1, 1.1, 1.2],
"y": [2.0, 1.9, 1.8, 1.7],
"z": [3.0, 3.0, 3.0, 3.0]
}, timestamps=[0, 500000000, 1000000000, 1500000000])
test.emit_event({"name": "Goal 1 Reached", ...})
For more details on emitting data, see emitting your data.
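The emit calls above take integer timestamps, and the emit_series example passes what appear to be nanosecond offsets (the image example later in this guide uses time.time_ns()). If your sim records times as float seconds, a small conversion helper (our own, not part of the SDK) keeps the units consistent:

```python
# Hypothetical helper (not part of the ReSim SDK): convert float seconds into
# the integer nanosecond timestamps used by the emit examples above.
def seconds_to_ns(seconds: float) -> int:
    return int(round(seconds * 1_000_000_000))

# e.g. a series sampled at 0.0s, 0.5s, 1.0s, 1.5s
timestamps = [seconds_to_ns(t) for t in (0.0, 0.5, 1.0, 1.5)]
```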
Attaching files
Use test.attach_log() to upload a file (such as an image, GIF, or other artifact) and associate it with a test result:
test.attach_log("run_001.mcap")
The file is uploaded to ReSim's storage and registered under the test. You can attach any file type — images, logs, MCAPs, videos, etc.
Image and Video metrics
For image and video metrics, attaching the file is required. Your metric references the image by filename, so the file must be uploaded for the image metric to render:
test.attach_log("screen_capture_123.png")
test.emit("my-image-topic", {"img": "screen_capture_123.png"}, time.time_ns())
The image's filename in the emission must match the filename passed to attach_log. The metrics engine uses this reference to retrieve and render the image.
Here is how the metrics config would look for the above image metric:
topics:
  images:
    schema:
      img: image

metrics:
  images:
    type: test
    query_string: select img from images
    template_type: system
    template: image
Viewing results
Once the Batch context exits and all metrics processing is complete, navigate to your project in the ReSim app to view the batch, its tests, and generated metrics, the same as any batch run inside ReSim.
