Running Tests Outside of ReSim
Introduction
ReSim's standard workflow runs your sim (i.e. your build) inside our platform so we can manage execution, collect outputs, and generate metrics. However, if you can't, or don't want to, dockerize your sim, you can still generate metrics for analysis: ReSim allows you to run tests outside of ReSim, on your own infrastructure, export the results, and generate metrics for those tests.
This guide walks you through how to do that using the ReSim Python SDK. You run your tests wherever you like, emit structured data during each test, and the SDK submits the results to ReSim. The batch and its tests will appear in the ReSim app exactly like any other batch.
Metrics 2.0 required
This feature uses the Metrics 2.0 framework, so we recommend familiarizing yourself with Metrics 2.0 first. You need at least a valid metrics config file (`config.resim.yml`) with topics defined before running batches this way. A small, functional example is shown below to get you started.
Installation
Install the ReSim SDK via pip:
```shell
pip install resim-sdk
```
Quickstart
The following is a small, working example you can copy to get started.
The SDK provides Batch and Test context managers. Open a Batch, then create individual Test instances within it. Inside each Test, emit structured data points.
When a Test block exits, emitted data is uploaded and metrics processing begins for that test. When the Batch block exits, the batch is closed so batch-level metrics can be computed.
For a simple "hello world" example, follow the steps below:
- Download this basic metrics config file to `config.resim.yml`: Download metrics config
- Copy the example code below to `run_tests.py`
- Replace the noted constants at the top of the file
- Run the script: `python run_tests.py`
- You should now see a new result published at https://app.resim.ai. You may need to wait a couple of minutes for the metrics to process and become available.
```python
import time

from resim.sdk.auth.username_password_client import UsernamePasswordClient
from resim.sdk.batch import Batch
from resim.sdk.test import Test

# Replace with your project name. Visit https://app.resim.ai to find your project name
PROJECT_NAME = "<your-project-name>"

# Replace with the username and password provided by ReSim
USERNAME = "<your-username>"
PASSWORD = "<your-password>"

client = UsernamePasswordClient(username=USERNAME, password=PASSWORD)

with Batch(
    client=client,
    project_name=PROJECT_NAME,
    branch="metrics-test-branch",
    metrics_set_name="my metrics",
    metrics_config_path="config.resim.yml",
) as batch:
    print(f"Created batch {batch.friendly_name}. id {batch.id}")

    # Create 2 tests, and emit some data for the position topic
    with Test(client, batch, "test 1") as test:
        for i in range(10):
            test.emit("position", {"x": float(i), "y": float(i)}, time.time_ns())

    with Test(client, batch, "test 2") as test:
        for i in range(10):
            test.emit("position", {"x": float(i), "y": float(i * 2)}, time.time_ns())

print(
    f"Batch done. After a few minutes, you can view your metrics here: https://app.resim.ai/projects/{batch.project_id}/batches/{batch.id}"
)
```

Branches and versions
The Batch constructor accepts a branch (required) and an optional version string. Common values for version include a commit SHA, a semver tag, or a build number.
Using these fields can enable longitudinal analysis, such as:
- How has metric X changed in my branch versus main?
- How has metric Y changed between release versions 2.0 and 3.0?
`branch` is required because ReSim performs a backwards-compatibility check when syncing a new config, ensuring you don't make destructive changes to your topics. For example, changing a string field to an integer could break queries against historical data.
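A small helper like the following can be used to fill in `version` from a CI environment. This is a sketch, not part of the SDK, and the environment variable names `RELEASE_TAG` and `GIT_COMMIT` are illustrative, not ReSim conventions:

```python
import os


def resolve_version(env):
    """Pick a version string for Batch(version=...).

    Prefers an explicit release tag, falls back to a commit SHA, and
    returns None (leaving the batch untagged) if neither is set.
    """
    return env.get("RELEASE_TAG") or env.get("GIT_COMMIT")


# Usage sketch:
#   Batch(client=client, project_name=PROJECT_NAME, branch="main",
#         version=resolve_version(dict(os.environ)), ...)
```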
Batch parameters
| Parameter | Type | Description |
|---|---|---|
| `client` | `AuthenticatedClient` | Required. Authenticated API client. |
| `project_id` | `str` | Required. UUID of the project to associate this batch with. |
| `branch` | `str` | Required. Branch name to associate this batch with. |
| `name` | `str \| None` | Human-readable name for the batch. If not provided, one will be generated. |
| `version` | `str \| None` | A version string (e.g. a git SHA, or semver number) to tag this batch with. Useful for tracking longitudinal metrics. |
| `metrics_set_name` | `str \| None` | Name of the metrics set, from your metrics config file (`config.resim.yml`), to run against emissions. |
| `metrics_config_path` | `str \| None` | Path to your metrics config file. If provided, the config is synced to ReSim before the batch is created. |
Test parameters
| Parameter | Type | Description |
|---|---|---|
| `client` | `AuthenticatedClient` | Required. Authenticated API client. |
| `batch` | `Batch` | Required. The parent `Batch` instance. |
| `name` | `str` | Required. Name of the test. |
Emitting data
`test.emit()` writes a data point to the emissions log for that test. Topic names and schemas must match your metrics config file. You do not need to manage the emissions file yourself: the SDK creates it and uploads it to ReSim automatically. The test object supports all of the emit methods; the Metrics 2.0 guide covers these in greater depth.
```python
test.emit(topic_name, data, timestamp)

test.emit_series("odom_linear_velocity", {
    "x": [1.0, 1.1, 1.1, 1.2],
    "y": [2.0, 1.9, 1.8, 1.7],
    "z": [3.0, 3.0, 3.0, 3.0]
}, timestamps=[0, 500000000, 1000000000, 1500000000])

test.emit_event({...})
```
For more details on emitting data, see Emitting your data.
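The timestamps in these examples are integer nanoseconds (the quickstart uses `time.time_ns()`). If your sim records time as floating-point seconds, a small conversion helper keeps emissions consistent. This is a sketch, not part of the SDK:

```python
def seconds_to_ns(seconds, epoch_ns=0):
    """Sketch helper (not part of the ReSim SDK): convert floating-point
    seconds into the integer nanosecond timestamps used in the emit
    examples, optionally offset from an epoch captured at sim start."""
    return epoch_ns + int(round(seconds * 1_000_000_000))


# e.g. test.emit("position", {"x": 1.0, "y": 2.0}, seconds_to_ns(0.5, start_ns))
```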
Attaching files
Use test.attach_log() to upload a file (such as an image, GIF, or other artifact) and associate it with a test result:
```python
test.attach_log("run_001.mcap")
```
The file is uploaded to ReSim's storage and registered under the test. You can attach any file type — images, logs, MCAPs, videos, etc.
Image metrics
For image metrics, attaching the file is required. Your metric references the image by filename, so the file must be uploaded for the image metric to render:
```python
test.attach_log("screen_capture_123.png")
test.emit("images", {"img": "screen_capture_123.png"}, time.time_ns())
```
The image's filename in the emission must match the filename passed to attach_log. The metrics engine uses this reference to retrieve and render the image.
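One way to keep the two filenames in sync is a small wrapper. This is a hypothetical convenience helper, not part of the SDK, and it assumes the attachment is referenced by the file's basename:

```python
import os
import time


def emit_image(test, path, topic="images", field="img"):
    # Hypothetical wrapper (not part of the ReSim SDK): attach the file and
    # emit a matching reference so the two filenames cannot drift apart.
    # Assumes the attachment is referenced by its basename.
    filename = os.path.basename(path)
    test.attach_log(path)
    test.emit(topic, {field: filename}, time.time_ns())
```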
Here is how the metrics config would look for the above image metric:
```yaml
topics:
  images:
    schema:
      img: image
metrics:
  images:
    type: test
    query_string: select img from images
    template_type: system
    template: image
```
Viewing results
Once the Batch context exits and metrics processing is complete, navigate to your project in the ReSim app to view the batch, its tests, and the generated metrics, just as for any batch run inside ReSim.
