Reports

Finally, we can get a view of how our system is performing over time by creating a test suite report.

Reports in the CLI and App

A Test Suite Report is defined as a summary of all the test batches created from a given Test Suite and from builds of a given Branch between a Start Timestamp and an End Timestamp.

For maximum flexibility, test suite reports are executed as a cloud workflow. While this adds a small amount of friction (the reports need to be run before being viewed), we feel the benefits of flexibility for customers outweigh the potential delays.

To generate a Test Suite Report, the user simply submits a report via the UI or CLI:

resim reports create --report-name "<My Report Name>" --length "30" --test-suite "Report Test" --branch "main" --metrics-build-id <Metrics Build ID>

The flags are defined as:

  • Report name: An optional name to give to your report. If you do not provide a name, a friendly name will be generated, e.g. rejoicing-aquamarine-starfish.
  • Length: The number of days of batches to include, counting back from the current time. You can alternatively state an explicit --start-timestamp and (optionally) --end-timestamp; if you specify a start timestamp but no end timestamp, the report is generated up to the current time (see the example after this list).
  • Test Suite: This flag requires you to specify which test suite to use as the basis of the report. Reports are expected to be generated as a longitudinal analysis of all batches created from this test suite.
  • Branch: This flag requires you to specify the branch to concentrate on for this report. We require a single branch because mixing batches from multiple branches can produce a confusing report: work-in-progress branches may surface unexpected errors in the analysis.
  • Metrics Build: As has been discussed above, we rely on a metrics build to generate the report. The details of how this metrics build is expected to operate are described in the next section.
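
For example, here is a sketch of a report over an explicit window rather than a rolling number of days. The RFC 3339 timestamp format shown is an assumption; check resim reports create --help for the exact format your CLI version accepts:

resim reports create --report-name "Q1 Review" \
  --start-timestamp "2024-01-01T00:00:00Z" \
  --end-timestamp "2024-04-01T00:00:00Z" \
  --test-suite "Report Test" --branch "main" \
  --metrics-build-id <Metrics Build ID>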

Once a report is submitted, it goes through the following simple workflow:

flowchart LR
  SUBMITTED
  RUNNING
  ERROR
  SUCCEEDED

  SUBMITTED-->RUNNING
  SUBMITTED-->ERROR
  RUNNING-->ERROR
  RUNNING-->SUCCEEDED

The ReSim CLI allows you to block until the report reaches a terminal state (ERROR or SUCCEEDED) via resim reports wait --report <my-report>.

Test Suite Reports behave like the test metrics and batch metrics stages in that they generate metrics using the ReSim Metrics SDK and can output additional logs. As such, you can obtain the logs via resim reports logs --report <my-report>.
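
Putting these commands together, a minimal end-to-end sketch might look like the following, where the report name nightly-main is just an illustrative placeholder:

# Create a report, block until it reaches a terminal state, then fetch logs.
resim reports create --report-name "nightly-main" --length "30" \
  --test-suite "Report Test" --branch "main" --metrics-build-id <Metrics Build ID>
resim reports wait --report "nightly-main"
resim reports logs --report "nightly-main"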

Reports Mode for Metrics Build

Reports can, in theory, be generated with any metrics build. That said, it is good practice, though not required, to use the same metrics build version to generate the test, batch, and report metrics for a given test suite.

Generating reports with a metrics build follows a similar pattern to batch metrics. In report mode, the ReSim platform will only populate the /tmp/resim/inputs directory with a configuration file (called report_config.json) that the Metrics SDK can use to fetch the batches associated with that report for further processing and longitudinal analysis. More information is available in the open source documentation.
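
As a rough sketch, a report-mode metrics script might begin by loading this configuration. Since the schema is defined in the open source documentation, the code below only inspects the file rather than assuming particular fields:

import json
from pathlib import Path

# In report mode, this config file is the only input the platform provides.
CONFIG_PATH = Path("/tmp/resim/inputs/report_config.json")

def load_report_config() -> dict:
    """Load the report configuration supplied by the ReSim platform."""
    with CONFIG_PATH.open() as f:
        return json.load(f)

if __name__ == "__main__":
    config = load_report_config()
    # Printing the top-level keys is a convenient first step when
    # developing a report metrics script against the documented schema.
    print(sorted(config.keys()))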

It is expected that a metrics.binproto file exists in the outputs directory, /tmp/resim/outputs, which the ReSim platform will process to present the reports in the web app. Any other log files placed in the outputs directory will also be made available as logs in the app.
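
For the output side, a minimal sketch might look like this, where metrics_proto is a hypothetical stand-in for the metrics message you assemble with the Metrics SDK:

from pathlib import Path

OUTPUT_DIR = Path("/tmp/resim/outputs")

def write_report_outputs(metrics_proto, extra_logs: dict[str, str]) -> None:
    """Serialize the metrics proto and write any auxiliary log files."""
    OUTPUT_DIR.mkdir(parents=True, exist_ok=True)
    # The platform only requires the serialized bytes at this exact path.
    (OUTPUT_DIR / "metrics.binproto").write_bytes(metrics_proto.SerializeToString())
    # Any other files written here surface as logs in the web app.
    for name, contents in extra_logs.items():
        (OUTPUT_DIR / name).write_text(contents)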

Automating Reports in CI/CD

It can be convenient to use CI/CD tooling to ensure that reports are generated at a regular cadence. This is possible with some simple workflow code in, for example, GitHub:

GitHub Example

In GitHub, you create a workflow to run the report nightly:

name: Run Report
on:
  workflow_dispatch:
  schedule:
    - cron: "0 0 * * *"
jobs:
  run-report:
    name: Generate Updated ReSim Report
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v3
      - name: Fetch ReSim CLI
        run: curl -L https://github.com/resim-ai/api-client/releases/latest/download/resim-linux-amd64 -o resim-cli
      - name: Make ReSim CLI Executable
        run: chmod +x resim-cli
      - name: Run Report
        run: |
          REPORT_ID_OUTPUT=$(./resim-cli reports create --client-id $CLI_CLIENT_ID --client-secret $CLI_CLIENT_SECRET \
            --project $PROJECT_NAME --metrics-build-id ${METRICS_BUILD_ID} \
            --branch ${BRANCH_NAME} --test-suite ${TEST_SUITE_NAME} --length ${REPORT_LENGTH} --github)
          echo $REPORT_ID_OUTPUT
        env:
          PROJECT_NAME: "<your project name>"
          TEST_SUITE_NAME: "<your test suite name>"
          BRANCH_NAME: "<your branch name>"
          METRICS_BUILD_ID: "<your metrics build id>"
          REPORT_LENGTH: "<the number of days to generate the report for>"
          CLI_CLIENT_ID: ${{ secrets.CLI_CLIENT_ID }}
          CLI_CLIENT_SECRET: ${{ secrets.CLI_CLIENT_SECRET }}

GitLab Example

In GitLab, the process is similar, with slightly amended syntax:

stages:
  - build

run-report:
  stage: build

  # Run this job only when triggered by a pipeline schedule. The nightly
  # cadence (e.g. cron "0 0 * * *") is configured under CI/CD > Schedules
  # in the GitLab UI rather than in this file.
  rules:
    - if: '$CI_PIPELINE_SOURCE == "schedule"'

  variables:
    PROJECT_NAME: <your project name>
    TEST_SUITE_NAME: <your test suite name>
    BRANCH_NAME: <your branch name>
    METRICS_BUILD_ID: <your metrics build id>
    REPORT_LENGTH: <the number of days to generate the report for>

  script:
    # Install the latest version of the ReSim CLI
    - curl -L https://github.com/resim-ai/api-client/releases/latest/download/resim-linux-amd64 -o resim
    - chmod +x resim
    # Run the report
    - REPORT_ID_OUTPUT=$(./resim reports create --project $PROJECT_NAME --metrics-build-id ${METRICS_BUILD_ID}
      --branch ${BRANCH_NAME} --test-suite ${TEST_SUITE_NAME} --length ${REPORT_LENGTH} --github)
    - echo $REPORT_ID_OUTPUT