Models registry (rpx_benchmark.models)

Named factory registry for turning "some_model_name" into a BenchmarkableModel via lazy module imports. The CLI's --model choice list comes from the available_models() function.

Registry

registry

Name → factory registry for RPX model adapters.

A factory is (device: str, **kwargs) -> BenchmarkableModel. The actual import of the factory module is deferred until resolve() is called, so the top-level package can be imported without torch, transformers, or any model-specific dependency.
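The lazy-import pattern can be sketched in isolation. The registry below is a stand-in using a stdlib callable; the real MODEL_REGISTRY maps names to (module_suffix, factory_name) pairs resolved under the rpx_benchmark.models package:

```python
import importlib

# Stand-in registry mapping a public name to (module, attribute). The real
# MODEL_REGISTRY stores module *suffixes* under rpx_benchmark.models instead.
_REGISTRY = {"counter": ("collections", "Counter")}

def lazy_resolve(name):
    """Import the backing module only when the name is actually resolved."""
    module_name, attr = _REGISTRY[name]
    module = importlib.import_module(module_name)  # deferred import happens here
    return getattr(module, attr)

factory = lazy_resolve("counter")
obj = factory("aab")  # a Counter built from the string "aab"
```

Until lazy_resolve() is called, nothing beyond importlib is imported, which is what keeps the top-level package importable without heavy dependencies.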

available_models(include_deferred: bool = False) -> List[str]

Return sorted registered model names.

By default excludes deferred stubs so the CLI's --model choice list is runnable-only. Pass include_deferred=True to list the full intended slate.

Source code in rpx_benchmark/models/registry.py
def available_models(include_deferred: bool = False) -> List[str]:
    """Return sorted registered model names.

    By default excludes deferred stubs so the CLI's ``--model`` choice
    list is runnable-only. Pass ``include_deferred=True`` to list the
    full intended slate.
    """
    names = sorted(MODEL_REGISTRY.keys())
    if include_deferred:
        return names
    return [n for n in names if n not in DEFERRED_MODELS]

get_factory(name: str) -> Callable[..., BenchmarkableModel]

Return the factory function registered under name (lazy import).

Parameters:

name (str, required): Registered model name. Use available_models() to list the current slate.

Returns:

Callable: The factory function. Instantiate the model by calling it with the appropriate device / kwargs.

Raises:

ConfigError: If name is not in the registry. The error lists every currently registered model so typos are obvious.

Source code in rpx_benchmark/models/registry.py
def get_factory(name: str) -> Callable[..., BenchmarkableModel]:
    """Return the factory function registered under ``name`` (lazy import).

    Parameters
    ----------
    name : str
        Registered model name. Use :func:`available_models` to list
        the current slate.

    Returns
    -------
    Callable
        The factory function. Instantiate the model by calling it
        with the appropriate device / kwargs.

    Raises
    ------
    ConfigError
        If ``name`` is not in the registry. The error lists every
        currently registered model so typos are obvious.
    """
    if name not in MODEL_REGISTRY:
        raise ConfigError(
            f"Unknown model {name!r}.",
            hint=(
                "Registered models: "
                + ", ".join(available_models(include_deferred=True))
            ),
        )
    module_suffix, factory_name = MODEL_REGISTRY[name]
    module = importlib.import_module(f"rpx_benchmark.models.{module_suffix}")
    return getattr(module, factory_name)

resolve(name: str, *, device: str = 'cuda', **kwargs) -> BenchmarkableModel

Look up name and call the factory with device + extra kwargs.

Source code in rpx_benchmark/models/registry.py
def resolve(name: str, *, device: str = "cuda", **kwargs) -> BenchmarkableModel:
    """Look up ``name`` and call the factory with ``device`` + extra kwargs."""
    factory = get_factory(name)
    return factory(device=device, **kwargs)

register(name: str, module_suffix: str, factory_name: str) -> None

Third-party code can register additional model factories at runtime.

Source code in rpx_benchmark/models/registry.py
def register(name: str, module_suffix: str, factory_name: str) -> None:
    """Third-party code can register additional model factories at runtime."""
    MODEL_REGISTRY[name] = (module_suffix, factory_name)
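A self-contained mirror of the register/resolve round-trip, using a stdlib callable as a stand-in factory. The real functions live in rpx_benchmark.models.registry and, per the get_factory source above, prefix the module path with "rpx_benchmark.models.":

```python
import importlib

MODEL_REGISTRY = {}

def register(name, module_suffix, factory_name):
    MODEL_REGISTRY[name] = (module_suffix, factory_name)

def resolve(name, **kwargs):
    module_suffix, factory_name = MODEL_REGISTRY[name]
    # The real resolve() imports f"rpx_benchmark.models.{module_suffix}".
    module = importlib.import_module(module_suffix)
    return getattr(module, factory_name)(**kwargs)

# Register a stdlib callable as a stand-in factory, then resolve it.
register("ordered", "collections", "OrderedDict")
d = resolve("ordered", a=1, b=2)
```

Note that because the real resolution path prefixes rpx_benchmark.models. to the suffix, a third-party module registered this way must be importable under that package path.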

Shipped factories

depth_anything_v2

Depth Anything V2 — metric-indoor variants (Small / Base / Large).

Thin factories over rpx_benchmark.adapters.depth_hf.make_hf_depth_model().

depth_pro

Depth Pro (Apple) — sharp monocular metric depth.

Loaded via the apple/DepthPro-hf transformers port.

zoedepth

ZoeDepth — legacy metric depth baseline (archived upstream May 2025).

Kept as a robot-learning anchor because it's the RGB->3D workhorse inside 3D Diffusion Policy (RSS'24) and FlowPolicy.

unidepth_v2

UniDepth V2 — universal metric depth with intrinsics self-prompting.

ViT-Base and ViT-Large variants share the same adapter; the factories below differ only in the checkpoint id passed to UniDepthV2.from_pretrained. Pass camera_k=<3x3 np.ndarray> to override UniDepth's self-prompted intrinsics with real calibration.
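For example, a 3x3 pinhole intrinsics matrix for a 640x480 stream can be built as below and passed as camera_k. The fx/fy/cx/cy values and the model name in the commented call are illustrative stand-ins, not real calibration or a confirmed registry name:

```python
import numpy as np

# Illustrative pinhole intrinsics (fx, fy, cx, cy); substitute real calibration.
fx, fy, cx, cy = 615.0, 615.0, 320.0, 240.0
camera_k = np.array([
    [fx, 0.0, cx],
    [0.0, fy, cy],
    [0.0, 0.0, 1.0],
])

# Hypothetical usage (model name is a placeholder):
# model = resolve("unidepth_v2", device="cuda", camera_k=camera_k)
```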

metric3d_v2

Metric3D V2 — ViT Small / Large / Giant2 via torch.hub.

All variants use the same canonical-focal letterbox pipeline; only the entry argument to torch.hub.load changes.

Deferred stubs

_deferred

Stub factories for depth models that are intentionally not yet wired into the monocular absolute depth pipeline.

Each stub raises NotImplementedError with a concrete explanation so users running rpx models see the intended slate and understand why these entries do not yet run.

Remove this file's entries from the registry once the corresponding constraints are addressed.

video_depth_anything_large(*, device: str = 'cuda', **_) -> BenchmarkableModel

Defer: Video Depth Anything is a video model.

Evaluating it per-frame throws away its core contribution (temporal consistency), and benchmarking it properly needs a sequence-level evaluation mode over consecutive RPX frames within a phase. Wire that mode first, then add a real factory.

Source code in rpx_benchmark/models/_deferred.py
def video_depth_anything_large(*, device: str = "cuda", **_) -> BenchmarkableModel:
    """Defer: Video Depth Anything is a *video* model.

    Evaluating it per-frame throws away its core contribution (temporal
    consistency), and benchmarking it properly needs a sequence-level
    evaluation mode over consecutive RPX frames within a phase. Wire
    that mode first, then add a real factory.
    """
    return _deferred(
        "video_depth_anything_large",
        "Sequence-level model; per-frame eval is meaningless. Add a "
        "temporal benchmark mode before wiring this factory.",
    )

prompt_depth_anything_vits(*, device: str = 'cuda', **_) -> BenchmarkableModel

Defer: PromptDA needs a sparse depth prompt for metric output.

Without a prompt the model falls back to relative depth, which fails the "monocular absolute" contract of this pipeline. PromptDA belongs in a separate "prompted depth completion" task where we can feed D435 sparse samples as the prompt.

Source code in rpx_benchmark/models/_deferred.py
def prompt_depth_anything_vits(*, device: str = "cuda", **_) -> BenchmarkableModel:
    """Defer: PromptDA needs a sparse depth *prompt* for metric output.

    Without a prompt the model falls back to relative depth, which fails
    the "monocular *absolute*" contract of this pipeline. PromptDA
    belongs in a separate "prompted depth completion" task where we can
    feed D435 sparse samples as the prompt.
    """
    return _deferred(
        "prompt_depth_anything_vits",
        "Requires sparse depth prompt to produce metric output. "
        "Belongs in a prompted-depth task, not monocular absolute depth.",
    )

depth_anything_3(*, device: str = 'cuda', **_) -> BenchmarkableModel

Defer: Depth Anything 3 is not yet available via transformers.

The native repo ships .pt weights for multi-view geometry; once an AutoModelForDepthEstimation checkpoint lands on the HF hub, replace this stub with a 10-line factory that calls make_hf_depth_model().

Source code in rpx_benchmark/models/_deferred.py
def depth_anything_3(*, device: str = "cuda", **_) -> BenchmarkableModel:
    """Defer: Depth Anything 3 is not yet available via transformers.

    The native repo ships ``.pt`` weights for multi-view geometry; once
    an ``AutoModelForDepthEstimation`` checkpoint lands on the HF hub,
    replace this stub with a 10-line factory that calls
    :func:`make_hf_depth_model`.
    """
    return _deferred(
        "depth_anything_3",
        "Not yet in the transformers AutoModelForDepthEstimation registry. "
        "Wire via make_hf_depth_model once the HF checkpoint is published.",
    )