Exceptions (rpx_benchmark.exceptions)¶
Every error raised by rpx_benchmark public APIs is a subclass of
RPXError. Catch that for blanket handling, or catch a specific
subclass for targeted recovery.
exceptions
¶
Exception hierarchy for the RPX benchmark toolkit.
Every error is a subclass of RPXError, so callers can write:
    import rpx_benchmark as rpx

    try:
        result, report, _ = rpx.run_monocular_depth(cfg)
    except rpx.exceptions.ConfigError as e:
        # bad config: show the user what to fix
        print(f"[config] {e}")
    except rpx.exceptions.DownloadError:
        print("offline — re-run with --cache-dir pointing at a local mirror")
    except rpx.RPXError as e:
        # any other benchmark-specific failure
        print(f"[rpx] {e}")
The hierarchy is deliberately narrow: library code raises the most specific subclass that applies, and each subclass carries a human-readable message explaining what went wrong and what the user can do to fix it.
Hierarchy
    RPXError
    ├── ConfigError           invalid MonocularDepthRunConfig / CLI flag
    ├── DatasetError
    │   ├── ManifestError     malformed or missing manifest file
    │   └── DownloadError     HuggingFace or network failure
    ├── ModelError
    │   └── AdapterError      input/output adapter or factory error
    └── MetricError           evaluator or metric calculation error
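The tree above can be exercised with local stand-ins; the class definitions below only mirror the documented hierarchy and are not the library's own code:

```python
# Stand-ins mirroring the documented hierarchy (illustrative only;
# the real classes live in rpx_benchmark.exceptions).
class RPXError(Exception): ...
class ConfigError(RPXError): ...
class DatasetError(RPXError): ...
class ManifestError(DatasetError): ...
class DownloadError(DatasetError): ...
class ModelError(RPXError): ...
class AdapterError(ModelError): ...
class MetricError(RPXError): ...

# A single `except RPXError` catches every leaf in the tree:
caught = []
for exc_type in (ConfigError, ManifestError, DownloadError, AdapterError, MetricError):
    try:
        raise exc_type("boom")
    except RPXError as e:
        caught.append(type(e).__name__)

print(caught)
```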
RPXError(message: str, *, hint: Optional[str] = None, details: Optional[dict[str, Any]] = None)
¶
Bases: Exception
Base exception for every error raised by the RPX benchmark toolkit.
All library code raises a subclass of this so user code can write a
single except RPXError to catch all benchmark failures without
accidentally swallowing unrelated exceptions.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `message` | `str` | Human-readable description of what failed. Should include a hint about what the user can do next. | *required* |
| `hint` | `str` | Additional remediation advice rendered after the main message. | `None` |
| `details` | `dict` | Structured context (e.g. offending field, expected value) that higher-level code may inspect. | `None` |
Examples:
>>> from rpx_benchmark.exceptions import RPXError
>>> raise RPXError("something went wrong", hint="check the cache dir")
...
Source code in rpx_benchmark/exceptions.py
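The collapsed source is not reproduced here, but a minimal sketch of a base class with this signature could look like the following; the hint-rendering behaviour is an assumption, not the library's documented output:

```python
from typing import Any, Optional

class RPXError(Exception):
    """Minimal sketch of the documented base class (the real
    implementation lives in rpx_benchmark/exceptions.py and may differ)."""

    def __init__(
        self,
        message: str,
        *,
        hint: Optional[str] = None,
        details: Optional[dict[str, Any]] = None,
    ) -> None:
        self.hint = hint
        self.details = details or {}
        # Assumed rendering: append the hint after the main message.
        super().__init__(message if hint is None else f"{message} (hint: {hint})")

err = RPXError("cache miss", hint="check the cache dir", details={"path": "/tmp"})
print(err)
```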
ConfigError(message: str, *, hint: Optional[str] = None, details: Optional[dict[str, Any]] = None)
¶
Bases: RPXError
Raised when a user-supplied config is invalid.
Examples:
- MonocularDepthRunConfig built with both model and hf_checkpoint set.
- CLI given --device cuda on a CPU-only host with --strict-device enabled.
- Unknown difficulty split.
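The first example could be enforced by a validator along these lines; `check_config` and its field names are illustrative stand-ins, not the library's API:

```python
# Local stand-ins for the documented exception classes.
class RPXError(Exception):
    def __init__(self, message, *, hint=None, details=None):
        super().__init__(message)
        self.hint = hint
        self.details = details or {}

class ConfigError(RPXError): ...

def check_config(cfg: dict) -> None:
    # Mirror the documented rule: model and hf_checkpoint are exclusive.
    if cfg.get("model") and cfg.get("hf_checkpoint"):
        raise ConfigError(
            "model and hf_checkpoint are mutually exclusive",
            hint="set only one of them",
            details={"fields": ["model", "hf_checkpoint"]},
        )

try:
    check_config({"model": "midas", "hf_checkpoint": "Intel/dpt-large"})
except ConfigError as e:
    print(e.details["fields"])
```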
DatasetError(message: str, *, hint: Optional[str] = None, details: Optional[dict[str, Any]] = None)
¶
Bases: RPXError
Base class for dataset load / manifest / download failures.
ManifestError(message: str, *, hint: Optional[str] = None, details: Optional[dict[str, Any]] = None)
¶
Bases: DatasetError
Manifest JSON is missing, malformed, or references missing files.
Raised by rpx_benchmark.loader.RPXDataset when the loader
cannot resolve a sample from the manifest it was handed.
Examples:
- Manifest missing the task field.
- Sample lists an rgb path that does not exist on disk.
- Task value is not in rpx_benchmark.api.TaskType.
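A guard for the first example might look like this; `load_manifest_entry` is an illustrative sketch, not the loader's actual code:

```python
import json

# Local stand-ins for the documented exception classes.
class RPXError(Exception): ...
class DatasetError(RPXError): ...
class ManifestError(DatasetError): ...

def load_manifest_entry(text: str) -> dict:
    # Reject an entry missing the documented "task" field with a
    # ManifestError instead of letting a bare KeyError escape later.
    entry = json.loads(text)
    if "task" not in entry:
        raise ManifestError("manifest entry is missing the 'task' field")
    return entry

print(load_manifest_entry('{"task": "monocular_depth", "rgb": "img/000.png"}'))
```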
DownloadError(message: str, *, hint: Optional[str] = None, details: Optional[dict[str, Any]] = None)
¶
Bases: DatasetError
HuggingFace download or cache lookup failed.
Raised by rpx_benchmark.hub when snapshot_download or
hf_hub_download fails (network issue, bad repo id, missing
revision, permission denied).
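One common pattern for such a wrapper is chaining the underlying network error onto a DownloadError; the `fetch` function below is a simulated stand-in for a hub call, not the library's code:

```python
# Local stand-ins for the documented exception classes.
class RPXError(Exception): ...
class DatasetError(RPXError): ...
class DownloadError(DatasetError): ...

def fetch(repo_id: str) -> str:
    # Simulated stand-in for snapshot_download that always fails,
    # so the wrapping pattern below is visible.
    raise OSError(f"could not reach hub for {repo_id}")

def fetch_checked(repo_id: str) -> str:
    try:
        return fetch(repo_id)
    except OSError as exc:
        # Chain the original error so the root cause stays in the traceback.
        raise DownloadError(f"download failed for {repo_id!r}") from exc

try:
    fetch_checked("rpx/benchmark-data")
except DownloadError as e:
    print(type(e.__cause__).__name__)
```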
ModelError(message: str, *, hint: Optional[str] = None, details: Optional[dict[str, Any]] = None)
¶
Bases: RPXError
Raised by model factories or the runner when a model misbehaves.
Examples:
- Model returned a different number of predictions than samples.
- Model's task attribute does not match the dataset task.
- Prediction dataclass has the wrong shape.
AdapterError(message: str, *, hint: Optional[str] = None, details: Optional[dict[str, Any]] = None)
¶
Bases: ModelError
Input or output adapter produced an invalid payload.
Examples:
- InputAdapter.prepare raised during preprocessing.
- OutputAdapter.finalize returned a non-DepthPrediction for the monocular depth task.
- HF processor's post_process_depth_estimation signature does not accept the kwargs the adapter wants to pass.
MetricError(message: str, *, hint: Optional[str] = None, details: Optional[dict[str, Any]] = None)
¶
Bases: RPXError
Raised when a metric calculator cannot compute a score.
Examples:
- Prediction dataclass is the wrong type for the task.
- Ground-truth shape does not match prediction shape.
- Unknown metric name requested from a registry.
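A guard like the second example might look as follows; `abs_rel` is an illustrative metric sketch, not necessarily one the toolkit ships:

```python
# Local stand-ins for the documented exception classes.
class RPXError(Exception): ...
class MetricError(RPXError): ...

def abs_rel(pred: list[float], gt: list[float]) -> float:
    # Shapes must agree before scoring; mismatches become MetricError.
    if len(pred) != len(gt):
        raise MetricError(
            f"shape mismatch: prediction has {len(pred)} values, "
            f"ground truth has {len(gt)}"
        )
    return sum(abs(p - g) / g for p, g in zip(pred, gt)) / len(pred)

print(abs_rel([2.0, 4.0], [1.0, 2.0]))
```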