update detail lineage plan

Stuart Axelbrooke 2025-11-27 15:59:38 +08:00
parent 368558d9d8
commit 6cb11af642


@@ -98,8 +98,98 @@ partition_consumers: BTreeMap<String, Vec<String>> // partition_ref → consume
Built from read_deps on job success.
## Open Questions
1. How do we visualize retries? (Same want, multiple job attempts)
2. Should partition lineage show historical versions or just current?
3. Performance strategy for high fan-out (a partition read by thousands of consumers)?
## Design Decisions
1. **Retries**: List all job runs triggered by a want, collapsing retries in the UI (expandable)
2. **Lineage UUIDs**: Resolve partition refs to canonical UUIDs at job success time (jobs don't need to know about UUIDs)
3. **High fan-out**: Truncate to N items with a "+X more" expansion
## Implementation Plan
### Phase 1: Data Model
**1.1 Extend JobRunSuccessEventV1**
```protobuf
message JobRunSuccessEventV1 {
    string job_run_id = 1;
    repeated ReadDeps read_deps = 2; // NEW: preserves impacted→read relationships
}
```
**1.2 Extend SucceededState to store resolved UUIDs**
```rust
pub struct SucceededState {
    pub succeeded_at: u64,
    pub read_deps: Vec<ReadDeps>,                          // from event
    pub read_partition_uuids: BTreeMap<String, Uuid>,      // ref → UUID at read time
    pub wrote_partition_uuids: BTreeMap<String, Uuid>,     // ref → UUID (from building_partitions)
}
```
UUIDs are resolved by looking up canonical partitions when the success event is processed.
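A minimal sketch of that resolution step, assuming a hypothetical `canonical_partitions` map keyed by partition ref; the `ReadDeps` and `BuildState` shapes below are illustrative stand-ins, not the real definitions:
```rust
use std::collections::BTreeMap;
use uuid::Uuid;

// Illustrative stand-ins; the real ReadDeps / BuildState live elsewhere.
pub struct ReadDeps {
    pub impacted: String,
    pub read: Vec<String>,
}

pub struct BuildState {
    pub canonical_partitions: BTreeMap<String, Uuid>,
}

/// Resolve every partition ref mentioned in read_deps to its canonical UUID
/// at the moment the success event is processed.
fn resolve_read_partition_uuids(
    state: &BuildState,
    read_deps: &[ReadDeps],
) -> BTreeMap<String, Uuid> {
    read_deps
        .iter()
        .flat_map(|dep| dep.read.iter())
        .filter_map(|partition_ref| {
            state
                .canonical_partitions
                .get(partition_ref)
                .map(|uuid| (partition_ref.clone(), *uuid))
        })
        .collect()
}
```
Skipping unresolvable refs keeps the handler tolerant of partitions that disappeared between read and success; a real handler might log or surface those instead.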
**1.3 Add consumer index to BuildState**
```rust
// input_partition_ref → list of (output_partition_ref, job_run_id)
partition_consumers: BTreeMap<String, Vec<(String, String)>>
```
Populated from `read_deps` when processing JobRunSuccessEventV1.
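A sketch of how the index could be folded from a successful run's `read_deps`, assuming each entry carries an impacted (output) ref plus the input refs it read; the function name and `ReadDeps` shape are illustrative:
```rust
use std::collections::BTreeMap;

// Illustrative stand-in for the real ReadDeps message.
pub struct ReadDeps {
    pub impacted: String,
    pub read: Vec<String>,
}

/// Fold one successful job run's read_deps into the consumer index:
/// input_partition_ref → [(output_partition_ref, job_run_id), ...]
fn index_consumers(
    partition_consumers: &mut BTreeMap<String, Vec<(String, String)>>,
    job_run_id: &str,
    read_deps: &[ReadDeps],
) {
    for dep in read_deps {
        for input_ref in &dep.read {
            partition_consumers
                .entry(input_ref.clone())
                .or_default()
                .push((dep.impacted.clone(), job_run_id.to_string()));
        }
    }
}
```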
### Phase 2: Extend Existing API Endpoints
**2.1 GET /api/wants/:id**
Add to response:
- `job_runs`: All job runs servicing this want (with status, partitions built)
- `derivative_wants`: Wants spawned by dep-miss from this want's jobs
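A hedged sketch of these additions as serde-serializable structs; everything beyond the two field names listed above (the `JobRunSummary` type, its fields, and string-typed want ids) is an assumption:
```rust
use serde::Serialize;

/// Additions to the /api/wants/:id response (shape illustrative).
#[derive(Serialize)]
pub struct WantDetailAdditions {
    /// All job runs servicing this want, retries included.
    pub job_runs: Vec<JobRunSummary>,
    /// Wants spawned by dep-miss from this want's jobs.
    pub derivative_wants: Vec<String>,
}

#[derive(Serialize)]
pub struct JobRunSummary {
    pub job_run_id: String,
    pub status: String,
    /// Partition refs built by this run.
    pub partitions_built: Vec<String>,
}
```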
**2.2 GET /api/partitions/:ref**
Add to response:
- `built_by`: Job run that built this partition (with read_deps + resolved UUIDs)
- `upstream`: Input partitions (refs + UUIDs) from builder's read_deps
- `downstream`: Consumer partitions (refs + UUIDs) from consumer index
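A corresponding sketch for the partition response additions, assuming the `uuid` crate's `serde` feature for serialization; nesting and type names are illustrative:
```rust
use serde::Serialize;
use uuid::Uuid;

/// Additions to the /api/partitions/:ref response (shape illustrative).
#[derive(Serialize)]
pub struct PartitionLineageAdditions {
    /// Job run id that built this partition, if known.
    pub built_by: Option<String>,
    /// Input partitions read by the builder (from its read_deps).
    pub upstream: Vec<PartitionRefWithUuid>,
    /// Consumer partitions from the consumer index, truncated to N in the UI.
    pub downstream: Vec<PartitionRefWithUuid>,
}

#[derive(Serialize)]
pub struct PartitionRefWithUuid {
    pub partition_ref: String,
    pub uuid: Uuid,
}
```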
**2.3 GET /api/job_runs/:id**
Add to response:
- `read_deps`: With resolved UUIDs for each partition
- `wrote_partitions`: With UUIDs
- `derivative_wants`: If DepMiss, the wants that were spawned
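And for the job run response, a sketch of the three additions; the per-dep nesting is an assumption about how resolved UUIDs would attach to `read_deps`:
```rust
use serde::Serialize;
use std::collections::BTreeMap;
use uuid::Uuid;

/// Additions to the /api/job_runs/:id response (shape illustrative).
#[derive(Serialize)]
pub struct JobRunDetailAdditions {
    /// Each read dependency with the UUIDs its partition refs resolved to.
    pub read_deps: Vec<ReadDepWithUuids>,
    /// Partitions written by this run: ref → UUID.
    pub wrote_partitions: BTreeMap<String, Uuid>,
    /// Wants spawned if this run ended in DepMiss.
    pub derivative_wants: Vec<String>,
}

#[derive(Serialize)]
pub struct ReadDepWithUuids {
    pub impacted_partition: String,
    /// Read partition ref → resolved UUID.
    pub read: BTreeMap<String, Uuid>,
}
```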
### Phase 3: Frontend
**3.1 Want detail page**
Add "Fulfillment" section:
- List of job runs (retries collapsed, expandable)
- Derivative wants as nested items
- Partition UUIDs linked to partition detail
**3.2 Partition detail page**
Add "Lineage" section:
- Upstream: builder job → input partitions (navigable)
- Downstream: consumer jobs → output partitions (truncated at N)
**3.3 JobRun detail page**
Add:
- "Read" section with partition refs + UUIDs
- "Wrote" section with partition refs + UUIDs
- "Derivative Wants" section (if DepMiss)
### Phase 4: Job Integration
Extend `DATABUILD_DEP_READ_JSON` parsing to run on job success (not just dep-miss). Jobs already emit this; we just need to capture it.
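A sketch of the success-path capture, assuming the payload deserializes into per-dep entries of impacted and read refs (the real schema is whatever jobs already emit on dep-miss); the struct and function names are illustrative:
```rust
use serde::Deserialize;

/// Assumed shape of the DATABUILD_DEP_READ_JSON payload.
#[derive(Deserialize)]
pub struct DepRead {
    pub impacted: String,
    pub read: Vec<String>,
}

/// Parse the dep-read JSON on the success path as well. Returns None when the
/// payload is absent or malformed rather than failing the success event.
fn parse_dep_read_json(raw: Option<&str>) -> Option<Vec<DepRead>> {
    serde_json::from_str(raw?).ok()
}
```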
## Sequencing
1. Proto + state changes
2. Event handler updates
3. API response extensions
4. Frontend enhancements