feat: include aggregate verification status in release target state response #883
adityachoudhari26 merged 4 commits into main from
Conversation
📝 Walkthrough
Adds a new ReleaseTargetStateResponse schema and a workspace-engine GET /v1/workspaces/{workspaceId}/release-targets/{releaseTargetKey}/state endpoint; implements DB queries/getters to fetch the desired/current releases, the latest job, and job verifications (with computed status); the API now delegates state retrieval to workspace-engine.
Sequence Diagram

```mermaid
sequenceDiagram
    participant Client
    participant API as API Server
    participant WE as Workspace Engine
    participant DB as Database
    Client->>API: GET /v1/workspaces/{id}/release-targets/{key}/state
    API->>WE: GET /v1/workspaces/{id}/release-targets/{key}/state
    WE->>DB: GetCurrentReleaseByReleaseTarget(resource,env,deploy)
    DB-->>WE: currentRelease (or none)
    WE->>DB: GetDesiredReleaseByReleaseTarget(resource,env,deploy)
    DB-->>WE: desiredRelease (or none)
    WE->>DB: GetLatestJobByReleaseTarget(resource,env,deploy)
    DB-->>WE: latestJob (or none)
    alt latestJob present
        WE->>DB: GetJobVerificationsWithMeasurements(jobId)
        DB-->>WE: verifications with metrics
        WE->>WE: aggregate verifications & compute status
    end
    WE-->>API: ReleaseTargetStateResponse (current/desired/latestJob+verifications)
    API-->>Client: ReleaseTargetStateResponse
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs
Suggested reviewers
Poem
🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)
✅ Passed checks (4 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches
📝 Generate docstrings
🧪 Generate unit tests (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 10
🧹 Nitpick comments (3)
apps/workspace-engine/oapi/openapi.json (1)
1965-2025: `ReleaseTargetStateResponse` schema appears unused. This schema was added to include the aggregate verification `status` field per the PR objective, but the actual endpoint (line 3261) references `ReleaseTargetState` instead. Consider either:
- updating the endpoint to use `ReleaseTargetStateResponse`, or
- removing this unused schema if the status is added differently.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/workspace-engine/oapi/openapi.json` around lines 1965 - 2025, The OpenAPI schema ReleaseTargetStateResponse is defined but never referenced by the endpoint which still points to ReleaseTargetState; either update the endpoint response reference to use ReleaseTargetStateResponse (so the aggregate verification "status" field is returned) — locate the endpoint that currently references ReleaseTargetState and replace its $ref to "#/components/schemas/ReleaseTargetStateResponse" — or if the new schema is not needed, remove the ReleaseTargetStateResponse definition from components/schemas to avoid dead spec entries.

apps/workspace-engine/oapi/spec/schemas/entities.jsonnet (1)
185-185: Missing enum constraint on `status` field. The `status` field is defined as a plain string with only a description. However, the corresponding schema in `apps/api/openapi/schemas/release.jsonnet` (lines 72-76) defines it with an enum: `['passed', 'running', 'failed']`. Consider adding the enum for consistency and stronger API contract validation.
Proposed fix

```diff
- status: { type: 'string', description: 'Computed aggregate status of this verification' },
+ status: {
+   type: 'string',
+   enum: ['passed', 'running', 'failed'],
+   description: 'Computed aggregate status of this verification',
+ },
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/workspace-engine/oapi/spec/schemas/entities.jsonnet` at line 185, The status field currently allows any string; add an enum constraint to enforce allowed values by updating the status property in the schema to include enum: ['passed', 'running', 'failed'] so it matches the corresponding release schema; locate the status property in the entities.jsonnet verification schema and add the enum array alongside the existing type and description.

apps/api/openapi/openapi.json (1)
1867-1905: Prefer composing from `JobVerification` instead of redefining verification inline. The inline schema duplicates existing verification fields and increases drift risk. Reuse `JobVerification` and add only `status`.

♻️ Suggested refactor
```diff
diff --git a/apps/api/openapi/openapi.json b/apps/api/openapi/openapi.json
@@
-          "items": {
-            "properties": {
-              "createdAt": {
-                "format": "date-time",
-                "type": "string"
-              },
-              "id": {
-                "type": "string"
-              },
-              "jobId": {
-                "type": "string"
-              },
-              "message": {
-                "type": "string"
-              },
-              "metrics": {
-                "items": {
-                  "$ref": "#/components/schemas/VerificationMetricStatus"
-                },
-                "type": "array"
-              },
-              "status": {
-                "description": "Computed aggregate status of this verification",
-                "enum": [
-                  "passed",
-                  "running",
-                  "failed"
-                ],
-                "type": "string"
-              }
-            },
-            "required": [
-              "id",
-              "jobId",
-              "metrics",
-              "createdAt",
-              "status"
-            ],
-            "type": "object"
-          },
+          "items": {
+            "allOf": [
+              {
+                "$ref": "#/components/schemas/JobVerification"
+              },
+              {
+                "properties": {
+                  "status": {
+                    "description": "Computed aggregate status of this verification",
+                    "enum": [
+                      "passed",
+                      "running",
+                      "failed"
+                    ],
+                    "type": "string"
+                  }
+                },
+                "required": [
+                  "status"
+                ],
+                "type": "object"
+              }
+            ]
+          },
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/api/openapi/openapi.json` around lines 1867 - 1905, The inline verification schema duplicates existing fields; replace this object with a composition that references the existing JobVerification schema and then add or override only the status property; specifically, change the current anonymous object to use allOf (or $ref + additional properties) combining {"$ref":"#/components/schemas/JobVerification"} with a small schema that defines/overrides "status" (and ensures "status" is present in required), so you stop redefining id, jobId, metrics, createdAt, etc., and only add the new status enum constraint.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@apps/api/openapi/openapi.json`:
- Around line 1852-1917: ReleaseTargetStateResponse is defined but never used;
replace usages of the older ReleaseTargetState schema with
ReleaseTargetStateResponse so clients receive the aggregate verification status
(fields latestJob and verifications). Update any response $ref entries that
point to "#/components/schemas/ReleaseTargetState" to
"#/components/schemas/ReleaseTargetStateResponse" for all endpoints returning
release target state (where ReleaseTargetState is referenced, e.g., list/get
release target responses), and ensure generated client/server types and examples
are regenerated/updated so the new schema (including latestJob.verifications) is
reflected.
In `@apps/api/src/routes/v1/workspaces/release-targets.ts`:
- Around line 272-283: The code is swallowing SDK errors by returning state:
data ?? {} which turns fetch failures into empty states; change the handler that
calls getClientFor(...).GET (the call assigning const { data }) to either
propagate the thrown error to the caller (rethrow the SDK error) or return an
explicit per-item result that includes the error (e.g., { state: null, error:
<error> }) instead of defaulting to {}—update the code path that builds the
releaseTarget response (the object referencing
rt.resourceId/rt.environmentId/rt.deploymentId and state) to surface the GET
failure rather than hiding it.
In `@apps/api/src/types/openapi.ts`:
- Around line 1624-1643: The new schema ReleaseTargetStateResponse was added but
not wired into the public API: update the OpenAPI response for
getReleaseTargetState so the operation's response schema uses
ReleaseTargetStateResponse (replace the old ReleaseTargetState), and change
ReleaseTargetWithState.state to reference ReleaseTargetStateResponse so
latestJob.verifications[].status is exposed; after updating the OpenAPI source
definitions, regenerate the API client and types so clients can read
latestJob.verifications[].status.
In `@apps/workspace-engine/oapi/openapi.json`:
- Around line 3257-3262: The endpoint's 200 response currently references
ReleaseTargetState but should return the new ReleaseTargetStateResponse that
includes the aggregate verification status; update the OpenAPI response schema
reference from "$ref": "#/components/schemas/ReleaseTargetState" to "$ref":
"#/components/schemas/ReleaseTargetStateResponse" for this endpoint so callers
receive the precalculated verification status (ensure consistency with the
JobWithVerifications → JobVerification usage and that server handler
GetReleaseTargetState aligns with the updated response shape).
In `@apps/workspace-engine/pkg/db/jobs.sql.go`:
- Around line 183-187: The metadata column in the GetLatestJobWithMetadata query
emits an object but must match the other job queries' shape (an array of
{key,value} objects) so db.ToOapiJob can reuse ListJobsByReleaseIDRow; change
the SQL that builds metadata from a json_object_agg to a json_agg of
json_build_object rows from job_metadata (i.e., produce an array like
[{key,value}, ...] and fallback to '[]'::jsonb), ensure this uses the same alias
metadata, and then regenerate sqlc output so ListJobsByReleaseIDRow and
db.ToOapiJob remain compatible.
In `@apps/workspace-engine/pkg/db/queries/jobs.sql`:
- Around line 260-264: The GetLatestJobByReleaseTarget SQL query returns
metadata using json_object_agg but other job queries return an array with
json_agg(json_build_object(...)); update the metadata expression in
GetLatestJobByReleaseTarget to use json_agg(json_build_object('key', m.key,
'value', m.value)) so its output matches the format expected by
parseJobMetadata/ToOapiJob; locate the metadata subselect referencing
job_metadata m and replace the json_object_agg(...) usage accordingly to prevent
silent unmarshalling loss.
In `@apps/workspace-engine/svc/http/server/openapi/release_targets/getters.go`:
- Around line 247-319: GetJobVerifications currently returns
[]oapi.JobVerification without the aggregate verification status; update this
function to populate the verification-level Status before returning by (1)
extending/using the oapi.JobVerification model that includes a Status field, (2)
computing an aggregate status for each verification from its
[]oapi.VerificationMetricStatus (e.g., if any metric is failed -> failed, else
if all succeeded -> succeeded, else pending/in-progress), and (3) setting that
computed value on the JobVerification you append to verifications; implement
this logic inside GetJobVerifications (use the existing metrics slice and
parseMeasurements result to derive each metric's state) so the returned slice
aligns with the new oapi.JobWithVerifications schema consumed by the server.
In `@apps/workspace-engine/svc/http/server/openapi/release_targets/server.go`:
- Around line 33-41: The parseReleaseTargetKey function currently only checks
total length; update it to validate the two separator characters and
parse/validate each UUID segment (ResourceId, EnvironmentId, DeploymentId)
before returning. Specifically, in parseReleaseTargetKey verify that
key[uuidLen] and key[uuidLen*2+1] are the expected separator characters, then
extract the three substrings (matching the current slices) and validate each
with a UUID parser (e.g., uuid.Parse or equivalent) or a strict UUID regex;
return an error if any separator or UUID parse fails, otherwise populate
ResourceId, EnvironmentId, DeploymentId and return nil error. Ensure you
reference parseReleaseTargetKey and the ResourceId/EnvironmentId/DeploymentId
fields when making the change.
- Around line 57-93: The handler currently builds an oapi.ReleaseTargetState
(old model) so the new verification status field never gets serialized; change
the response to use oapi.ReleaseTargetStateResponse instead of
oapi.ReleaseTargetState and ensure LatestJob.Verifications carries the new
status field by mapping the results from rt.getter.GetJobVerifications into the
response model (oapi.JobWithVerifications) including each verification's Status
(populate it from the verification's existing status/aggregate method or field
returned by GetJobVerifications), then return that ReleaseTargetStateResponse
via c.JSON.
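The separator-and-UUID validation requested for parseReleaseTargetKey can be sketched as follows. This is a hypothetical TypeScript illustration of the same checks (the real function is Go and would use `uuid.Parse`); all names here are illustrative, not taken from the repository.

```typescript
const UUID_RE =
  /^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i;

interface ReleaseTargetKey {
  resourceId: string;
  environmentId: string;
  deploymentId: string;
}

function parseReleaseTargetKey(key: string): ReleaseTargetKey {
  const uuidLen = 36;
  // Expected shape: <uuid>-<uuid>-<uuid>, i.e. 36 + 1 + 36 + 1 + 36 chars.
  if (key.length !== uuidLen * 3 + 2) {
    throw new Error(`invalid release target key length: ${key.length}`);
  }
  // Check the two separator characters, not just the total length.
  if (key[uuidLen] !== "-" || key[uuidLen * 2 + 1] !== "-") {
    throw new Error("invalid release target key: missing separators");
  }
  const parts = [
    key.slice(0, uuidLen),
    key.slice(uuidLen + 1, uuidLen * 2 + 1),
    key.slice(uuidLen * 2 + 2),
  ];
  // Validate each segment is actually a UUID, mirroring uuid.Parse in Go.
  for (const p of parts) {
    if (!UUID_RE.test(p)) throw new Error(`invalid UUID segment: ${p}`);
  }
  const [resourceId, environmentId, deploymentId] = parts;
  return { resourceId, environmentId, deploymentId };
}
```

A malformed key (wrong length, wrong separators, or a non-UUID segment) now fails fast with a descriptive error instead of silently producing garbage IDs.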
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 410e49b5-2856-4bbb-8877-29fe7e2a5f64
📒 Files selected for processing (17)
- apps/api/openapi/openapi.json
- apps/api/openapi/schemas/release.jsonnet
- apps/api/src/routes/v1/workspaces/release-targets.ts
- apps/api/src/types/openapi.ts
- apps/workspace-engine/oapi/openapi.json
- apps/workspace-engine/oapi/spec/paths/release_targets.jsonnet
- apps/workspace-engine/oapi/spec/schemas/entities.jsonnet
- apps/workspace-engine/pkg/db/job_verification_metric.sql.go
- apps/workspace-engine/pkg/db/jobs.sql.go
- apps/workspace-engine/pkg/db/queries/job_verification_metric.sql
- apps/workspace-engine/pkg/db/queries/jobs.sql
- apps/workspace-engine/pkg/db/queries/releases.sql
- apps/workspace-engine/pkg/db/releases.sql.go
- apps/workspace-engine/pkg/oapi/oapi.gen.go
- apps/workspace-engine/svc/http/server/openapi/release_targets/getters.go
- apps/workspace-engine/svc/http/server/openapi/release_targets/server.go
- packages/workspace-engine-sdk/src/schema.ts
```typescript
ReleaseTargetStateResponse: {
  currentRelease?: components["schemas"]["Release"];
  desiredRelease?: components["schemas"]["Release"];
  latestJob?: {
    job: components["schemas"]["Job"];
    verifications: {
      /** Format: date-time */
      createdAt: string;
      id: string;
      jobId: string;
      message?: string;
      metrics: components["schemas"]["VerificationMetricStatus"][];
      /**
       * @description Computed aggregate status of this verification
       * @enum {string}
       */
      status: "passed" | "running" | "failed";
    }[];
  };
};
```
ReleaseTargetStateResponse is still detached from the public API.
This type is added here, but getReleaseTargetState still returns ReleaseTargetState at Line 4474 and ReleaseTargetWithState.state still points at the old type at Line 1646. Generated API clients therefore still cannot read latestJob.verifications[].status. Please wire the new schema into the source OpenAPI response and regenerate.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/api/src/types/openapi.ts` around lines 1624 - 1643, The new schema
ReleaseTargetStateResponse was added but not wired into the public API: update
the OpenAPI response for getReleaseTargetState so the operation's response
schema uses ReleaseTargetStateResponse (replace the old ReleaseTargetState), and
change ReleaseTargetWithState.state to reference ReleaseTargetStateResponse so
latestJob.verifications[].status is exposed; after updating the OpenAPI source
definitions, regenerate the API client and types so clients can read
latestJob.verifications[].status.
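Once the schema is wired through and clients are regenerated, a consumer can read the precomputed field directly instead of re-deriving it from metrics. The sketch below is a hypothetical consumer-side helper; the interface is a trimmed-down stand-in for the generated ReleaseTargetStateResponse type, keeping only the fields needed here.

```typescript
// Trimmed stand-in for the generated ReleaseTargetStateResponse type.
interface ReleaseTargetStateResponse {
  latestJob?: {
    verifications: { id: string; status: "passed" | "running" | "failed" }[];
  };
}

function allVerificationsPassed(state: ReleaseTargetStateResponse): boolean {
  const verifications = state.latestJob?.verifications ?? [];
  // With no latest job (or no verifications), there is nothing to fail.
  return verifications.every((v) => v.status === "passed");
}
```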
```go
func (g *PostgresGetter) GetJobVerifications(
	ctx context.Context,
	jobID uuid.UUID,
) ([]oapi.JobVerification, error) {
	queries := db.GetQueries(ctx)
	rows, err := queries.GetJobVerificationsWithMeasurements(ctx, jobID)
	if err != nil {
		return nil, fmt.Errorf("get job verifications: %w", err)
	}

	if len(rows) == 0 {
		return []oapi.JobVerification{}, nil
	}

	byPolicyRule := make(map[string][]db.GetJobVerificationsWithMeasurementsRow)
	for _, row := range rows {
		groupKey := "ungrouped"
		if row.PolicyRuleVerificationMetricID != nilUUID {
			groupKey = row.PolicyRuleVerificationMetricID.String()
		}
		byPolicyRule[groupKey] = append(byPolicyRule[groupKey], row)
	}

	var verifications []oapi.JobVerification
	for groupKey, metrics := range byPolicyRule {
		verifID := groupKey
		if groupKey == "ungrouped" {
			verifID = jobID.String()
		}

		oapiMetrics := make([]oapi.VerificationMetricStatus, len(metrics))
		for i, m := range metrics {
			measurements := parseMeasurements(m.Measurements)

			metric := oapi.VerificationMetricStatus{
				Id:               m.ID.String(),
				Name:             m.Name,
				IntervalSeconds:  m.IntervalSeconds,
				Count:            int(m.Count),
				SuccessCondition: m.SuccessCondition,
				Measurements:     measurements,
			}

			var provider oapi.MetricProvider
			if err := json.Unmarshal(m.Provider, &provider); err == nil {
				metric.Provider = provider
			}

			if m.SuccessThreshold.Valid {
				st := int(m.SuccessThreshold.Int32)
				metric.SuccessThreshold = &st
			}
			if m.FailureCondition.Valid {
				metric.FailureCondition = &m.FailureCondition.String
			}
			if m.FailureThreshold.Valid {
				ft := int(m.FailureThreshold.Int32)
				metric.FailureThreshold = &ft
			}

			oapiMetrics[i] = metric
		}

		verifications = append(verifications, oapi.JobVerification{
			Id:        verifID,
			JobId:     jobID.String(),
			CreatedAt: metrics[0].CreatedAt.Time,
			Metrics:   oapiMetrics,
		})
	}

	return verifications, nil
}
```
This path still doesn't return the aggregate verification status.
GetJobVerifications builds []oapi.JobVerification, and that type has no status field. apps/workspace-engine/svc/http/server/openapi/release_targets/server.go:44-94 then serializes this slice directly inside oapi.JobWithVerifications, so the new endpoint will keep returning only raw verification details and callers will still have to compute the aggregate status client-side. Please align the returned model with the new response schema and populate the status here.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/workspace-engine/svc/http/server/openapi/release_targets/getters.go`
around lines 247 - 319, GetJobVerifications currently returns
[]oapi.JobVerification without the aggregate verification status; update this
function to populate the verification-level Status before returning by (1)
extending/using the oapi.JobVerification model that includes a Status field, (2)
computing an aggregate status for each verification from its
[]oapi.VerificationMetricStatus (e.g., if any metric is failed -> failed, else
if all succeeded -> succeeded, else pending/in-progress), and (3) setting that
computed value on the JobVerification you append to verifications; implement
this logic inside GetJobVerifications (use the existing metrics slice and
parseMeasurements result to derive each metric's state) so the returned slice
aligns with the new oapi.JobWithVerifications schema consumed by the server.
Status computation is handled in the server handler (server.go), not the getter. The getter intentionally only fetches data — the handler calls v.Status() (defined in oapi.go) on each JobVerification and includes the computed status when serializing the response via gin.H. This keeps the getter implementation-agnostic per our architecture.
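The repo computes the aggregate in Go via oapi.JobVerification.Status(); that method's exact rules aren't shown in this thread, but the general shape of such an aggregation, using the ['passed', 'running', 'failed'] enum from the spec, looks roughly like this hypothetical TypeScript sketch (all field names are illustrative assumptions, not the real model):

```typescript
type VerificationStatus = "passed" | "running" | "failed";

// Hypothetical per-metric summary derived from its measurements.
interface MetricState {
  failures: number;
  failureThreshold: number;
  successes: number;
  requiredCount: number;
}

function aggregateStatus(metrics: MetricState[]): VerificationStatus {
  // Any metric past its failure threshold fails the whole verification.
  if (metrics.some((m) => m.failures >= m.failureThreshold)) return "failed";
  // Every metric has collected enough successful measurements: passed.
  if (metrics.every((m) => m.successes >= m.requiredCount)) return "passed";
  // Otherwise measurements are still being collected.
  return "running";
}
```

Keeping this in one place (a method on the verification model, as the author describes) means the handler only serializes the result and the getter stays a pure data layer.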
🧩 Analysis chain
🏁 Script executed:

```shell
#!/bin/bash
# Check the server.go handler to verify v.Status() usage
rg -n "Status" apps/workspace-engine/svc/http/server/openapi/release_targets/server.go -A2 -B2
```

Length of output: 2177
@adityachoudhari26, you're absolutely right — I can confirm this in server.go. Lines 108–119 explicitly build a verificationsWithStatus slice using gin.H that calls v.Status() on each JobVerification and serializes it as the "status" field (Line 115), and that slice is then included in the final response at Line 132. The getter is correctly kept as a pure data layer. My original comment was a false positive — apologies for the noise.
🧠 Learnings used
Learnt from: adityachoudhari26
Repo: ctrlplanedev/ctrlplane PR: 637
File: packages/events/src/kafka/client.ts:10-16
Timestamp: 2025-08-01T04:41:41.345Z
Learning: User adityachoudhari26 prefers not to add null safety checks for required environment variables when they are guaranteed to be present in their deployment configuration, similar to their preference for simplicity over defensive programming in test code.
Learnt from: adityachoudhari26
Repo: ctrlplanedev/ctrlplane PR: 601
File: e2e/tests/api/policies/retry-policy.spec.ts:23-24
Timestamp: 2025-06-24T23:52:50.732Z
Learning: The user adityachoudhari26 prefers not to add null safety checks or defensive programming in test code, particularly in e2e tests, as they prioritize simplicity and focus on the main functionality being tested rather than comprehensive error handling within the test itself.
Learnt from: adityachoudhari26
Repo: ctrlplanedev/ctrlplane PR: 818
File: apps/workspace-engine/pkg/db/queries/schema.sql:252-282
Timestamp: 2026-02-26T23:01:30.641Z
Learning: User adityachoudhari26 prefers to defer adding database indexes on foreign keys until there's evidence they're needed, considering such additions premature optimization at the initial schema design stage.
```go
var state oapi.ReleaseTargetState

desiredRelease, err := rt.getter.GetDesiredRelease(ctx, target)
if err != nil && !errors.Is(err, pgx.ErrNoRows) {
	c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
	return
}
state.DesiredRelease = desiredRelease

currentRelease, err := rt.getter.GetCurrentRelease(ctx, target)
if err != nil && !errors.Is(err, pgx.ErrNoRows) {
	c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
	return
}
state.CurrentRelease = currentRelease

latestJob, err := rt.getter.GetLatestJobWithMetadata(ctx, target)
if err != nil && !errors.Is(err, pgx.ErrNoRows) {
	c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
	return
}

if latestJob != nil {
	jobID, _ := uuid.Parse(latestJob.Id)
	verifications, err := rt.getter.GetJobVerifications(ctx, jobID)
	if err != nil {
		c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
		return
	}

	state.LatestJob = &oapi.JobWithVerifications{
		Job:           *latestJob,
		Verifications: verifications,
	}
}

c.JSON(http.StatusOK, state)
```
This handler still returns the pre-status model.
Line 57 builds oapi.ReleaseTargetState, so the new latestJob.verifications[].status field from ReleaseTargetStateResponse never gets serialized. As written, this endpoint still returns the old shape and does not expose the aggregate verification status requested by the PR.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/workspace-engine/svc/http/server/openapi/release_targets/server.go`
around lines 57 - 93, The handler currently builds an oapi.ReleaseTargetState
(old model) so the new verification status field never gets serialized; change
the response to use oapi.ReleaseTargetStateResponse instead of
oapi.ReleaseTargetState and ensure LatestJob.Verifications carries the new
status field by mapping the results from rt.getter.GetJobVerifications into the
response model (oapi.JobWithVerifications) including each verification's Status
(populate it from the verification's existing status/aggregate method or field
returned by GetJobVerifications), then return that ReleaseTargetStateResponse
via c.JSON.
Actionable comments posted: 1
🧹 Nitpick comments (1)
apps/workspace-engine/oapi/openapi.json (1)
2000-2003: Reuse `JobVerificationStatus` here instead of a raw string. The handler computes this from `oapi.JobVerification.Status()`, and the spec already defines `components.schemas.JobVerificationStatus`. Referencing that enum keeps generated clients typed and avoids this response drifting from the rest of the verification contract.

♻️ Suggested schema tweak

```diff
- "status": {
-   "description": "Computed aggregate status of this verification",
-   "type": "string"
- }
+ "status": {
+   "$ref": "#/components/schemas/JobVerificationStatus"
+ }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/workspace-engine/oapi/openapi.json` around lines 2000 - 2003, The "status" response property is declared as a raw string but should reference the existing enum schema; update the "status" property (the one computed via oapi.JobVerification.Status()) to use a $ref to the shared enum components.schemas.JobVerificationStatus (e.g. "$ref": "#/components/schemas/JobVerificationStatus") instead of "type":"string" so generated clients get the correct typed enum and the response aligns with JobVerificationStatus.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@apps/api/src/routes/v1/workspaces/release-targets.ts`:
- Around line 269-292: This code performs per-target GETs via releaseTargets.map
+ Promise.all calling getClientFor(...).GET which causes N+1 load and brittle
all-or-nothing failure; change this to use a batched request to workspace-engine
(e.g., add/consume an endpoint that accepts an array of releaseTargetKeys and
returns states) by constructing the list of keys and calling a single batch
endpoint instead of per-target GETs, and if a batch endpoint is not yet
available: replace Promise.all with Promise.allSettled or add concurrency
limiting (p-limit) around the per-target getClientFor(...).GET calls to avoid
flooding the service; ensure you still map results back to releaseTarget objects
and convert errors into ApiError only when appropriate (aggregate partial
failures or surface per-item errors) so successful states are returned rather
than dropped.
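Absent a batch endpoint, the fallback described above (bounded concurrency plus per-item settlement) can be sketched without pulling in p-limit. Everything here is illustrative: mapWithLimit is a hypothetical helper, and in the real route the callback would wrap the getClientFor(...).GET call.

```typescript
// Run fn over items with at most `limit` in flight, settling each item
// individually so one failure does not discard the successful results.
async function mapWithLimit<T, R>(
  items: T[],
  limit: number,
  fn: (item: T) => Promise<R>,
): Promise<PromiseSettledResult<R>[]> {
  const results = new Array<PromiseSettledResult<R>>(items.length);
  let next = 0;
  // Each worker pulls the next unclaimed index; JS is single-threaded,
  // so the shared cursor needs no locking.
  const worker = async () => {
    while (next < items.length) {
      const i = next++;
      try {
        results[i] = { status: "fulfilled", value: await fn(items[i]) };
      } catch (reason) {
        results[i] = { status: "rejected", reason };
      }
    }
  };
  await Promise.all(
    Array.from({ length: Math.min(limit, items.length) }, worker),
  );
  return results;
}
```

A caller would then split fulfilled from rejected entries and surface per-target errors, rather than letting a single failed state fetch reject the whole Promise.all.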
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 6160648e-56fc-49bb-b6e2-e3e5011537e8
📒 Files selected for processing (8)
- apps/api/src/routes/v1/workspaces/release-targets.ts
- apps/web/app/api/openapi.ts
- apps/workspace-engine/oapi/openapi.json
- apps/workspace-engine/oapi/spec/paths/release_targets.jsonnet
- apps/workspace-engine/pkg/db/jobs.sql.go
- apps/workspace-engine/pkg/db/queries/jobs.sql
- apps/workspace-engine/svc/http/server/openapi/release_targets/server.go
- packages/workspace-engine-sdk/src/schema.ts
✅ Files skipped from review due to trivial changes (1)
- apps/workspace-engine/pkg/db/jobs.sql.go
🚧 Files skipped from review as they are similar to previous changes (3)
- apps/workspace-engine/oapi/spec/paths/release_targets.jsonnet
- apps/workspace-engine/pkg/db/queries/jobs.sql
- packages/workspace-engine-sdk/src/schema.ts
```diff
 const items = await Promise.all(
   releaseTargets.map(async (rt) => {
-    const state = await computeReleaseTargetState(
-      rt.resourceId,
-      rt.environmentId,
-      rt.deploymentId,
-    );
+    const releaseTargetKey = `${rt.resourceId}-${rt.environmentId}-${rt.deploymentId}`;
+    const { data, error } = await getClientFor(workspaceId).GET(
+      "/v1/workspaces/{workspaceId}/release-targets/{releaseTargetKey}/state",
+      { params: { path: { workspaceId, releaseTargetKey } } },
+    );
+
+    if (error != null)
+      throw new ApiError(
+        error.error ?? "Failed to get release target state",
+        400,
+      );
+
     return {
       releaseTarget: {
         resourceId: rt.resourceId,
         environmentId: rt.environmentId,
         deploymentId: rt.deploymentId,
       },
-      state,
+      state: data,
     };
   }),
 );
```
There was a problem hiding this comment.
🧩 Analysis chain
🏁 Script executed:

```shell
#!/bin/bash
# Description: Check if workspace-engine has or could have a batch endpoint for release target states
# Search for batch-related endpoints in workspace-engine
rg -n 'release.?target' --glob '*.go' --glob '*.jsonnet' apps/workspace-engine/ | grep -i 'batch\|bulk\|multi' || echo "No batch endpoint found"
# Check the current endpoint implementation
rg -nA 5 'GetReleaseTargetState' apps/workspace-engine/
```

Repository: ctrlplanedev/ctrlplane
Length of output: 3184

🏁 Script executed:

```shell
# First, let's verify the actual code at those lines
head -300 apps/api/src/routes/v1/workspaces/release-targets.ts | tail -60
```

Length of output: 1817

🏁 Script executed:

```shell
# Check the SDK client initialization and timeout configuration
rg -n 'getClientFor|timeout|HttpClient' apps/api/src/routes/v1/workspaces/release-targets.ts -B 3 -A 3
```

Length of output: 1162

🏁 Script executed:

```shell
# Search more broadly for batch/bulk endpoints in the API
rg -n 'batch|bulk|multi' apps/workspace-engine/ --glob '*.go' | grep -i 'release.*target\|endpoint\|handler' | head -20
```

Length of output: 462

🏁 Script executed:

```shell
# Check if there's a similar pattern elsewhere in the codebase that might indicate this is intentional
rg -n 'Promise.all.*map.*async.*GET' apps/api/src/routes/ -A 5 | head -40
```

Length of output: 48

🏁 Script executed:

```shell
# Check the workspace-engine SDK client configuration for timeouts
find apps -name '*workspace-engine-sdk*' -o -name '*sdk*' | grep -i workspace | head -20
```

Length of output: 48

🏁 Script executed:

```shell
# Look for the SDK client source/types
rg -l 'workspace-engine-sdk' apps/api/src --glob '*.ts' | head -5
```

Length of output: 188

🏁 Script executed:

```shell
# Check if there's any timeout or concurrency configuration in the SDK
rg -n 'timeout|Timeout|concurrent|Concurrent' apps/api/src/routes/v1/workspaces/release-targets.ts -B 2 -A 2
```

Length of output: 48

🏁 Script executed:

```shell
# Check the workspace-engine API definition to understand its capacity/design
find apps/workspace-engine -name '*.yaml' -o -name '*.yml' -o -name '*.json' | xargs grep -l 'release.*target\|openapi' 2>/dev/null | head -5
```

Length of output: 144

🏁 Script executed:

```shell
# Search for any load testing, performance notes, or architectural decisions about the GetReleaseTargetState endpoint
rg -n 'GetReleaseTargetState|release.*target.*state' apps/workspace-engine/svc/http/server/openapi/release_targets/ -A 10 | head -50
```

Length of output: 1179

🏁 Script executed:

```shell
# Look at the SDK package to understand timeout behavior
find . -path '*/workspace-engine-sdk*' -type f | head -20
```

Length of output: 389

🏁 Script executed:

```shell
# Check workspace-engine-sdk package.json or exports
rg -l 'workspace-engine-sdk' packages/ --type json | head -5
```

Length of output: 135

🏁 Script executed:

```shell
# Look for the actual SDK source
fd -e ts -e tsx 'workspace-engine-sdk' packages/ | head -10
```

Length of output: 48

🏁 Script executed:

```shell
# Check what packages/api depends on or exports for workspace-engine-sdk
cat packages/api/package.json | grep -A 5 -B 5 workspace-engine
```

Length of output: 123

🏁 Script executed:

```shell
# Search for how getClientFor is implemented
rg -n 'export.*getClientFor\|function getClientFor' packages/ apps/ -A 10 | head -60
```

Length of output: 48

🏁 Script executed:

```shell
# Check the SDK source code
cat packages/workspace-engine-sdk/src/index.ts
```

Length of output: 978

🏁 Script executed:

```shell
# Check the SDK config
cat packages/workspace-engine-sdk/src/config.ts
```

Length of output: 344

🏁 Script executed:

```shell
# Look for the actual client implementation and any timeout settings
rg -n 'timeout|Timeout' packages/workspace-engine-sdk/src/ -A 2 -B 2
```

Length of output: 1371

🏁 Script executed:

```shell
# Check if there are any architectural decisions or comments about the N+1 pattern
rg -n 'N\+1|n\+1|batch' apps/api/src/routes/v1/workspaces/release-targets.ts -B 5 -A 5
```

Length of output: 48

🏁 Script executed:

```shell
# Check if there's pagination or limits already in place
rg -n 'limit.*50\|limitVal' apps/api/src/routes/v1/workspaces/release-targets.ts -B 2 -A 2
```

Length of output: 48

🏁 Script executed:

```shell
# Check if there's any documentation about workspace-engine capacity or performance
rg -n 'capacity|concurrent|load|perform' apps/workspace-engine/README.md apps/workspace-engine/docs/ 2>/dev/null | head -20
```

Length of output: 326

🏁 Script executed:

```shell
# Verify if this endpoint is frequently called and if there are any issues filed about performance
git log --oneline -n 50 -- apps/api/src/routes/v1/workspaces/release-targets.ts 2>/dev/null | head -20
```

Length of output: 98

🏁 Script executed:

```shell
# Check if timeout is being set anywhere in the API routes
rg -n 'timeout' apps/api/src/routes/v1/workspaces/ --glob '*.ts' -B 2 -A 2
```

Length of output: 48

🏁 Script executed:

```shell
# See what options are available in ClientOptions for openapi-fetch
rg -n 'ClientOptions|interface.*Options' packages/workspace-engine-sdk/src/ -A 5
```

Length of output: 1036

🏁 Script executed:

```shell
# Check openapi-fetch documentation or types in node_modules if available
rg -n 'ClientOptions|export.*interface' packages/workspace-engine-sdk/node_modules/openapi-fetch 2>/dev/null | head -20
```

Length of output: 48

🏁 Script executed:

```shell
# Look for any timeout handling in the fetch wrapper
cat packages/workspace-engine-sdk/src/index.ts | grep -A 10 "fetch:"
```

Length of output: 355

🏁 Script executed:

```shell
# Check if there's any middleware or request interceptor that handles timeouts
rg -n 'AbortController|timeout|abort' apps/api/src/ --glob '*.ts' | grep -i 'client\|fetch\|request' | head -20
```

Length of output: 175

🏁 Script executed:

```shell
# Verify the actual endpoint route handler in workspace-engine to understand its performance characteristics
cat apps/workspace-engine/svc/http/server/openapi/release_targets/server.go | head -100
```

Length of output: 2795
N+1 HTTP calls pattern creates unnecessary load on workspace-engine.
This implementation makes one HTTP call to workspace-engine for each release target (up to 50 by default). With Promise.all, all calls run in parallel, and since each GetReleaseTargetState call triggers multiple database queries (desired release, current release, latest job, and verifications), this pattern puts significant load on workspace-engine under concurrent traffic.
Additionally, if any single call fails, Promise.all rejects immediately, discarding successful results from other targets.
Consider:
- Adding a batch endpoint to workspace-engine that accepts multiple release target keys
- Using `Promise.allSettled` to return partial results when individual calls fail
- Adding concurrency limits (e.g., `p-limit`) to avoid overwhelming the service
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/api/src/routes/v1/workspaces/release-targets.ts` around lines 269 - 292,
This code performs per-target GETs via releaseTargets.map + Promise.all calling
getClientFor(...).GET which causes N+1 load and brittle all-or-nothing failure;
change this to use a batched request to workspace-engine (e.g., add/consume an
endpoint that accepts an array of releaseTargetKeys and returns states) by
constructing the list of keys and calling a single batch endpoint instead of
per-target GETs, and if a batch endpoint is not yet available: replace
Promise.all with Promise.allSettled or add concurrency limiting (p-limit) around
the per-target getClientFor(...).GET calls to avoid flooding the service; ensure
you still map results back to releaseTarget objects and convert errors into
ApiError only when appropriate (aggregate partial failures or surface per-item
errors) so successful states are returned rather than dropped.
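The allSettled-plus-concurrency-cap idea above can be sketched without any external dependency. This is a generic helper, not code from the PR; `fn` stands in for the per-target `getClientFor(...).GET` call:

```typescript
// Minimal bounded-concurrency runner: at most `limit` calls in flight,
// with per-item results instead of all-or-nothing failure.
async function mapWithLimit<T, R>(
  items: T[],
  limit: number,
  fn: (item: T) => Promise<R>,
): Promise<PromiseSettledResult<R>[]> {
  const pending: Promise<R>[] = new Array(items.length);
  let next = 0;
  // Each worker handles one item at a time until the list is drained.
  const worker = async (): Promise<void> => {
    while (next < items.length) {
      const i = next++;
      pending[i] = fn(items[i]);
      // Wait for this item, but swallow the rejection here so the worker
      // survives; allSettled below still records the failure.
      await pending[i].catch(() => undefined);
    }
  };
  await Promise.all(
    Array.from({ length: Math.min(limit, items.length) }, worker),
  );
  // Preserves per-item success/failure instead of rejecting the whole batch.
  return Promise.allSettled(pending);
}
```

Callers can then map each settled result back to its release target and attach either a `state` or an `error` field, so one failing target no longer discards the others.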
Actionable comments posted: 2
♻️ Duplicate comments (2)
apps/api/src/types/openapi.ts (1)
1624-1646: ⚠️ Potential issue | 🟠 Major

`ReleaseTargetWithState.state` still exposes the old schema. This file is generated, but the source component is still wrong: `ReleaseTargetWithState.state` remains `ReleaseTargetState` while the bulk `/release-targets/state` handler now returns `ReleaseTargetStateResponse`. Generated clients for that POST endpoint still cannot read `latestJob.verifications[].status`. Update the source OpenAPI schema to use `ReleaseTargetStateResponse` here and regenerate.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/api/src/types/openapi.ts` around lines 1624 - 1646, the OpenAPI schema still references the old ReleaseTargetState for ReleaseTargetWithState.state; update the source component so ReleaseTargetWithState.state uses ReleaseTargetStateResponse (which contains latestJob.verifications[] with status) instead of ReleaseTargetState, then regenerate the OpenAPI types so generated clients for the POST /release-targets/state endpoint can read latestJob.verifications[].status correctly (look for the ReleaseTargetWithState object, replace its state reference with ReleaseTargetStateResponse, then run the schema/codegen step).

apps/api/src/routes/v1/workspaces/release-targets.ts (1)

269-294: ⚠️ Potential issue | 🟠 Major

This batch endpoint still fans out one `/state` request per target. With the default page size, one API call can trigger 50 parallel workspace-engine reads. That adds avoidable upstream load, and `Promise.all` still turns any single target failure into a full request failure. If a true batch endpoint is not ready yet, cap concurrency or return per-item errors instead.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/api/src/routes/v1/workspaces/release-targets.ts` around lines 269 - 294, the current implementation fans out parallel GETs over releaseTargets using Promise.all and throws ApiError on any single failure; change this to limit concurrency and return per-item errors instead of failing the whole batch: replace the Promise.all over releaseTargets with a bounded-concurrency executor (e.g., p-limit or manual chunking) that calls getClientFor(workspaceId).GET for each releaseTargetKey and captures its result; on error, do not throw ApiError but record an error field on that item's result (include response.status and error.error), while successful responses populate state; update the returned item shape so it includes either state or an error entry, so one failing target no longer aborts the entire request.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@apps/api/src/routes/v1/workspaces/release-targets.ts`:
- Around line 223-227: The SDK's getClientFor function currently returns a
single cached client that keeps the initial x-workspace-id header, causing
cross-workspace requests (seen where getClientFor(workspaceId).GET is called to
fetch /v1/workspaces/{workspaceId}/release-targets/{releaseTargetKey}/state and
in the loop below). Fix by making the client cache keyed by workspaceId (so each
workspace gets its own client) or by ensuring x-workspace-id is set per-request
before calling GET; update the client creation logic in
packages/workspace-engine-sdk/src/index.ts to include workspaceId in the cache
key or add a method to attach/override the x-workspace-id header on each request
and use that here (refer to getClientFor and the GET call above).
---
Duplicate comments:
In `@apps/api/src/routes/v1/workspaces/release-targets.ts`:
- Around line 269-294: The current implementation fans out parallel GETs over
releaseTargets using Promise.all and throws ApiError on any single failure;
change this to limit concurrency and return per-item errors instead of failing
the whole batch: replace the Promise.all over releaseTargets in the block that
builds items with a bounded-concurrency executor (e.g., p-limit or manual
chunking) to call getClientFor(workspaceId).GET for each releaseTargetKey and
capture its result, and on error do not throw ApiError but record an error field
on that item's result (include response.status and error.error) while successful
responses populate state; update the returned item shape produced in that
mapping (the code that currently constructs { releaseTarget: {...}, state: data
}) to include either state or an error entry so one failing target no longer
aborts the entire request.
In `@apps/api/src/types/openapi.ts`:
- Around line 1624-1646: The OpenAPI schema still references the old
ReleaseTargetState for ReleaseTargetWithState.state; update the source component
so ReleaseTargetWithState.state uses ReleaseTargetStateResponse (which contains
latestJob.verifications[] with status) instead of ReleaseTargetState, then
regenerate the OpenAPI types so generated clients for the POST
/release-targets/state endpoint can read latestJob.verifications[].status
correctly (look for the ReleaseTargetWithState object and replace its state
reference to ReleaseTargetStateResponse, then run the schema/codegen step).
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: ce0ba052-e4e9-491f-a44f-e61eef8f5776
📒 Files selected for processing (5)
- apps/api/openapi/openapi.json
- apps/api/openapi/paths/release-targets.jsonnet
- apps/api/src/routes/v1/workspaces/release-targets.ts
- apps/api/src/types/openapi.ts
- apps/web/app/api/openapi.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- apps/api/openapi/openapi.json
```diff
  const { workspaceId, releaseTargetKey } = req.params;

- const rtExists = await db
-   .select({ id: schema.releaseTargetDesiredRelease.id })
-   .from(schema.releaseTargetDesiredRelease)
-   .where(
-     and(
-       eq(schema.releaseTargetDesiredRelease.resourceId, resourceId),
-       eq(schema.releaseTargetDesiredRelease.environmentId, environmentId),
-       eq(schema.releaseTargetDesiredRelease.deploymentId, deploymentId),
-     ),
-   )
-   .then((rows) => rows[0]);
-
- if (rtExists == null) throw new ApiError("Release target not found", 404);
-
- const state = await computeReleaseTargetState(
-   resourceId,
-   environmentId,
-   deploymentId,
+ const { data, error, response } = await getClientFor(workspaceId).GET(
+   "/v1/workspaces/{workspaceId}/release-targets/{releaseTargetKey}/state",
+   {
+     params: { path: { workspaceId, releaseTargetKey } },
+   },
```
bypassCache is silently ignored here.
The public handler still accepts bypassCache, but this proxy request never forwards it or rejects it. Clients can ask for a cache bypass and still get the default behavior with no signal. Please either pass the flag upstream or remove it from the public contract.
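If the flag stays in the public contract, one option is to forward it as a query parameter. This is a sketch only: a `bypassCache` query parameter on the engine route is an assumption, not a documented part of its API, and the stub client stands in for the real SDK:

```typescript
// Sketch: forward the caller's bypassCache flag upstream instead of dropping it.
// The engine-side `bypassCache` query parameter is assumed, not confirmed.
interface GetOpts {
  params: {
    path: Record<string, string>;
    query?: { bypassCache?: boolean };
  };
}

// Stub standing in for the workspace-engine SDK client.
const stubClient = {
  GET: async (_path: string, opts: GetOpts) => ({
    data: { bypassForwarded: opts.params.query?.bypassCache ?? false },
    error: null as string | null,
  }),
};

async function getState(
  workspaceId: string,
  releaseTargetKey: string,
  bypassCache: boolean,
) {
  const { data, error } = await stubClient.GET(
    "/v1/workspaces/{workspaceId}/release-targets/{releaseTargetKey}/state",
    // Same shape openapi-fetch uses: path params plus an optional query object.
    { params: { path: { workspaceId, releaseTargetKey }, query: { bypassCache } } },
  );
  if (error != null) throw new Error(error);
  return data;
}
```

The alternative, dropping `bypassCache` from the public schema, avoids the silent no-op without touching the engine at all.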
```ts
const { data, error, response } = await getClientFor(workspaceId).GET(
  "/v1/workspaces/{workspaceId}/release-targets/{releaseTargetKey}/state",
  {
    params: { path: { workspaceId, releaseTargetKey } },
  },
```
getClientFor(workspaceId) is unsafe with the current SDK cache.
packages/workspace-engine-sdk/src/index.ts caches a single client and only sets x-workspace-id on first construction. After the first request, both this call and the loop below can proxy later workspaces with a stale header, causing cross-workspace state lookups or false failures. Key the cache by workspaceId or attach the header per request before routing state reads through this client.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/api/src/routes/v1/workspaces/release-targets.ts` around lines 223 - 227,
The SDK's getClientFor function currently returns a single cached client that
keeps the initial x-workspace-id header, causing cross-workspace requests (seen
where getClientFor(workspaceId).GET is called to fetch
/v1/workspaces/{workspaceId}/release-targets/{releaseTargetKey}/state and in the
loop below). Fix by making the client cache keyed by workspaceId (so each
workspace gets its own client) or by ensuring x-workspace-id is set per-request
before calling GET; update the client creation logic in
packages/workspace-engine-sdk/src/index.ts to include workspaceId in the cache
key or add a method to attach/override the x-workspace-id header on each request
and use that here (refer to getClientFor and the GET call above).
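A per-workspace cache along the lines suggested above could look like the following. The `Client` shape and `makeClient` factory are illustrative stand-ins, not the SDK's actual types:

```typescript
// Hypothetical fix sketch: key the client cache by workspaceId so the
// x-workspace-id header bound at construction can never go stale.
interface Client {
  workspaceId: string; // stands in for the header baked into the real client
  GET: (path: string) => Promise<{ workspaceId: string; path: string }>;
}

const clients = new Map<string, Client>();

// Stand-in for the SDK's real client factory.
function makeClient(workspaceId: string): Client {
  return {
    workspaceId,
    GET: async (path) => ({ workspaceId, path }),
  };
}

function getClientFor(workspaceId: string): Client {
  let client = clients.get(workspaceId);
  if (client === undefined) {
    client = makeClient(workspaceId);
    clients.set(workspaceId, client);
  }
  return client;
}
```

Because the cache key includes the workspace, repeated calls for the same workspace reuse one client while different workspaces can never share a stale header.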
Resolves #875
Summary by CodeRabbit