Conversation
Warning: Rate limit exceeded. @jsbroks has exceeded the limit for the number of commits or files that can be reviewed per hour. Please wait 15 minutes and 42 seconds before requesting another review.

⌛ How to resolve this issue? After the wait time has elapsed, a review can be triggered again. We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work? CodeRabbit enforces hourly rate limits for each developer per organization. Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout. Please see our FAQ for further information.
Walkthrough

Adds workspace create UI and tRPC mutation; implements end-to-end deployment tracing (in-memory + DB stores, recorder, persistence, schema, docs, tests); threads the recorder through releasemanager (planning → eligibility → execution); renames many job status constants to JobStatus*; adds deployment traces UI and tRPC routes.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant Web as Frontend
    participant TRPC as tRPC
    participant DB as Database
    Web->>TRPC: workspace.create {name, slug}
    TRPC->>DB: SELECT WHERE slug
    alt slug exists
        DB-->>TRPC: conflict
        TRPC-->>Web: error
    else
        TRPC->>DB: BEGIN
        TRPC->>DB: INSERT workspace, user_role, UPDATE user.activeWorkspaceId
        TRPC->>DB: COMMIT
        TRPC-->>Web: created
    end
```
```mermaid
sequenceDiagram
    autonumber
    participant Orch as DeploymentOrchestrator
    participant Rec as ReconcileTarget (recorder)
    participant Planner as Planner
    participant Elig as Eligibility
    participant Exec as Executor
    participant Store as TraceStore (DB / InMemory)
    Orch->>Rec: create recorder (root span)
    Orch->>Planner: PlanDeployment(..., recorder)
    Planner->>Rec: Start PlanningPhase
    Planner->>Elig: ShouldCreateJob(..., recorder)
    Elig->>Rec: record checks/results
    Elig-->>Planner: eligibility result
    Planner->>Exec: ExecuteRelease(..., recorder)
    Exec->>Rec: record execution spans & Job trace token
    Exec-->>Orch: job created
    Orch->>Rec: Complete and Persist
    Rec->>Store: Persist() -> WriteSpans(batch)
    Store-->>Rec: persisted
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
Pre-merge checks and finishing touches

❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (1 passed)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Signed-off-by: Justin Brooks <jsbroks@gmail.com>
Actionable comments posted: 6
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
apps/workspace-engine/pkg/workspace/releasemanager/manager.go (1)
383-424: Do not overwrite failed recorder status during ReconcileTarget

`recorder.Complete(trace.StatusFailed)` is invoked on error paths, but the deferred block always finishes by calling `Complete(trace.StatusCompleted)`, masking every failure as success. Mirror the pattern above: keep a `status` variable, update it when an error occurs, and call `Complete` once inside the defer.

```diff
-	recorder := trace.NewReconcileTarget(workspaceID, releaseTarget.Key())
-
-	// Ensure trace is persisted even if reconciliation fails
-	defer func() {
-		// Complete the trace recorder with appropriate status
-		status := trace.StatusCompleted
-		if span.SpanContext().IsValid() {
-			// Check if span has error status
-			// We'll default to completed, errors will be captured in the trace phases
-		}
-		recorder.Complete(status)
-
+	recorder := trace.NewReconcileTarget(workspaceID, releaseTarget.Key())
+	status := trace.StatusCompleted
+	defer func() {
+		recorder.Complete(status)
 		// Persist traces - log error but don't fail reconciliation
 		if err := recorder.Persist(m.traceStore); err != nil {
 			log.Error("Failed to persist deployment trace",
 				"workspace_id", workspaceID,
 				"release_target", releaseTarget.Key(),
 				"error", err.Error())
 		}
 	}()
@@
 	resource, exists := m.store.Resources.Get(releaseTarget.ResourceId)
 	if !exists {
 		err := fmt.Errorf("resource %q not found", releaseTarget.ResourceId)
 		span.RecordError(err)
 		span.SetStatus(codes.Error, "resource not found")
-		recorder.Complete(trace.StatusFailed)
+		status = trace.StatusFailed
 		return err
 	}
@@
 	if err != nil {
 		span.RecordError(err)
 		span.SetStatus(codes.Error, "failed to get relationships")
-		recorder.Complete(trace.StatusFailed)
+		status = trace.StatusFailed
 		return err
 	}
```
🧹 Nitpick comments (3)
apps/workspace-engine/pkg/workspace/store/store.go (1)

68-70: Add a doc comment for the exported method.

Please add a short GoDoc so users understand its purpose (trace/log correlation). As per coding guidelines.

```diff
+// ID returns the workspace ID associated with this Store. Useful for
+// correlating logs and traces that operate on this store instance.
 func (s *Store) ID() string { return s.id }
```

apps/workspace-engine/pkg/workspace/workspace.go (1)

25-27: Fail fast with a clearer message if traceStore is missing.

Surface the configuration error here instead of relying on a downstream panic. As per coding guidelines.

```diff
-	// Create release manager with trace store (will panic if nil)
-	ws.releasemanager = releasemanager.New(s, ws.traceStore)
+	// Create release manager with trace store (mandatory)
+	if ws.traceStore == nil {
+		panic("workspace.New: traceStore is required; provide workspace.WithTraceStore(...)")
+	}
+	ws.releasemanager = releasemanager.New(s, ws.traceStore)
```

apps/workspace-engine/pkg/workspace/releasemanager/deployment/executor.go (1)

63-88: Log trace steps only after the work succeeds

Lines 67 and 82 mark the "Persist release" and "Create job" steps as `StepResultPass` before the respective operations succeed. When an error occurs later, the trace still shows a successful step as well as the failure entry, which is misleading. Please move the success logging after each operation completes so the trace mirrors reality.

```diff
@@
-	if createJobAction != nil {
-		createJobAction.AddStep("Persist release", trace.StepResultPass, "Release persisted to store")
-	}
 	if err := e.store.Releases.Upsert(ctx, releaseToDeploy); err != nil {
 		span.RecordError(err)
 		span.SetStatus(codes.Error, "failed to persist release")
 		if createJobAction != nil {
 			createJobAction.AddStep("Persist release", trace.StepResultFail, fmt.Sprintf("Failed: %s", err.Error()))
 		}
 		return nil, err
 	}
+	if createJobAction != nil {
+		createJobAction.AddStep("Persist release", trace.StepResultPass, "Release persisted to store")
+	}
@@
-	if createJobAction != nil {
-		createJobAction.AddStep("Create job", trace.StepResultPass, "Job created")
-	}
 	newJob, err := e.jobFactory.CreateJobForRelease(ctx, releaseToDeploy)
 	if err != nil {
 		span.RecordError(err)
 		span.SetStatus(codes.Error, "failed to create job")
 		if createJobAction != nil {
 			createJobAction.AddStep("Create job", trace.StepResultFail, fmt.Sprintf("Failed: %s", err.Error()))
 		}
 		return nil, err
 	}
+	if createJobAction != nil {
+		createJobAction.AddStep("Create job", trace.StepResultPass, "Job created")
+	}
```
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (23)
- apps/web/app/routes.ts (1 hunks)
- apps/web/app/routes/workspaces/create.tsx (1 hunks)
- apps/workspace-engine/main.go (3 hunks)
- apps/workspace-engine/oapi/openapi.json (1 hunks)
- apps/workspace-engine/pkg/oapi/oapi.gen.go (3 hunks)
- apps/workspace-engine/pkg/workspace/opts.go (1 hunks)
- apps/workspace-engine/pkg/workspace/releasemanager/deployment/executor.go (6 hunks)
- apps/workspace-engine/pkg/workspace/releasemanager/deployment/job_eligibility.go (6 hunks)
- apps/workspace-engine/pkg/workspace/releasemanager/deployment/planner.go (11 hunks)
- apps/workspace-engine/pkg/workspace/releasemanager/deployment_orchestrator.go (6 hunks)
- apps/workspace-engine/pkg/workspace/releasemanager/manager.go (12 hunks)
- apps/workspace-engine/pkg/workspace/releasemanager/trace/DATABASE_USAGE.md (1 hunks)
- apps/workspace-engine/pkg/workspace/releasemanager/trace/db_store.go (1 hunks)
- apps/workspace-engine/pkg/workspace/releasemanager/trace/recorder.go (1 hunks)
- apps/workspace-engine/pkg/workspace/store/store.go (3 hunks)
- apps/workspace-engine/pkg/workspace/workspace.go (2 hunks)
- integrations/kubernetes-job-agent/src/config.ts (1 hunks)
- packages/auth/src/env.ts (1 hunks)
- packages/db/drizzle/0135_lumpy_overlord.sql (1 hunks)
- packages/db/drizzle/meta/_journal.json (1 hunks)
- packages/db/src/schema/deployment-trace.ts (1 hunks)
- packages/db/src/schema/index.ts (1 hunks)
- packages/trpc/src/routes/workspace.ts (2 hunks)
🧰 Additional context used
📓 Path-based instructions (3)
**/*.{js,jsx,ts,tsx,json,md,yml,yaml}
📄 CodeRabbit inference engine (CLAUDE.md)
Formatting: Prettier is used with `@ctrlplane/prettier-config`
Files:
- apps/workspace-engine/oapi/openapi.json
- packages/db/src/schema/deployment-trace.ts
- packages/db/src/schema/index.ts
- packages/db/drizzle/meta/_journal.json
- apps/web/app/routes/workspaces/create.tsx
- apps/web/app/routes.ts
- apps/workspace-engine/pkg/workspace/releasemanager/trace/DATABASE_USAGE.md
- packages/trpc/src/routes/workspace.ts
- integrations/kubernetes-job-agent/src/config.ts
- packages/auth/src/env.ts
apps/workspace-engine/**/*.go
📄 CodeRabbit inference engine (apps/workspace-engine/CLAUDE.md)
apps/workspace-engine/**/*.go: Do not add extraneous inline comments that state the obvious
Do not add comments that simply restate what the code does
Do not add comments for standard Go patterns (e.g., noting WaitGroup or semaphore usage)
Write comments that explain why, document complex logic/algorithms, provide non-obvious context, include TODO/FIXME, and document exported functions/types/methods
Files:
- apps/workspace-engine/pkg/workspace/store/store.go
- apps/workspace-engine/pkg/workspace/opts.go
- apps/workspace-engine/pkg/workspace/releasemanager/deployment/executor.go
- apps/workspace-engine/pkg/workspace/releasemanager/trace/recorder.go
- apps/workspace-engine/pkg/workspace/releasemanager/deployment_orchestrator.go
- apps/workspace-engine/pkg/oapi/oapi.gen.go
- apps/workspace-engine/pkg/workspace/releasemanager/deployment/job_eligibility.go
- apps/workspace-engine/pkg/workspace/workspace.go
- apps/workspace-engine/pkg/workspace/releasemanager/deployment/planner.go
- apps/workspace-engine/pkg/workspace/releasemanager/trace/db_store.go
- apps/workspace-engine/pkg/workspace/releasemanager/manager.go
- apps/workspace-engine/main.go
**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
**/*.{ts,tsx}: Use TypeScript with explicit types (prefer interfaces for public APIs)
Import styles: Use named imports, group imports by source (std lib > external > internal)
Consistent type imports: `import type { Type } from "module"`
Prefer async/await over raw promises
Handle errors explicitly (use try/catch and typed error responses)
Files:
- packages/db/src/schema/deployment-trace.ts
- packages/db/src/schema/index.ts
- apps/web/app/routes/workspaces/create.tsx
- apps/web/app/routes.ts
- packages/trpc/src/routes/workspace.ts
- integrations/kubernetes-job-agent/src/config.ts
- packages/auth/src/env.ts
⚙️ CodeRabbit configuration file
**/*.{ts,tsx}: Note on Error Handling:
Avoid strict enforcement of try/catch blocks. Code may use early returns, Promise chains (.then().catch()), or other patterns for error handling. These are acceptable as long as they maintain clarity and predictability.
Files:
- packages/db/src/schema/deployment-trace.ts
- packages/db/src/schema/index.ts
- apps/web/app/routes/workspaces/create.tsx
- apps/web/app/routes.ts
- packages/trpc/src/routes/workspace.ts
- integrations/kubernetes-job-agent/src/config.ts
- packages/auth/src/env.ts
🧠 Learnings (14)
📚 Learning: 2025-08-12T18:13:54.630Z
Learnt from: CR
Repo: ctrlplanedev/ctrlplane PR: 0
File: apps/workspace-engine/CLAUDE.md:0-0
Timestamp: 2025-08-12T18:13:54.630Z
Learning: Applies to apps/workspace-engine/**/*.go : Write comments that explain why, document complex logic/algorithms, provide non-obvious context, include TODO/FIXME, and document exported functions/types/methods
Applied to files:
- apps/workspace-engine/pkg/workspace/store/store.go
- apps/workspace-engine/pkg/workspace/releasemanager/deployment_orchestrator.go
- apps/workspace-engine/pkg/workspace/workspace.go
- apps/workspace-engine/main.go
📚 Learning: 2025-06-01T19:10:11.535Z
Learnt from: adityachoudhari26
Repo: ctrlplanedev/ctrlplane PR: 579
File: packages/db/src/schema/rules/concurrency.ts:0-0
Timestamp: 2025-06-01T19:10:11.535Z
Learning: In the ctrlplane codebase, when defining database schemas with Drizzle ORM, it's an intentional pattern to spread base fields (like `basePolicyRuleFields`) and then redefine specific fields to add additional constraints (like unique constraints or foreign key references). The TypeScript field overwriting behavior is deliberately used to override base field definitions with more specific requirements.
Applied to files:
- packages/db/src/schema/deployment-trace.ts
- packages/trpc/src/routes/workspace.ts
📚 Learning: 2025-09-26T01:53:05.472Z
Learnt from: adityachoudhari26
Repo: ctrlplanedev/ctrlplane PR: 668
File: apps/webservice/src/app/api/v1/deployments/[deploymentId]/variables/variable-diff-check.ts:104-134
Timestamp: 2025-09-26T01:53:05.472Z
Learning: The deployment variable value tables in the ctrlplane/db schema do not have timestamp fields like createdAt or updatedAt. The deploymentVariableValueReference table has fields: id, variableValueId, reference, path, defaultValue. The deploymentVariableValue table has fields: id, variableId, resourceSelector, priority.
Applied to files:
packages/db/src/schema/deployment-trace.ts
📚 Learning: 2024-11-01T02:37:25.510Z
Learnt from: adityachoudhari26
Repo: ctrlplanedev/ctrlplane PR: 188
File: packages/api/src/router/deployment.ts:144-161
Timestamp: 2024-11-01T02:37:25.510Z
Learning: In `packages/api/src/router/deployment.ts`, when using Drizzle ORM, there is a limitation when referencing the same table twice in a relational builder query (rbq), requiring separate queries to avoid issues.
Applied to files:
packages/db/src/schema/deployment-trace.ts
📚 Learning: 2025-04-28T18:41:58.813Z
Learnt from: adityachoudhari26
Repo: ctrlplanedev/ctrlplane PR: 515
File: apps/webservice/src/app/api/v1/release-targets/[releaseTargetId]/releases/route.ts:103-108
Timestamp: 2025-04-28T18:41:58.813Z
Learning: In this project, full records from the `deployment` and `deployment_version` tables are considered safe for public API consumption, and there's no need to create restricted DTOs for them.
Applied to files:
packages/db/src/schema/deployment-trace.ts
📚 Learning: 2024-10-29T02:04:50.312Z
Learnt from: adityachoudhari26
Repo: ctrlplanedev/ctrlplane PR: 181
File: packages/api/src/router/deployment.ts:116-131
Timestamp: 2024-10-29T02:04:50.312Z
Learning: In `packages/api/src/router/deployment.ts`, the `list.byDeploymentId` procedure requires multiple database queries due to limitations of the `releaseMatchesCondition` function.
Applied to files:
packages/db/src/schema/deployment-trace.ts
📚 Learning: 2025-05-30T21:48:48.868Z
Learnt from: adityachoudhari26
Repo: ctrlplanedev/ctrlplane PR: 578
File: apps/webservice/src/app/[workspaceSlug]/(app)/resources/(sidebar)/providers/integrations/github/GithubDialog.tsx:58-58
Timestamp: 2025-05-30T21:48:48.868Z
Learning: In the ctrlplane codebase, the shadcn form UI version allows initializing forms with `useForm({ schema: formSchema })` directly, without needing to import and use `zodResolver` from 'hookform/resolvers/zod'. This is different from standard react-hook-form usage.
Applied to files:
apps/web/app/routes/workspaces/create.tsx
📚 Learning: 2025-10-24T00:02:29.723Z
Learnt from: adityachoudhari26
Repo: ctrlplanedev/ctrlplane PR: 696
File: apps/workspace-engine/pkg/workspace/releasemanager/deployment/executor.go:63-67
Timestamp: 2025-10-24T00:02:29.723Z
Learning: In workspace-engine replay mode (apps/workspace-engine/pkg/workspace/releasemanager/deployment/executor.go), jobs are created with Pending status even during replay because job updates are still accepted and processed. If a job remains Pending after replay completes, it genuinely is still pending because no updates occurred during replay.
Applied to files:
- apps/workspace-engine/pkg/workspace/releasemanager/deployment/executor.go
- apps/workspace-engine/pkg/workspace/releasemanager/deployment_orchestrator.go
- apps/workspace-engine/pkg/workspace/releasemanager/deployment/job_eligibility.go
📚 Learning: 2025-08-12T20:49:05.086Z
Learnt from: adityachoudhari26
Repo: ctrlplanedev/ctrlplane PR: 655
File: apps/workspace-engine/pkg/engine/workspace/fluent.go:166-171
Timestamp: 2025-08-12T20:49:05.086Z
Learning: The UpdateDeploymentVersions() method with OperationCreate case in apps/workspace-engine/pkg/engine/workspace/fluent.go is specifically designed only for creating new deployment versions, not for handling potential duplicates or existing versions.
Applied to files:
apps/workspace-engine/pkg/workspace/releasemanager/deployment_orchestrator.go
📚 Learning: 2025-08-12T18:13:54.630Z
Learnt from: CR
Repo: ctrlplanedev/ctrlplane PR: 0
File: apps/workspace-engine/CLAUDE.md:0-0
Timestamp: 2025-08-12T18:13:54.630Z
Learning: Applies to apps/workspace-engine/**/*.go : Do not add comments for standard Go patterns (e.g., noting WaitGroup or semaphore usage)
Applied to files:
- apps/workspace-engine/pkg/workspace/workspace.go
- apps/workspace-engine/main.go
📚 Learning: 2025-08-12T18:13:54.630Z
Learnt from: CR
Repo: ctrlplanedev/ctrlplane PR: 0
File: apps/workspace-engine/CLAUDE.md:0-0
Timestamp: 2025-08-12T18:13:54.630Z
Learning: Applies to apps/workspace-engine/**/*_test.go : Follow the existing test structure used in *_test.go files
Applied to files:
- apps/workspace-engine/pkg/workspace/releasemanager/deployment/planner.go
- apps/workspace-engine/main.go
📚 Learning: 2025-03-24T18:46:38.894Z
Learnt from: adityachoudhari26
Repo: ctrlplanedev/ctrlplane PR: 395
File: packages/api/src/router/environment-page/resources/router.ts:40-45
Timestamp: 2025-03-24T18:46:38.894Z
Learning: The `takeFirst` utility function in the codebase (from `ctrlplane/db`) throws an Error with message "Found non unique or inexistent value" if the result array doesn't contain exactly one element, making additional null/undefined checks unnecessary after its use.
Applied to files:
packages/trpc/src/routes/workspace.ts
📚 Learning: 2025-08-05T20:11:59.881Z
Learnt from: adityachoudhari26
Repo: ctrlplanedev/ctrlplane PR: 644
File: docker-compose.dev.yaml:58-59
Timestamp: 2025-08-05T20:11:59.881Z
Learning: In ctrlplanedev/ctrlplane, the development environment uses a hybrid setup where infrastructure services (Kafka, Redis, DB) run in Docker containers but applications run directly on the host via pnpm dev. For this setup, KAFKA_ADVERTISED_LISTENERS should use localhost addresses because host applications connect to Kafka via exposed port mapping, not inter-container communication.
Applied to files:
integrations/kubernetes-job-agent/src/config.ts
📚 Learning: 2025-08-05T20:11:46.668Z
Learnt from: adityachoudhari26
Repo: ctrlplanedev/ctrlplane PR: 644
File: docker-compose.dev.yaml:58-59
Timestamp: 2025-08-05T20:11:46.668Z
Learning: In the ctrlplane project, the development setup uses a hybrid architecture where infrastructure services (Kafka, Redis, PostgreSQL) run in Docker containers via docker-compose.dev.yaml, but applications run directly on the host using `pnpm dev`. This means KAFKA_ADVERTISED_LISTENERS should use `localhost` addresses since host applications connect through port mappings rather than Docker network communication.
Applied to files:
integrations/kubernetes-job-agent/src/config.ts
🧬 Code graph analysis (12)
- packages/db/src/schema/deployment-trace.ts (1)
  - packages/db/src/schema/workspace.ts (1): workspace (19-28)
- apps/web/app/routes/workspaces/create.tsx (2)
  - apps/web/app/api/trpc.tsx (1): trpc (15-15)
  - packages/db/src/schema/workspace.ts (1): workspace (19-28)
- apps/workspace-engine/pkg/workspace/opts.go (2)
  - apps/workspace-engine/pkg/workspace/workspace.go (1): Workspace (31-38)
  - apps/workspace-engine/pkg/workspace/releasemanager/manager.go (1): PersistenceStore (37-37)
- apps/workspace-engine/pkg/workspace/releasemanager/deployment/executor.go (5)
  - apps/workspace-engine/pkg/oapi/oapi.gen.go (5): Release (493-500), Job (318-330), Id (75-75), ExternalId (74-74), Status (81-81)
  - apps/workspace-engine/pkg/workspace/releasemanager/trace/recorder.go (1): ReconcileTarget (15-37)
  - apps/workspace-engine/pkg/workspace/releasemanager/trace/phases.go (2): ExecutionPhase (190-194), Action (280-284)
  - apps/workspace-engine/pkg/workspace/releasemanager/trace/types.go (2): StepResultPass (55-55), StepResultFail (56-56)
  - apps/workspace-engine/pkg/workspace/releasemanager/trace/token.go (1): GenerateDefaultTraceToken (42-44)
- apps/workspace-engine/pkg/workspace/releasemanager/deployment_orchestrator.go (2)
  - apps/workspace-engine/pkg/workspace/releasemanager/trace/recorder.go (1): ReconcileTarget (15-37)
  - apps/workspace-engine/pkg/workspace/releasemanager/deployment/planner.go (1): WithTraceRecorder (69-73)
- apps/workspace-engine/pkg/oapi/oapi.gen.go (2)
  - apps/web/app/routes/ws/deployments/_components/types.ts (1): JobStatus (11-21)
  - apps/workspace-engine/pkg/oapi/oapi.go (1): ReleaseVerificationStatus (19-19)
- apps/workspace-engine/pkg/workspace/releasemanager/deployment/job_eligibility.go (3)
  - apps/workspace-engine/pkg/workspace/releasemanager/trace/recorder.go (1): ReconcileTarget (15-37)
  - apps/workspace-engine/pkg/workspace/releasemanager/trace/phases.go (1): EligibilityPhase (101-105)
  - apps/workspace-engine/pkg/workspace/releasemanager/trace/types.go (4): CheckResultPass (47-47), CheckResultFail (48-48), DecisionApproved (63-63), DecisionRejected (64-64)
- apps/workspace-engine/pkg/workspace/workspace.go (1)
  - apps/workspace-engine/pkg/workspace/releasemanager/manager.go (2): New (41-57), PersistenceStore (37-37)
- apps/workspace-engine/pkg/workspace/releasemanager/deployment/planner.go (4)
  - apps/workspace-engine/pkg/workspace/releasemanager/trace/recorder.go (1): ReconcileTarget (15-37)
  - apps/workspace-engine/pkg/workspace/releasemanager/trace/phases.go (1): PlanningPhase (12-16)
  - apps/workspace-engine/pkg/workspace/releasemanager/trace/types.go (4): DecisionRejected (64-64), DecisionApproved (63-63), ResultAllowed (39-39), ResultBlocked (40-40)
  - apps/workspace-engine/pkg/oapi/oapi.gen.go (2): DeploymentVersion (196-207), Id (75-75)
- apps/workspace-engine/pkg/workspace/releasemanager/manager.go (5)
  - apps/workspace-engine/pkg/workspace/manager/manager.go (1): PersistenceStore (95-97)
  - apps/workspace-engine/pkg/workspace/store/store.go (2): New (13-39), Store (41-66)
  - apps/workspace-engine/pkg/workspace/workspace.go (1): New (10-29)
  - apps/workspace-engine/pkg/workspace/releasemanager/trace/recorder.go (2): NewReconcileTarget (40-42), ReconcileTarget (15-37)
  - apps/workspace-engine/pkg/workspace/releasemanager/trace/types.go (2): StatusCompleted (72-72), StatusFailed (73-73)
- packages/trpc/src/routes/workspace.ts (3)
  - packages/trpc/src/trpc.ts (1): protectedProcedure (97-97)
  - packages/db/src/schema/workspace.ts (1): workspace (19-28)
  - packages/validators/src/auth/index.ts (1): predefinedRoles (131-151)
- apps/workspace-engine/main.go (4)
  - apps/workspace-engine/pkg/db/client.go (1): GetPool (20-40)
  - apps/workspace-engine/pkg/workspace/releasemanager/trace/db_store.go (1): NewDBStore (21-23)
  - apps/workspace-engine/pkg/workspace/manager/manager.go (3): Configure (40-44), WithPersistentStore (28-32), WithWorkspaceCreateOptions (34-38)
  - apps/workspace-engine/pkg/workspace/opts.go (1): WithTraceStore (13-17)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (5)
- GitHub Check: Typecheck
- GitHub Check: Lint
- GitHub Check: tests
- GitHub Check: build (linux/amd64)
- GitHub Check: build-and-push (linux/amd64)
🔇 Additional comments (6)
packages/auth/src/env.ts (1)

7-7: Confirm BASE_URL default aligns with the running auth host

Switching the default from `http://localhost:3000` to `http://localhost:5173` will make NextAuth callbacks and any absolute links target the Vite dev port. Unless the auth server itself was moved to 5173 (I don't see that change in this PR), local sign-in flows will start failing because they still bind on 3000. Please double-check which port the auth server actually listens on and either revert this fallback or include the matching server change.

integrations/kubernetes-job-agent/src/config.ts (1)

9-9: Verify the agent's default API endpoint

The Kubernetes job agent previously defaulted to hitting the API on port 3000. Pointing it at `http://localhost:5173` will now talk to the Vite UI server unless the API itself also moved. That would break local runs that rely on the default. Can you confirm the backend actually serves on 5173 and, if not, keep the fallback at 3000 (or add a proxy) so the agent can still connect out of the box?

apps/workspace-engine/pkg/workspace/store/store.go (1)

15-16: Constructor wiring looks good.

Storing wsId on Store is clean and keeps ID immutable.

apps/workspace-engine/pkg/workspace/releasemanager/trace/recorder.go (1)

278-281: LGTM: clear, thread-safe accessor.

Initialized once and never mutated; no lock required.

packages/db/src/schema/index.ts (1)

25-25: Re-export is valid and properly centralized.

The source file exists with correctly exported symbols (deploymentTraceSpan, DeploymentTraceSpan, InsertDeploymentTraceSpan, deploymentTraceSpanRelations). No issues found.

packages/db/drizzle/meta/_journal.json (1)

951-956: Verify migration and schema files exist in your local repository.

The verification tools cannot access the full directory structure in this sandbox environment. You must manually confirm:

- Migration file exists: `packages/db/drizzle/0135_lumpy_overlord.sql` (non-empty)
- Schema file exists: `packages/db/src/schema/deployment-trace.ts` (non-empty)
- Export line 25 in `packages/db/src/schema/index.ts` correctly references `"./deployment-trace.js"`

If these files are missing, the journal entry is orphaned and must be removed or the files created.
```go
// Initialize database pool for trace store
pgxPool := db.GetPool(ctx)
traceStore := trace.NewDBStore(pgxPool)
log.Info("Deployment trace store initialized")
```
Initialize OTEL before creating the DB pool; avoid forcing insecure OTLP.
- Move initTracer/initMetrics before db.GetPool() so otelpgx captures connect/pool spans from the start.
- Do not unconditionally use otlptracehttp.WithInsecure()/otlpmetrichttp.WithInsecure(); respect scheme or config to enable TLS.
Minimal re-ordering:
```diff
 func main() {
 	ctx := context.Background()
+	// Initialize OpenTelemetry first
+	cleanupTracer, err := initTracer()
+	if err != nil {
+		log.Fatal("Failed to initialize tracer", "error", err)
+	}
+	defer cleanupTracer()
+	cleanupMetrics, err := initMetrics()
+	if err != nil {
+		log.Fatal("Failed to initialize metrics", "error", err)
+	}
+	defer cleanupMetrics()
-	// Initialize database pool for trace store
-	pgxPool := db.GetPool(ctx)
+	// Initialize database pool for trace store (after OTEL)
+	pgxPool := db.GetPool(ctx)
 	traceStore := trace.NewDBStore(pgxPool)
```

For TLS-aware exporter options (outside this hunk), consider:

```go
// In initTracer/initMetrics:
// Parse scheme; default to https when absent; only use WithInsecure for http.
u, _ := url.Parse(config.Global.OTELExporterOTLPEndpoint)
scheme := ""
hostport := endpoint
if u != nil && u.Scheme != "" {
	scheme, hostport = u.Scheme, u.Host
}
opts := []otlptracehttp.Option{}
if hostport != "" {
	opts = append(opts, otlptracehttp.WithEndpoint(hostport))
}
if scheme == "http" {
	opts = append(opts, otlptracehttp.WithInsecure())
}
// For https: leave secure defaults or add WithTLSClientConfig as needed.
```

Based on learnings.
🤖 Prompt for AI Agents
In apps/workspace-engine/main.go around lines 212 to 216, the tracer/metrics
initialization is happening after creating the DB pool and the OTLP exporters
are being forced insecure; move calls to initTracer and initMetrics to run
before db.GetPool() so otelpgx can capture connection/pool spans from process
start, and update the exporter option logic to parse the OTLP endpoint
scheme/host and only add WithInsecure() when the scheme is
explicitly "http" (use https by default or leave TLS defaults, and pass only the
host:port to WithEndpoint when present).
```json
"ReleaseVerificationStatus": {
  "enum": [
    "running",
    "passed",
    "failed",
    "cancelled"
  ],
  "type": "string"
},
```
Enum added but unused — wire it or drop it.
ReleaseVerificationStatus exists but isn’t referenced. If intended, add a status field to ReleaseVerification.
```diff
@@ "ReleaseVerification": {
       "properties": {
+        "status": {
+          "$ref": "#/components/schemas/ReleaseVerificationStatus"
+        },
         "createdAt": {
```

Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In apps/workspace-engine/oapi/openapi.json around lines 1149-1157, the
ReleaseVerificationStatus enum is defined but unused; either remove it or wire
it into the ReleaseVerification schema — to fix, add a "status" property to the
ReleaseVerification object that uses "$ref":
"#/components/schemas/ReleaseVerificationStatus" (set its type as string via the
ref), decide whether it should be required and update the schema's "required"
array accordingly, and update any example/request/response fixtures or tests to
include the new status field; alternatively delete the ReleaseVerificationStatus
enum definition if it is not needed.
```go
// Generate trace token for external executors
if recorder != nil && createJobAction != nil {
	traceToken := trace.GenerateDefaultTraceToken(recorder.RootTraceID(), newJob.Id)
	createJobAction.AddMetadata("trace_token", traceToken)
	createJobAction.AddMetadata("job_id", newJob.Id)
	createJobAction.AddStep("Generate trace token", trace.StepResultPass, "Token generated for external executor")

	// Store token in job's external ID or config for now
	// This will be used by the job agent to report back
	if newJob.ExternalId == nil {
		newJob.ExternalId = &traceToken
	}
}

if createJobAction != nil {
	createJobAction.AddStep("Persist job", trace.StepResultPass, "Job persisted to store")
}

// Step 3: Dispatch job to integration (ASYNC)
```
Persist the trace token before writing the job
Line 108 mutates newJob.ExternalId after we already called e.store.Jobs.Upsert, so the persisted job never stores the trace token. Any worker that later reads from the store will miss the token entirely, breaking the external executor flow this block is setting up. Please move the token generation (or a follow-up Upsert) ahead of the initial write so the stored job contains the token.
@@
- span.AddEvent("Persisting job to store")
- e.store.Jobs.Upsert(ctx, newJob)
- span.SetAttributes(
- attribute.Bool("job.created", true),
- attribute.String("job.id", newJob.Id),
- attribute.String("job.status", string(newJob.Status)),
- )
-
- // Generate trace token for external executors
- if recorder != nil && createJobAction != nil {
+ // Generate trace token for external executors
+ if recorder != nil && createJobAction != nil {
traceToken := trace.GenerateDefaultTraceToken(recorder.RootTraceID(), newJob.Id)
createJobAction.AddMetadata("trace_token", traceToken)
createJobAction.AddMetadata("job_id", newJob.Id)
createJobAction.AddStep("Generate trace token", trace.StepResultPass, "Token generated for external executor")
if newJob.ExternalId == nil {
newJob.ExternalId = &traceToken
}
}
+ span.AddEvent("Persisting job to store")
+ e.store.Jobs.Upsert(ctx, newJob)
+ span.SetAttributes(
+ attribute.Bool("job.created", true),
+ attribute.String("job.id", newJob.Id),
+ attribute.String("job.status", string(newJob.Status)),
+ )

Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
apps/workspace-engine/pkg/workspace/releasemanager/deployment/executor.go around
lines 100 to 118: the trace token is generated and assigned to newJob.ExternalId
after the job was already persisted, so the stored job never contains the token;
move the trace token generation and assignment so newJob.ExternalId is set
before the call to e.store.Jobs.Upsert (or alternatively perform an immediate
Upsert after setting ExternalId), and keep the createJobAction metadata/steps
the same so the token is persisted with the job record.
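The ordering fix above can be illustrated with a minimal sketch. `Job`, `store`, and `createJob` here are simplified stand-ins for the engine's types, not its real API: a store that copies the record on write only keeps fields assigned before the Upsert, which is exactly why the token must be set first.

```go
package main

import "fmt"

// Job is a minimal stand-in for the engine's job record.
type Job struct {
	Id         string
	ExternalId *string
}

// store copies the job on write, like a real persistence layer:
// mutations made after Upsert never reach the stored row.
type store struct{ rows map[string]Job }

func (s *store) Upsert(j *Job) { s.rows[j.Id] = *j }

// createJob applies the fix: assign the trace token before persisting,
// so the stored row carries it.
func createJob(s *store, traceToken string) {
	j := &Job{Id: "job-1"}
	if j.ExternalId == nil {
		j.ExternalId = &traceToken
	}
	s.Upsert(j)
}

func main() {
	s := &store{rows: map[string]Job{}}
	createJob(s, "trace-token-abc")
	stored := s.rows["job-1"]
	fmt.Println(*stored.ExternalId)
}
```

Had `ExternalId` been assigned after `Upsert`, the stored copy would hold `nil` and any worker reading from the store would miss the token.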
// Handle upsert - reconcile the target with pre-computed relationships
// Create a recorder for this target
workspaceID := m.store.ID()
recorder := trace.NewReconcileTarget(workspaceID, state.entity.Key())
defer func() {
	recorder.Complete(trace.StatusCompleted)
	if err := recorder.Persist(m.traceStore); err != nil {
		log.Error("Failed to persist deployment trace",
			"workspace_id", workspaceID,
			"release_target", state.entity.Key(),
			"error", err.Error())
	}
}()

relationships := resourceRelationships[state.entity.ResourceId]
-if err := m.reconcileTargetWithRelationships(ctx, state.entity, false, relationships); err != nil {
+if err := m.reconcileTargetWithRelationships(ctx, state.entity, false, relationships, recorder); err != nil {
	log.Warn("error reconciling release target",
		"release_target", state.entity.Key(),
		"error", err.Error())
	recorder.Complete(trace.StatusFailed)
}
Keep the recorded status from being overwritten
The deferred block unconditionally calls recorder.Complete(trace.StatusCompleted), so any earlier recorder.Complete(trace.StatusFailed) (executed when reconciliation errors) gets overwritten to “completed”. Persisted spans will therefore never reflect failures. Track the status in a variable instead and call Complete exactly once in the defer.
- recorder := trace.NewReconcileTarget(workspaceID, state.entity.Key())
- defer func() {
- recorder.Complete(trace.StatusCompleted)
+ recorder := trace.NewReconcileTarget(workspaceID, state.entity.Key())
+ status := trace.StatusCompleted
+ defer func() {
+ recorder.Complete(status)
if err := recorder.Persist(m.traceStore); err != nil {
log.Error("Failed to persist deployment trace",
"workspace_id", workspaceID,
"release_target", state.entity.Key(),
"error", err.Error())
}
}()
@@
- if err := m.reconcileTargetWithRelationships(ctx, state.entity, false, relationships, recorder); err != nil {
+ if err := m.reconcileTargetWithRelationships(ctx, state.entity, false, relationships, recorder); err != nil {
log.Warn("error reconciling release target",
"release_target", state.entity.Key(),
"error", err.Error())
- recorder.Complete(trace.StatusFailed)
+ status = trace.StatusFailed
}

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
// Handle upsert - reconcile the target with pre-computed relationships
// Create a recorder for this target
workspaceID := m.store.ID()
recorder := trace.NewReconcileTarget(workspaceID, state.entity.Key())
status := trace.StatusCompleted
defer func() {
	recorder.Complete(status)
	if err := recorder.Persist(m.traceStore); err != nil {
		log.Error("Failed to persist deployment trace",
			"workspace_id", workspaceID,
			"release_target", state.entity.Key(),
			"error", err.Error())
	}
}()

relationships := resourceRelationships[state.entity.ResourceId]
if err := m.reconcileTargetWithRelationships(ctx, state.entity, false, relationships, recorder); err != nil {
	log.Warn("error reconciling release target",
		"release_target", state.entity.Key(),
		"error", err.Error())
	status = trace.StatusFailed
}
🤖 Prompt for AI Agents
In apps/workspace-engine/pkg/workspace/releasemanager/manager.go around lines
148 to 168, the deferred block always calls
recorder.Complete(trace.StatusCompleted) which overwrites any earlier failure
status; change this by introducing a local status variable (e.g., status :=
trace.StatusCompleted), set status = trace.StatusFailed when reconciliation
returns an error, and in the defer call recorder.Complete(status) so Complete is
called exactly once with the final status; also use that status when persisting
the trace/logging if needed.
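The defer-with-status-variable pattern the fix prescribes can be sketched in isolation; the names below are illustrative, not the engine's actual trace API. The key point is that `status` is a local the deferred closure reads at return time, so a failure recorded anywhere in the body survives to the single `Complete` call.

```go
package main

import (
	"errors"
	"fmt"
)

// reconcile tracks the final status in a local variable, flips it on
// error, and lets the defer report it exactly once.
func reconcile(fail bool, complete func(status string)) error {
	status := "completed"
	defer func() { complete(status) }()

	if fail {
		status = "failed"
		return errors.New("reconcile error")
	}
	return nil
}

func main() {
	var recorded string
	_ = reconcile(true, func(s string) { recorded = s })
	fmt.Println(recorded) // the failure status survives to the deferred call
}
```

Calling `complete("completed")` unconditionally inside the defer, as the original code did, would overwrite any earlier failure; routing everything through one variable makes the final status unambiguous.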
apps/workspace-engine/pkg/workspace/releasemanager/trace/db_store.go (outdated; resolved)
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
apps/workspace-engine/pkg/workspace/releasemanager/manager.go (1)

383-434: Critical: Same status overwrite bug as ProcessChanges. The defer initializes status := trace.StatusCompleted (line 390) and never updates it, so lines 413 and 422's recorder.Complete(trace.StatusFailed) calls get overwritten by line 395's recorder.Complete(status). Failed reconciliations will be persisted as completed. Apply the same fix pattern (note the status variable must be declared before the defer that captures it):

+// Track final status for deferred completion
+status := trace.StatusCompleted
 defer func() {
-	// Complete the trace recorder with appropriate status
-	status := trace.StatusCompleted
-	if span.SpanContext().IsValid() {
-		// Check if span has error status
-		// We'll default to completed, errors will be captured in the trace phases
-	}
 	recorder.Complete(status)
 	// Persist traces - log error but don't fail reconciliation
 	if err := recorder.Persist(m.traceStore); err != nil {
 		log.Error("Failed to persist deployment trace",
 			"workspace_id", workspaceID,
 			"release_target", releaseTarget.Key(),
 			"error", err.Error())
 	}
 }()

 // Compute relationships on-demand for this single resource
 span.AddEvent("Getting resource and relationships")
 resource, exists := m.store.Resources.Get(releaseTarget.ResourceId)
 if !exists {
 	err := fmt.Errorf("resource %q not found", releaseTarget.ResourceId)
 	span.RecordError(err)
 	span.SetStatus(codes.Error, "resource not found")
-	recorder.Complete(trace.StatusFailed)
+	status = trace.StatusFailed
 	return err
 }

 entity := relationships.NewResourceEntity(resource)
 resourceRelationships, err := m.store.Relationships.GetRelatedEntities(ctx, entity)
 if err != nil {
 	span.RecordError(err)
 	span.SetStatus(codes.Error, "failed to get relationships")
-	recorder.Complete(trace.StatusFailed)
+	status = trace.StatusFailed
 	return err
 }
♻️ Duplicate comments (3)
apps/workspace-engine/pkg/workspace/releasemanager/deployment/executor.go (1)
92-113: Persist the trace token before storing the job. We write the job via e.store.Jobs.Upsert and only afterwards assign newJob.ExternalId = &traceToken. That late mutation never reaches the store, so downstream workers will never see the trace token. Please move the token generation/assignment ahead of the initial Upsert (or perform a second Upsert right after setting ExternalId) so the persisted row contains the token.

-span.AddEvent("Persisting job to store")
-e.store.Jobs.Upsert(ctx, newJob)
-span.SetAttributes(
-	attribute.Bool("job.created", true),
-	attribute.String("job.id", newJob.Id),
-	attribute.String("job.status", string(newJob.Status)),
-)
-
-// Generate trace token for external executors
-if recorder != nil && createJobAction != nil {
+// Generate trace token for external executors
+if recorder != nil && createJobAction != nil {
 	traceToken := trace.GenerateDefaultTraceToken(recorder.RootTraceID(), newJob.Id)
 	createJobAction.AddMetadata("trace_token", traceToken)
 	createJobAction.AddMetadata("job_id", newJob.Id)
 	createJobAction.AddStep("Generate trace token", trace.StepResultPass, "Token generated for external executor")
 	// Store token in job's external ID or config for now
 	// This will be used by the job agent to report back
 	if newJob.ExternalId == nil {
 		newJob.ExternalId = &traceToken
 	}
 }
+span.AddEvent("Persisting job to store")
+e.store.Jobs.Upsert(ctx, newJob)
+span.SetAttributes(
+	attribute.Bool("job.created", true),
+	attribute.String("job.id", newJob.Id),
+	attribute.String("job.status", string(newJob.Status)),
+)

apps/workspace-engine/pkg/workspace/releasemanager/trace/db_store.go (1)
59-166: Fail fast when workspace_id is missing. workspace_id is declared NOT NULL (UUID), but we never verify that the attribute is present; when it's absent we enqueue an empty string and Postgres raises invalid input syntax for type uuid: "", aborting the whole batch. Please detect this case right after the attribute loop and return a descriptive error (or skip the span) before building the insert.

 for _, attr := range span.Attributes() {
 	…
 }
+if workspaceID == "" {
+	return fmt.Errorf("span %q (%s) missing required %s attribute", span.Name(), traceID, attrWorkspaceID)
+}
+
 // Serialize attributes to JSON

apps/workspace-engine/pkg/workspace/releasemanager/manager.go (1)

148-168: Critical: Deferred Complete() overwrites failure status. The defer at line 153 unconditionally calls recorder.Complete(trace.StatusCompleted), so line 167's recorder.Complete(trace.StatusFailed) gets overwritten. Persisted traces will never show failures. Track status in a variable and call Complete once:

 recorder := trace.NewReconcileTarget(workspaceID, state.entity.Key())
+status := trace.StatusCompleted
 defer func() {
-	recorder.Complete(trace.StatusCompleted)
+	recorder.Complete(status)
 	if err := recorder.Persist(m.traceStore); err != nil {
 		log.Error("Failed to persist deployment trace",
 			"workspace_id", workspaceID,
 			"release_target", state.entity.Key(),
 			"error", err.Error())
 	}
 }()
 relationships := resourceRelationships[state.entity.ResourceId]
 if err := m.reconcileTargetWithRelationships(ctx, state.entity, false, relationships, recorder); err != nil {
 	log.Warn("error reconciling release target",
 		"release_target", state.entity.Key(),
 		"error", err.Error())
-	recorder.Complete(trace.StatusFailed)
+	status = trace.StatusFailed
 }
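The fail-fast workspace_id check suggested for the DB store above can be sketched independently; the attribute map, function name, and error wording below are simplifications, not the store's real API. The point is to reject a bad span before it joins a batch insert, where one empty UUID would abort every row.

```go
package main

import "fmt"

// requireWorkspaceID rejects a span whose attributes lack workspace_id
// before it is queued, rather than letting the database abort the whole
// batch on an empty UUID value.
func requireWorkspaceID(attrs map[string]string, spanName, traceID string) error {
	if attrs["workspace_id"] == "" {
		return fmt.Errorf("span %q (%s) missing required workspace_id attribute", spanName, traceID)
	}
	return nil
}

func main() {
	if err := requireWorkspaceID(map[string]string{}, "PlanDeployment", "trace-1"); err != nil {
		fmt.Println("rejected before insert:", err)
	}
}
```

Validating per-span also pinpoints which span was malformed, whereas the database error only identifies the failed batch.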
🧹 Nitpick comments (6)
apps/workspace-engine/pkg/workspace/releasemanager/policy/evaluator/environmentprogression/environmentprogression_test.go (1)
176-184: Status enum updates are consistent and correct. Nothing else changed. Minor nit: generated field name MinimumSockTimeMinutes reads like "Soak"; if schema ever changes, mirror that here.
Also applies to: 256-264, 443-451, 659-667, 941-949, 1111-1119, 1549-1557
apps/workspace-engine/pkg/workspace/store/release_targets.go (1)
60-66: Correct enum switch; behavior unchanged. Optionally route success checks through a shared predicate (e.g., job.IsSuccessful()) if available to avoid drift.
Also applies to: 83-96
apps/workspace-engine/test/e2e/engine_deployment_version_test.go (1)
82-85: Set CompletedAt when marking jobs terminal to avoid future coupling. Several tests flip Status to Successful/Failure without CompletedAt. Add a timestamp for realism and to protect code that relies on completion time.
Example (apply similarly elsewhere):
-initialJob.Status = oapi.JobStatusSuccessful
+now := time.Now()
+initialJob.Status = oapi.JobStatusSuccessful
+initialJob.CompletedAt = &now

Also applies to: 186-192, 288-290, 445-447, 458-460, 471-473, 566-568, 752-757, 780-782
apps/workspace-engine/test/e2e/engine_redeploy_test.go (1)
82-85: Add CompletedAt when setting terminal statuses in redeploy tests. Keeps data realistic and avoids regressions if completion time is consulted later.
Example pattern:
-job.Status = oapi.JobStatusSuccessful
+now := time.Now()
+job.Status = oapi.JobStatusSuccessful
+job.CompletedAt = &now

Apply similarly for Failure/Cancelled/Skipped where present.
Also applies to: 189-192, 287-290, 444-447, 457-460, 471-473, 566-568, 755-757, 780-782, 785-787
apps/workspace-engine/test/e2e/engine_variable_change_evaluation_test.go (1)
76-79: Enum migration to oapi.JobStatus* is complete; consider extracting a helper to reduce repeated job completion patterns. The codebase contains no leftover old constants. However, the pattern of setting Status, CompletedAt, and calling PushEvent repeats across lines 76-79, 218-221, 323-326, 431-434, 542-545, 645-648, 743-746, and 777-803. A simple helper like markJobComplete(job *oapi.Job, now time.Time) would improve readability and consistency.

apps/workspace-engine/pkg/workspace/releasemanager/deployment/executor.go (1)
63-88: Record action steps only after the underlying operation succeeds. Each step is marked as pass before we actually attempt the Upsert/Create call, so failure scenarios emit both a "pass" and a "fail" entry for the same step. Please move the success-step logging to after the operation completes (and keep the failure logging in the error branch) to keep the trace truthful.
Actionable comments posted: 1
🧹 Nitpick comments (3)
apps/workspace-engine/pkg/events/handler/resources/resourceproviders_cache_test.go (1)
21-21: Inconsistent workspace initialization across tests. This test creates a workspace without explicitly setting the trace store, while all other tests in this file (lines 88, 118, 168, 221, 274, 327, 346) explicitly use workspace.WithTraceStore(trace.NewInMemoryStore()). Although the default trace store is already an in-memory store per the workspace implementation, the inconsistency makes the test setup confusing. For consistency, either:
- Add the explicit trace store option here to match other tests:
-ws := workspace.New(ctx, "test-workspace-1")
+ws := workspace.New(ctx, "test-workspace-1", workspace.WithTraceStore(trace.NewInMemoryStore()))
- Or remove the explicit option from all tests since it matches the default behavior anyway.
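The default-plus-option behavior this comment relies on can be sketched with the functional-options pattern; the types below are simplified stand-ins for the engine's workspace and trace packages, not their real definitions.

```go
package main

import "fmt"

// TraceStore and Workspace are reduced to the minimum needed to show
// the pattern; the real types live in the trace and workspace packages.
type TraceStore interface{ Name() string }

type inMemoryStore struct{}

func (inMemoryStore) Name() string { return "in-memory" }

type Workspace struct{ traceStore TraceStore }

type Option func(*Workspace)

func WithTraceStore(s TraceStore) Option {
	return func(w *Workspace) { w.traceStore = s }
}

// New applies a safe default before options, so a caller that omits
// WithTraceStore still gets a usable store and downstream code never
// dereferences nil.
func New(opts ...Option) *Workspace {
	w := &Workspace{traceStore: inMemoryStore{}}
	for _, opt := range opts {
		opt(w)
	}
	return w
}

func main() {
	fmt.Println(New().traceStore.Name())
}
```

Because the default is set first and options run afterwards, an explicit WithTraceStore in a test is redundant but harmless, which is why either direction resolves the inconsistency.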
apps/workspace-engine/pkg/workspace/workspace.go (1)
11-30: Default trace store properly addresses previous nil panic concern. The initialization of traceStore with trace.NewInMemoryStore() at line 19 provides a safe default, preventing the panic scenario flagged in the previous review. The ordering (default → options → manager creation) is correct. However, the comment on line 27 is now extraneous per coding guidelines and should be removed.
Based on coding guidelines.
Apply this diff to remove the extraneous comment:
-// Create release manager with trace store (will panic if nil)
 ws.releasemanager = releasemanager.New(s, ws.traceStore)

apps/workspace-engine/pkg/workspace/releasemanager/trace/memory_store_test.go (1)
120-123: Guard against empty span slices before pointer checks. If GetSpans ever returns an empty slice (e.g., a regression wipes spans), the test will panic on the spans1[0] dereference instead of reporting a clear failure. A quick length check keeps the assertion informative.

 // Should be copies, not the same slice
+if len(spans1) == 0 {
+	t.Fatal("expected at least one span to validate copy semantics")
+}
 if &spans1[0] == &spans2[0] {
 	t.Error("GetSpans should return copies, not same slice")
 }
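The copy semantics that test verifies can be sketched as follows; the span type is reduced to a string and the store to one field, so this is an illustration of the technique rather than the engine's actual implementation.

```go
package main

import "fmt"

// memStore mimics the in-memory trace store's defensive copying:
// GetSpans returns a fresh slice so callers cannot mutate internal state.
type memStore struct{ spans []string }

func (m *memStore) GetSpans() []string {
	out := make([]string, len(m.spans))
	copy(out, m.spans)
	return out
}

func main() {
	s := &memStore{spans: []string{"root", "planning"}}
	got := s.GetSpans()
	got[0] = "mutated"
	fmt.Println(s.spans[0]) // internal state is unchanged
}
```

Returning the internal slice directly would hand every caller a window into the store's backing array; the copy is what the `&spans1[0] == &spans2[0]` assertion is protecting.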
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (8)
- apps/workspace-engine/pkg/events/handler/resources/resourceproviders_cache_test.go (8 hunks)
- apps/workspace-engine/pkg/workspace/releasemanager/trace/memory_store_test.go (1 hunks)
- apps/workspace-engine/pkg/workspace/releasemanager/trace/phases_test.go (1 hunks)
- apps/workspace-engine/pkg/workspace/releasemanager/trace/token.go (1 hunks)
- apps/workspace-engine/pkg/workspace/workspace.go (2 hunks)
- apps/workspace-engine/test/e2e/engine_trace_test.go (1 hunks)
- apps/workspace-engine/test/integration/trace_helpers.go (1 hunks)
- apps/workspace-engine/test/integration/workspace.go (3 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- apps/workspace-engine/test/integration/workspace.go
🧰 Additional context used
📓 Path-based instructions (2)
apps/workspace-engine/**/*.go
📄 CodeRabbit inference engine (apps/workspace-engine/CLAUDE.md)
apps/workspace-engine/**/*.go: Do not add extraneous inline comments that state the obvious
Do not add comments that simply restate what the code does
Do not add comments for standard Go patterns (e.g., noting WaitGroup or semaphore usage)
Write comments that explain why, document complex logic/algorithms, provide non-obvious context, include TODO/FIXME, and document exported functions/types/methods
Files:
- apps/workspace-engine/pkg/workspace/releasemanager/trace/token.go
- apps/workspace-engine/pkg/workspace/releasemanager/trace/memory_store_test.go
- apps/workspace-engine/pkg/events/handler/resources/resourceproviders_cache_test.go
- apps/workspace-engine/pkg/workspace/workspace.go
- apps/workspace-engine/test/e2e/engine_trace_test.go
- apps/workspace-engine/pkg/workspace/releasemanager/trace/phases_test.go
- apps/workspace-engine/test/integration/trace_helpers.go
apps/workspace-engine/**/*_test.go
📄 CodeRabbit inference engine (apps/workspace-engine/CLAUDE.md)
Follow the existing test structure used in *_test.go files
Files:
- apps/workspace-engine/pkg/workspace/releasemanager/trace/memory_store_test.go
- apps/workspace-engine/pkg/events/handler/resources/resourceproviders_cache_test.go
- apps/workspace-engine/test/e2e/engine_trace_test.go
- apps/workspace-engine/pkg/workspace/releasemanager/trace/phases_test.go
🧠 Learnings (11)
📓 Common learnings
Learnt from: adityachoudhari26
Repo: ctrlplanedev/ctrlplane PR: 696
File: apps/workspace-engine/pkg/workspace/releasemanager/deployment/executor.go:63-67
Timestamp: 2025-10-24T00:02:29.723Z
Learning: In workspace-engine replay mode (apps/workspace-engine/pkg/workspace/releasemanager/deployment/executor.go), jobs are created with Pending status even during replay because job updates are still accepted and processed. If a job remains Pending after replay completes, it genuinely is still pending because no updates occurred during replay.
📚 Learning: 2025-08-12T18:13:54.630Z
Learnt from: CR
Repo: ctrlplanedev/ctrlplane PR: 0
File: apps/workspace-engine/CLAUDE.md:0-0
Timestamp: 2025-08-12T18:13:54.630Z
Learning: Applies to apps/workspace-engine/**/*_test.go : Follow the existing test structure used in *_test.go files
Applied to files:
- apps/workspace-engine/pkg/workspace/releasemanager/trace/memory_store_test.go
- apps/workspace-engine/pkg/events/handler/resources/resourceproviders_cache_test.go
- apps/workspace-engine/test/e2e/engine_trace_test.go
- apps/workspace-engine/pkg/workspace/releasemanager/trace/phases_test.go
- apps/workspace-engine/test/integration/trace_helpers.go
📚 Learning: 2025-08-12T18:13:54.630Z
Learnt from: CR
Repo: ctrlplanedev/ctrlplane PR: 0
File: apps/workspace-engine/CLAUDE.md:0-0
Timestamp: 2025-08-12T18:13:54.630Z
Learning: Applies to apps/workspace-engine/pkg/model/selector/**/*_test.go : Write comprehensive, data-driven tests for new condition types
Applied to files:
- apps/workspace-engine/pkg/workspace/releasemanager/trace/memory_store_test.go
- apps/workspace-engine/test/e2e/engine_trace_test.go
- apps/workspace-engine/pkg/workspace/releasemanager/trace/phases_test.go
📚 Learning: 2025-08-12T18:13:54.630Z
Learnt from: CR
Repo: ctrlplanedev/ctrlplane PR: 0
File: apps/workspace-engine/CLAUDE.md:0-0
Timestamp: 2025-08-12T18:13:54.630Z
Learning: Applies to apps/workspace-engine/**/*.go : Write comments that explain why, document complex logic/algorithms, provide non-obvious context, include TODO/FIXME, and document exported functions/types/methods
Applied to files:
- apps/workspace-engine/pkg/events/handler/resources/resourceproviders_cache_test.go
- apps/workspace-engine/pkg/workspace/workspace.go
- apps/workspace-engine/test/e2e/engine_trace_test.go
- apps/workspace-engine/test/integration/trace_helpers.go
📚 Learning: 2025-08-12T18:13:54.630Z
Learnt from: CR
Repo: ctrlplanedev/ctrlplane PR: 0
File: apps/workspace-engine/CLAUDE.md:0-0
Timestamp: 2025-08-12T18:13:54.630Z
Learning: Applies to apps/workspace-engine/**/*.go : Do not add comments for standard Go patterns (e.g., noting WaitGroup or semaphore usage)
Applied to files:
- apps/workspace-engine/pkg/events/handler/resources/resourceproviders_cache_test.go
- apps/workspace-engine/pkg/workspace/workspace.go
- apps/workspace-engine/test/e2e/engine_trace_test.go
📚 Learning: 2025-08-12T18:13:54.630Z
Learnt from: CR
Repo: ctrlplanedev/ctrlplane PR: 0
File: apps/workspace-engine/CLAUDE.md:0-0
Timestamp: 2025-08-12T18:13:54.630Z
Learning: Applies to apps/workspace-engine/**/*.go : Do not add comments that simply restate what the code does
Applied to files:
- apps/workspace-engine/pkg/workspace/workspace.go
- apps/workspace-engine/test/e2e/engine_trace_test.go
📚 Learning: 2025-08-12T18:13:54.630Z
Learnt from: CR
Repo: ctrlplanedev/ctrlplane PR: 0
File: apps/workspace-engine/CLAUDE.md:0-0
Timestamp: 2025-08-12T18:13:54.630Z
Learning: Applies to apps/workspace-engine/**/*.go : Do not add extraneous inline comments that state the obvious
Applied to files:
- apps/workspace-engine/pkg/workspace/workspace.go
- apps/workspace-engine/test/e2e/engine_trace_test.go
📚 Learning: 2025-08-12T18:13:54.630Z
Learnt from: CR
Repo: ctrlplanedev/ctrlplane PR: 0
File: apps/workspace-engine/CLAUDE.md:0-0
Timestamp: 2025-08-12T18:13:54.630Z
Learning: Applies to apps/workspace-engine/pkg/model/selector/**/*_test.go : Include edge cases in tests (empty values, special characters, unicode) for condition types
Applied to files:
apps/workspace-engine/test/e2e/engine_trace_test.go
📚 Learning: 2025-08-12T18:13:54.630Z
Learnt from: CR
Repo: ctrlplanedev/ctrlplane PR: 0
File: apps/workspace-engine/CLAUDE.md:0-0
Timestamp: 2025-08-12T18:13:54.630Z
Learning: Applies to apps/workspace-engine/pkg/model/selector/**/*_test.go : Use table-driven tests for all condition types
Applied to files:
- apps/workspace-engine/test/e2e/engine_trace_test.go
- apps/workspace-engine/pkg/workspace/releasemanager/trace/phases_test.go
📚 Learning: 2025-08-12T20:49:05.086Z
Learnt from: adityachoudhari26
Repo: ctrlplanedev/ctrlplane PR: 655
File: apps/workspace-engine/pkg/engine/workspace/fluent.go:166-171
Timestamp: 2025-08-12T20:49:05.086Z
Learning: The UpdateDeploymentVersions() method with OperationCreate case in apps/workspace-engine/pkg/engine/workspace/fluent.go is specifically designed only for creating new deployment versions, not for handling potential duplicates or existing versions.
Applied to files:
apps/workspace-engine/test/e2e/engine_trace_test.go
📚 Learning: 2025-10-24T00:02:29.723Z
Learnt from: adityachoudhari26
Repo: ctrlplanedev/ctrlplane PR: 696
File: apps/workspace-engine/pkg/workspace/releasemanager/deployment/executor.go:63-67
Timestamp: 2025-10-24T00:02:29.723Z
Learning: In workspace-engine replay mode (apps/workspace-engine/pkg/workspace/releasemanager/deployment/executor.go), jobs are created with Pending status even during replay because job updates are still accepted and processed. If a job remains Pending after replay completes, it genuinely is still pending because no updates occurred during replay.
Applied to files:
apps/workspace-engine/test/e2e/engine_trace_test.go
🧬 Code graph analysis (6)
apps/workspace-engine/pkg/workspace/releasemanager/trace/memory_store_test.go (3)
apps/workspace-engine/pkg/workspace/releasemanager/trace/memory_store.go (1)
- NewInMemoryStore (17-21)

apps/workspace-engine/pkg/workspace/releasemanager/trace/recorder.go (1)
- NewReconcileTarget (40-42)

apps/workspace-engine/pkg/workspace/releasemanager/trace/types.go (2)
- StatusCompleted (72-72)
- ResultAllowed (39-39)
apps/workspace-engine/pkg/events/handler/resources/resourceproviders_cache_test.go (3)
apps/workspace-engine/pkg/workspace/workspace.go (1)
- New (11-31)

apps/workspace-engine/pkg/workspace/opts.go (1)
- WithTraceStore (13-17)

apps/workspace-engine/pkg/workspace/releasemanager/trace/memory_store.go (1)
- NewInMemoryStore (17-21)
apps/workspace-engine/pkg/workspace/workspace.go (5)
apps/workspace-engine/pkg/workspace/releasemanager/manager.go (2)
- New (41-57)
- PersistenceStore (37-37)

apps/workspace-engine/pkg/workspace/store/store.go (1)
- New (13-39)

apps/workspace-engine/pkg/workspace/opts.go (1)
- WorkspaceOption (11-11)

apps/workspace-engine/pkg/workspace/releasemanager/trace/memory_store.go (1)
- NewInMemoryStore (17-21)

apps/workspace-engine/pkg/workspace/releasemanager/trace/types.go (1)
- PersistenceStore (78-80)
apps/workspace-engine/test/e2e/engine_trace_test.go (7)
apps/workspace-engine/pkg/workspace/workspace.go (2)
- New (11-31)
- Workspace (33-40)

apps/workspace-engine/test/integration/workspace.go (1)
- NewTestWorkspace (38-66)

apps/workspace-engine/test/integration/opts.go (15)
- WithSystem (92-113)
- WithDeployment (324-342)
- DeploymentID (375-379)
- DeploymentName (362-367)
- DeploymentJobAgent (381-385)
- DeploymentCelResourceSelector (387-393)
- WithResource (115-140)
- PolicyID (712-716)
- PolicyName (700-704)
- WithPolicyTargetSelector (718-728)
- PolicyTargetCelEnvironmentSelector (773-779)
- PolicyTargetCelResourceSelector (797-803)
- WithPolicyRule (730-747)
- WithRuleAnyApproval (831-838)
- WithRuleGradualRollout (842-849)

apps/workspace-engine/test/integration/trace_helpers.go (12)
- ClearTraces (18-20)
- GetAllTraces (13-15)
- VerifyTraceStructure (47-81)
- AssertSpanExists (276-284)
- AssertSpanAttributes (84-117)
- VerifySpanDepth (182-217)
- VerifyTraceTimeline (120-146)
- VerifySequenceNumbers (220-237)
- DumpTrace (240-261)
- CountSpansByType (264-273)
- FindSpansByPhase (33-44)
- GetSpanAttribute (172-179)

apps/workspace-engine/test/integration/creators/deployment_version.go (1)
- NewDeploymentVersion (15-36)

apps/workspace-engine/pkg/events/handler/handler.go (1)
- DeploymentVersionCreate (43-43)

apps/workspace-engine/pkg/workspace/releasemanager/trace/types.go (8)
- Phase (13-13)
- PhasePlanning (17-17)
- PhaseEligibility (18-18)
- PhaseExecution (19-19)
- NodeTypeEvaluation (29-29)
- NodeTypeDecision (31-31)
- NodeTypeCheck (30-30)
- NodeTypeAction (32-32)
apps/workspace-engine/pkg/workspace/releasemanager/trace/phases_test.go (3)
apps/workspace-engine/pkg/workspace/releasemanager/trace/recorder.go (1)
- NewReconcileTarget (40-42)

apps/workspace-engine/pkg/workspace/releasemanager/trace/types.go (16)
- ResultAllowed (39-39)
- StatusCompleted (72-72)
- NodeTypeEvaluation (29-29)
- ResultBlocked (40-40)
- DecisionApproved (63-63)
- DecisionRejected (64-64)
- CheckResultPass (47-47)
- NodeTypeCheck (30-30)
- CheckResultFail (48-48)
- StepResultPass (55-55)
- StepResultFail (56-56)
- PhaseAction (20-20)
- PhaseReconciliation (16-16)
- PhasePlanning (17-17)
- PhaseEligibility (18-18)
- PhaseExecution (19-19)

apps/workspace-engine/pkg/workspace/releasemanager/trace/token.go (1)
- ParseTraceToken (103-109)
apps/workspace-engine/test/integration/trace_helpers.go (2)
apps/workspace-engine/test/integration/workspace.go (1)
- TestWorkspace (31-36)

apps/workspace-engine/pkg/workspace/releasemanager/trace/types.go (2)
- Phase (13-13)
- NodeType (25-25)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (8)
- GitHub Check: Typecheck
- GitHub Check: Lint
- GitHub Check: build (linux/amd64)
- GitHub Check: tests
- GitHub Check: workspace-engine-tests
- GitHub Check: build (linux/amd64)
- GitHub Check: build-and-push (linux/amd64)
- GitHub Check: build (linux/amd64)
🔇 Additional comments (5)
apps/workspace-engine/pkg/workspace/workspace.go (2)
7-7: LGTM - Import required for trace store initialization. The trace package import is needed for the in-memory store constructor.
39-39: LGTM - Trace store field properly declared. The unexported traceStore field is correctly typed and used internally by the workspace.

apps/workspace-engine/pkg/workspace/releasemanager/trace/token.go (1)
102-109: Handy public parser helper. Nice to have this wrapper so callers that only need the IDs don't have to unpack the whole TraceToken. Keeps consumers tidy.
apps/workspace-engine/pkg/workspace/releasemanager/trace/phases_test.go (1)
166-196: Great coverage of decision outcomes. Appreciate how these assertions lock in the status mapping for approved vs rejected decisions; that'll guard against regressions in the span status plumbing.
apps/workspace-engine/test/integration/trace_helpers.go (1)
119-147: Timeline validation helper looks solid. The parent/child containment checks are exactly what we need to detect subtle ordering regressions in trace emission; nicely done.
if len(traceIDs) < 2 {
	t.Logf("Note: Found %d unique trace IDs, expected at least 2", len(traceIDs))
}
Make the concurrent traces expectation an assertion.
The test name promises separate traces for concurrent deployments, but today we only log when fewer than two trace IDs show up. That means a regression collapsing everything into one trace will still pass. Please make this a hard assertion so the suite really enforces the behaviour.
- if len(traceIDs) < 2 {
- t.Logf("Note: Found %d unique trace IDs, expected at least 2", len(traceIDs))
- }
+ if len(traceIDs) < 2 {
+ t.Fatalf("expected at least 2 unique trace IDs for concurrent deployments, got %d", len(traceIDs))
+ }

📝 Committable suggestion
if len(traceIDs) < 2 {
	t.Fatalf("expected at least 2 unique trace IDs for concurrent deployments, got %d", len(traceIDs))
}
🤖 Prompt for AI Agents
In apps/workspace-engine/test/e2e/engine_trace_test.go around lines 489 to 491,
the test currently only logs when fewer than two unique trace IDs are found;
change this to a hard assertion so the test fails on regression (e.g., replace
the t.Logf with t.Fatalf("expected at least 2 unique trace IDs, found %d",
len(traceIDs)) or, if the project uses testify, use require.Greaterf(t,
len(traceIDs), 1, "expected at least 2 unique trace IDs")); ensure any required
import (testify/require) is added if you use that form.
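The assertion the prompt asks for reduces to counting distinct trace IDs; a minimal sketch follows, with the span shape simplified to a plain ID string rather than the engine's span type.

```go
package main

import "fmt"

// uniqueTraceIDs counts distinct trace IDs across spans; a test can then
// fail hard when concurrent deployments collapse into a single trace.
func uniqueTraceIDs(ids []string) int {
	seen := make(map[string]struct{})
	for _, id := range ids {
		seen[id] = struct{}{}
	}
	return len(seen)
}

func main() {
	ids := []string{"trace-a", "trace-a", "trace-b"}
	if n := uniqueTraceIDs(ids); n < 2 {
		// In a real test this would be t.Fatalf rather than a panic.
		panic(fmt.Sprintf("expected at least 2 unique trace IDs, got %d", n))
	}
	fmt.Println("ok")
}
```

Deduplicating through a set makes the check insensitive to how many spans each trace emits, which is what lets the test tolerate varying span counts per deployment.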
Actionable comments posted: 7
♻️ Duplicate comments (1)
apps/workspace-engine/test/e2e/engine_trace_test.go (1)
487-489: Make concurrent traces requirement an assertion. We're still just logging when concurrent deployments collapse to one trace ID, so the test passes after a regression. Please fail hard so we actually guard the behaviour.
-if len(traceIDs) < 2 {
-	t.Logf("Note: Found %d unique trace IDs, expected at least 2", len(traceIDs))
-}
+if len(traceIDs) < 2 {
+	t.Fatalf("expected at least 2 unique trace IDs for concurrent deployments, got %d", len(traceIDs))
+}
🧹 Nitpick comments (1)
apps/workspace-engine/pkg/workspace/releasemanager/trace/phases.go (1)
225-229: Consider validating config keys for attribute naming.
Config keys are concatenated directly into attribute keys without validation. While this is likely safe for system-controlled config, consider whether:
- Keys with special characters (dots, spaces) could cause issues
- Very long keys should be truncated
- Sensitive values need filtering
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
⛔ Files ignored due to path filters (1)
`pnpm-lock.yaml` is excluded by `!**/pnpm-lock.yaml`
📒 Files selected for processing (22)
- DEPLOYMENT_TRACES_SUMMARY.md (1 hunks)
- apps/web/app/root.tsx (2 hunks)
- apps/web/app/routes.ts (2 hunks)
- apps/web/app/routes/ws/deployments/_components/DeploymentsNavbarTabs.tsx (2 hunks)
- apps/web/app/routes/ws/deployments/_components/TraceFilters.tsx (1 hunks)
- apps/web/app/routes/ws/deployments/_components/TraceTree.tsx (1 hunks)
- apps/web/app/routes/ws/deployments/_components/trace-utils.ts (1 hunks)
- apps/web/app/routes/ws/deployments/page.$deploymentId.traces.tsx (1 hunks)
- apps/web/app/routes/ws/jobs/_components/JobActions.tsx (4 hunks)
- apps/web/app/routes/ws/jobs/_components/JobsTable.tsx (2 hunks)
- apps/web/app/routes/ws/jobs/jobs.tsx (3 hunks)
- apps/web/package.json (1 hunks)
- apps/workspace-engine/pkg/workspace/releasemanager/deployment/job_eligibility.go (6 hunks)
- apps/workspace-engine/pkg/workspace/releasemanager/deployment/planner.go (9 hunks)
- apps/workspace-engine/pkg/workspace/releasemanager/trace/memory_store_test.go (1 hunks)
- apps/workspace-engine/pkg/workspace/releasemanager/trace/phases.go (3 hunks)
- apps/workspace-engine/test/e2e/engine_trace_test.go (1 hunks)
- apps/workspace-engine/test/integration/trace_helpers.go (1 hunks)
- packages/trpc/src/root.ts (2 hunks)
- packages/trpc/src/routes/DEPLOYMENT_TRACES.md (1 hunks)
- packages/trpc/src/routes/deployment-traces.ts (1 hunks)
- packages/validators/src/auth/index.ts (1 hunks)
🧰 Additional context used
📓 Path-based instructions (4)
**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
**/*.{ts,tsx}: Use TypeScript with explicit types (prefer interfaces for public APIs)
Import styles: Use named imports, group imports by source (std lib > external > internal)
Consistent type imports: `import type { Type } from "module"`
Prefer async/await over raw promises
Handle errors explicitly (use try/catch and typed error responses)
Files:
- packages/trpc/src/root.ts
- apps/web/app/routes/ws/deployments/_components/DeploymentsNavbarTabs.tsx
- apps/web/app/routes/ws/deployments/_components/TraceFilters.tsx
- packages/validators/src/auth/index.ts
- apps/web/app/routes/ws/deployments/page.$deploymentId.traces.tsx
- apps/web/app/root.tsx
- packages/trpc/src/routes/deployment-traces.ts
- apps/web/app/routes/ws/jobs/_components/JobsTable.tsx
- apps/web/app/routes/ws/deployments/_components/trace-utils.ts
- apps/web/app/routes/ws/jobs/jobs.tsx
- apps/web/app/routes.ts
- apps/web/app/routes/ws/jobs/_components/JobActions.tsx
- apps/web/app/routes/ws/deployments/_components/TraceTree.tsx
⚙️ CodeRabbit configuration file
**/*.{ts,tsx}: Note on Error Handling:
Avoid strict enforcement of try/catch blocks. Code may use early returns, Promise chains (.then().catch()), or other patterns for error handling. These are acceptable as long as they maintain clarity and predictability.
Files:
- packages/trpc/src/root.ts
- apps/web/app/routes/ws/deployments/_components/DeploymentsNavbarTabs.tsx
- apps/web/app/routes/ws/deployments/_components/TraceFilters.tsx
- packages/validators/src/auth/index.ts
- apps/web/app/routes/ws/deployments/page.$deploymentId.traces.tsx
- apps/web/app/root.tsx
- packages/trpc/src/routes/deployment-traces.ts
- apps/web/app/routes/ws/jobs/_components/JobsTable.tsx
- apps/web/app/routes/ws/deployments/_components/trace-utils.ts
- apps/web/app/routes/ws/jobs/jobs.tsx
- apps/web/app/routes.ts
- apps/web/app/routes/ws/jobs/_components/JobActions.tsx
- apps/web/app/routes/ws/deployments/_components/TraceTree.tsx
**/*.{js,jsx,ts,tsx,json,md,yml,yaml}
📄 CodeRabbit inference engine (CLAUDE.md)
Formatting: Prettier is used with `@ctrlplane/prettier-config`
Files:
- packages/trpc/src/root.ts
- apps/web/app/routes/ws/deployments/_components/DeploymentsNavbarTabs.tsx
- apps/web/app/routes/ws/deployments/_components/TraceFilters.tsx
- packages/validators/src/auth/index.ts
- apps/web/app/routes/ws/deployments/page.$deploymentId.traces.tsx
- packages/trpc/src/routes/DEPLOYMENT_TRACES.md
- apps/web/package.json
- apps/web/app/root.tsx
- packages/trpc/src/routes/deployment-traces.ts
- apps/web/app/routes/ws/jobs/_components/JobsTable.tsx
- DEPLOYMENT_TRACES_SUMMARY.md
- apps/web/app/routes/ws/deployments/_components/trace-utils.ts
- apps/web/app/routes/ws/jobs/jobs.tsx
- apps/web/app/routes.ts
- apps/web/app/routes/ws/jobs/_components/JobActions.tsx
- apps/web/app/routes/ws/deployments/_components/TraceTree.tsx
apps/workspace-engine/**/*.go
📄 CodeRabbit inference engine (apps/workspace-engine/CLAUDE.md)
apps/workspace-engine/**/*.go: Do not add extraneous inline comments that state the obvious
Do not add comments that simply restate what the code does
Do not add comments for standard Go patterns (e.g., noting WaitGroup or semaphore usage)
Write comments that explain why, document complex logic/algorithms, provide non-obvious context, include TODO/FIXME, and document exported functions/types/methods
Files:
- apps/workspace-engine/pkg/workspace/releasemanager/deployment/planner.go
- apps/workspace-engine/pkg/workspace/releasemanager/trace/memory_store_test.go
- apps/workspace-engine/test/e2e/engine_trace_test.go
- apps/workspace-engine/pkg/workspace/releasemanager/trace/phases.go
- apps/workspace-engine/test/integration/trace_helpers.go
- apps/workspace-engine/pkg/workspace/releasemanager/deployment/job_eligibility.go
apps/workspace-engine/**/*_test.go
📄 CodeRabbit inference engine (apps/workspace-engine/CLAUDE.md)
Follow the existing test structure used in *_test.go files
Files:
- apps/workspace-engine/pkg/workspace/releasemanager/trace/memory_store_test.go
- apps/workspace-engine/test/e2e/engine_trace_test.go
🧠 Learnings (14)
📓 Common learnings
Learnt from: adityachoudhari26
Repo: ctrlplanedev/ctrlplane PR: 696
File: apps/workspace-engine/pkg/workspace/releasemanager/deployment/executor.go:63-67
Timestamp: 2025-10-24T00:02:29.723Z
Learning: In workspace-engine replay mode (apps/workspace-engine/pkg/workspace/releasemanager/deployment/executor.go), jobs are created with Pending status even during replay because job updates are still accepted and processed. If a job remains Pending after replay completes, it genuinely is still pending because no updates occurred during replay.
📚 Learning: 2024-10-29T02:04:50.312Z
Learnt from: adityachoudhari26
Repo: ctrlplanedev/ctrlplane PR: 181
File: packages/api/src/router/deployment.ts:116-131
Timestamp: 2024-10-29T02:04:50.312Z
Learning: In `packages/api/src/router/deployment.ts`, the `list.byDeploymentId` procedure requires multiple database queries due to limitations of the `releaseMatchesCondition` function.
Applied to files:
- packages/trpc/src/root.ts
- packages/trpc/src/routes/DEPLOYMENT_TRACES.md
- packages/trpc/src/routes/deployment-traces.ts
- DEPLOYMENT_TRACES_SUMMARY.md
- apps/web/app/routes.ts
📚 Learning: 2024-11-01T02:37:25.510Z
Learnt from: adityachoudhari26
Repo: ctrlplanedev/ctrlplane PR: 188
File: packages/api/src/router/deployment.ts:144-161
Timestamp: 2024-11-01T02:37:25.510Z
Learning: In `packages/api/src/router/deployment.ts`, when using Drizzle ORM, there is a limitation when referencing the same table twice in a relational builder query (rbq), requiring separate queries to avoid issues.
Applied to files:
- packages/trpc/src/root.ts
- packages/trpc/src/routes/deployment-traces.ts
📚 Learning: 2024-11-27T23:16:35.580Z
Learnt from: adityachoudhari26
Repo: ctrlplanedev/ctrlplane PR: 237
File: apps/webservice/src/app/[workspaceSlug]/(app)/_components/deployment-resource-drawer/DeploymentResourceDrawer.tsx:43-50
Timestamp: 2024-11-27T23:16:35.580Z
Learning: In `DeploymentResourceDrawer.tsx`, the `isOpen` variable already checks whether `deploymentId`, `environmentId`, and `resourceId` are non-null, so additional null checks in query `enabled` conditions are not necessary.
Applied to files:
- apps/web/app/routes/ws/deployments/_components/DeploymentsNavbarTabs.tsx
- apps/web/app/routes/ws/jobs/jobs.tsx
📚 Learning: 2025-08-12T18:13:54.630Z
Learnt from: CR
Repo: ctrlplanedev/ctrlplane PR: 0
File: apps/workspace-engine/CLAUDE.md:0-0
Timestamp: 2025-08-12T18:13:54.630Z
Learning: Applies to apps/workspace-engine/**/*_test.go : Follow the existing test structure used in *_test.go files
Applied to files:
- apps/workspace-engine/pkg/workspace/releasemanager/deployment/planner.go
- apps/workspace-engine/pkg/workspace/releasemanager/trace/memory_store_test.go
- apps/workspace-engine/test/e2e/engine_trace_test.go
- apps/workspace-engine/test/integration/trace_helpers.go
📚 Learning: 2025-08-12T18:13:54.630Z
Learnt from: CR
Repo: ctrlplanedev/ctrlplane PR: 0
File: apps/workspace-engine/CLAUDE.md:0-0
Timestamp: 2025-08-12T18:13:54.630Z
Learning: Applies to apps/workspace-engine/pkg/model/selector/**/*_test.go : Write comprehensive, data-driven tests for new condition types
Applied to files:
- apps/workspace-engine/pkg/workspace/releasemanager/trace/memory_store_test.go
- apps/workspace-engine/test/e2e/engine_trace_test.go
📚 Learning: 2024-11-01T02:35:07.352Z
Learnt from: adityachoudhari26
Repo: ctrlplanedev/ctrlplane PR: 188
File: apps/webservice/src/app/[workspaceSlug]/_components/release-channel-drawer/Usage.tsx:42-44
Timestamp: 2024-11-01T02:35:07.352Z
Learning: In `apps/webservice/src/app/[workspaceSlug]/_components/release-channel-drawer/Usage.tsx`, within the `Usage` component, the arrays used for filtering inherited environments are expected to remain small. Therefore, performance optimizations for this filtering logic are not necessary.
Applied to files:
- apps/web/app/routes/ws/deployments/_components/TraceFilters.tsx
- apps/web/app/routes/ws/jobs/_components/JobsTable.tsx
- apps/web/app/routes/ws/jobs/jobs.tsx
📚 Learning: 2025-08-12T20:49:05.086Z
Learnt from: adityachoudhari26
Repo: ctrlplanedev/ctrlplane PR: 655
File: apps/workspace-engine/pkg/engine/workspace/fluent.go:166-171
Timestamp: 2025-08-12T20:49:05.086Z
Learning: The UpdateDeploymentVersions() method with OperationCreate case in apps/workspace-engine/pkg/engine/workspace/fluent.go is specifically designed only for creating new deployment versions, not for handling potential duplicates or existing versions.
Applied to files:
- packages/validators/src/auth/index.ts
- apps/workspace-engine/test/e2e/engine_trace_test.go
📚 Learning: 2025-08-12T18:13:54.630Z
Learnt from: CR
Repo: ctrlplanedev/ctrlplane PR: 0
File: apps/workspace-engine/CLAUDE.md:0-0
Timestamp: 2025-08-12T18:13:54.630Z
Learning: Applies to apps/workspace-engine/**/*.go : Write comments that explain why, document complex logic/algorithms, provide non-obvious context, include TODO/FIXME, and document exported functions/types/methods
Applied to files:
- apps/workspace-engine/test/e2e/engine_trace_test.go
- apps/workspace-engine/test/integration/trace_helpers.go
📚 Learning: 2025-08-12T18:13:54.630Z
Learnt from: CR
Repo: ctrlplanedev/ctrlplane PR: 0
File: apps/workspace-engine/CLAUDE.md:0-0
Timestamp: 2025-08-12T18:13:54.630Z
Learning: Applies to apps/workspace-engine/pkg/model/selector/**/*_test.go : Use table-driven tests for all condition types
Applied to files:
apps/workspace-engine/test/e2e/engine_trace_test.go
📚 Learning: 2025-10-24T00:02:29.723Z
Learnt from: adityachoudhari26
Repo: ctrlplanedev/ctrlplane PR: 696
File: apps/workspace-engine/pkg/workspace/releasemanager/deployment/executor.go:63-67
Timestamp: 2025-10-24T00:02:29.723Z
Learning: In workspace-engine replay mode (apps/workspace-engine/pkg/workspace/releasemanager/deployment/executor.go), jobs are created with Pending status even during replay because job updates are still accepted and processed. If a job remains Pending after replay completes, it genuinely is still pending because no updates occurred during replay.
Applied to files:
- apps/workspace-engine/test/e2e/engine_trace_test.go
- apps/workspace-engine/pkg/workspace/releasemanager/deployment/job_eligibility.go
📚 Learning: 2025-04-12T22:08:13.790Z
Learnt from: zacharyblasczyk
Repo: ctrlplanedev/ctrlplane PR: 408
File: apps/webservice/src/components/form/job-agent/JobAgentJenkinsPipelineConfig.tsx:26-31
Timestamp: 2025-04-12T22:08:13.790Z
Learning: For Jenkins job configuration, two approaches are needed: (1) a simple URL input form for airgapped environments (current focus) and (2) a dropdown selection interface for non-airgapped environments where the Jenkins server is accessible. A component similar to DeploymentJobAgentGithubConfig.tsx is preferred.
Applied to files:
apps/web/app/routes/ws/jobs/jobs.tsx
📚 Learning: 2025-03-16T19:41:44.129Z
Learnt from: zacharyblasczyk
Repo: ctrlplanedev/ctrlplane PR: 382
File: apps/webservice/src/app/api/v1/deployments/[deploymentId]/route.ts:82-88
Timestamp: 2025-03-16T19:41:44.129Z
Learning: In Next.js 15, dynamic route parameters (params) return Promises instead of direct values in async contexts. When accessing properties like `params.deploymentId` in an async function, use `(await params).deploymentId` to avoid the "params should be awaited before using its properties" error. This applies to API routes, page components, and other async contexts.
Applied to files:
apps/web/app/routes.ts
📚 Learning: 2025-08-12T18:13:54.630Z
Learnt from: CR
Repo: ctrlplanedev/ctrlplane PR: 0
File: apps/workspace-engine/CLAUDE.md:0-0
Timestamp: 2025-08-12T18:13:54.630Z
Learning: Applies to apps/workspace-engine/**/*.go : Do not add comments for standard Go patterns (e.g., noting WaitGroup or semaphore usage)
Applied to files:
apps/workspace-engine/test/integration/trace_helpers.go
🧬 Code graph analysis (12)
packages/trpc/src/root.ts (1)
- packages/trpc/src/routes/deployment-traces.ts (1): `deploymentTracesRouter` (9-283)

apps/workspace-engine/pkg/workspace/releasemanager/deployment/planner.go (4)
- apps/workspace-engine/pkg/workspace/releasemanager/trace/recorder.go (1): `ReconcileTarget` (15-37)
- apps/workspace-engine/pkg/workspace/releasemanager/trace/phases.go (1): `PlanningPhase` (12-16)
- apps/workspace-engine/pkg/workspace/releasemanager/trace/types.go (4): `DecisionRejected` (64-64), `DecisionApproved` (63-63), `ResultAllowed` (39-39), `ResultBlocked` (40-40)
- apps/workspace-engine/pkg/oapi/oapi.gen.go (3): `DeploymentVersion` (196-207), `RuleEvaluation` (584-608), `Id` (75-75)

apps/workspace-engine/pkg/workspace/releasemanager/trace/memory_store_test.go (3)
- apps/workspace-engine/pkg/workspace/releasemanager/trace/memory_store.go (1): `NewInMemoryStore` (17-21)
- apps/workspace-engine/pkg/workspace/releasemanager/trace/recorder.go (1): `NewReconcileTarget` (40-42)
- apps/workspace-engine/pkg/workspace/releasemanager/trace/types.go (2): `StatusCompleted` (72-72), `ResultAllowed` (39-39)

apps/web/app/routes/ws/deployments/_components/TraceFilters.tsx (4)
- apps/web/app/components/ui/popover.tsx (3): `Popover` (33-33), `PopoverTrigger` (33-33), `PopoverContent` (33-33)
- apps/web/app/components/ui/button.tsx (1): `Button` (60-60)
- apps/web/app/components/ui/badge.tsx (1): `Badge` (46-46)
- apps/web/app/components/ui/select.tsx (5): `Select` (175-175), `SelectTrigger` (183-183), `SelectValue` (184-184), `SelectContent` (176-176), `SelectItem` (178-178)

apps/web/app/routes/ws/deployments/page.$deploymentId.traces.tsx (5)
- apps/web/app/routes/ws/deployments/_components/trace-utils.ts (7): `DeploymentTraceSpan` (3-28), `formatTimestamp` (105-115), `buildTraceTree` (37-83), `formatDuration` (98-100), `calculateSpanDuration` (88-93), `getPhaseColor` (168-186), `getStatusColor` (141-163)
- apps/web/app/components/WorkspaceProvider.tsx (1): `useWorkspace` (33-38)
- apps/web/app/routes/ws/deployments/_components/DeploymentProvider.tsx (1): `useDeployment` (31-37)
- apps/web/app/routes/ws/deployments/_components/TraceFilters.tsx (2): `TraceFiltersState` (19-23), `TraceFilters` (32-208)
- apps/web/app/api/trpc.tsx (1): `trpc` (15-15)

apps/workspace-engine/test/e2e/engine_trace_test.go (7)
- apps/workspace-engine/pkg/workspace/workspace.go (2): `New` (11-31), `Workspace` (33-40)
- apps/workspace-engine/test/integration/workspace.go (1): `NewTestWorkspace` (38-66)
- apps/workspace-engine/test/integration/opts.go (18): `JobAgentID` (686-690), `JobAgentName` (680-684), `WithSystem` (92-113), `WithDeployment` (324-342), `DeploymentID` (375-379), `DeploymentName` (362-367), `DeploymentJobAgent` (381-385), `DeploymentCelResourceSelector` (387-393), `WithEnvironment` (344-358), `WithResource` (115-140), `PolicyID` (712-716), `PolicyName` (700-704), `WithPolicyTargetSelector` (718-728), `PolicyTargetCelEnvironmentSelector` (773-779), `PolicyTargetCelResourceSelector` (797-803), `WithPolicyRule` (730-747), `WithRuleAnyApproval` (831-838), `WithRuleGradualRollout` (842-849)
- apps/workspace-engine/test/integration/trace_helpers.go (13): `ClearTraces` (18-20), `GetAllTraces` (13-15), `VerifyTraceStructure` (47-81), `AssertSpanExists` (295-303), `AssertSpanAttributes` (84-117), `AssertPhaseExists` (316-325), `VerifySpanDepth` (182-217), `VerifyTraceTimeline` (120-146), `VerifySequenceNumbers` (220-256), `DumpTrace` (259-280), `CountSpansByType` (283-292), `FindSpansByPhase` (33-44), `GetSpanAttribute` (172-179)
- apps/workspace-engine/test/integration/creators/deployment_version.go (1): `NewDeploymentVersion` (15-36)
- apps/workspace-engine/pkg/events/handler/handler.go (1): `DeploymentVersionCreate` (43-43)
- apps/workspace-engine/pkg/workspace/releasemanager/trace/types.go (8): `Phase` (13-13), `PhasePlanning` (17-17), `PhaseEligibility` (18-18), `PhaseExecution` (19-19), `NodeTypeEvaluation` (29-29), `NodeTypeDecision` (31-31), `NodeTypeCheck` (30-30), `NodeTypeAction` (32-32)

packages/trpc/src/routes/deployment-traces.ts (1)
- packages/trpc/src/trpc.ts (2): `router` (45-45), `protectedProcedure` (97-97)

apps/web/app/routes/ws/jobs/jobs.tsx (3)
- apps/web/app/routes/ws/jobs/_components/ResourceFilter.tsx (1): `ResourceFilter` (38-85)
- apps/web/app/routes/ws/jobs/_components/EnvironmentFilter.tsx (1): `EnvironmentFilter` (30-72)
- apps/web/app/routes/ws/jobs/_components/DeploymentFilter.tsx (1): `DeploymentFilter` (30-74)

apps/workspace-engine/test/integration/trace_helpers.go (2)
- apps/workspace-engine/test/integration/workspace.go (1): `TestWorkspace` (31-36)
- apps/workspace-engine/pkg/workspace/releasemanager/trace/types.go (2): `Phase` (13-13), `NodeType` (25-25)

apps/workspace-engine/pkg/workspace/releasemanager/deployment/job_eligibility.go (3)
- apps/workspace-engine/pkg/workspace/releasemanager/trace/recorder.go (1): `ReconcileTarget` (15-37)
- apps/workspace-engine/pkg/workspace/releasemanager/trace/phases.go (1): `EligibilityPhase` (107-111)
- apps/workspace-engine/pkg/workspace/releasemanager/trace/types.go (4): `CheckResultPass` (47-47), `CheckResultFail` (48-48), `DecisionApproved` (63-63), `DecisionRejected` (64-64)

apps/web/app/routes/ws/jobs/_components/JobActions.tsx (4)
- apps/web/app/components/WorkspaceProvider.tsx (1): `useWorkspace` (33-38)
- apps/web/app/api/trpc.tsx (1): `trpc` (15-15)
- apps/web/app/routes/ws/deployments/_components/trace-utils.ts (1): `buildTraceTree` (37-83)
- apps/web/app/routes/ws/deployments/_components/TraceTree.tsx (1): `TraceTree` (294-375)

apps/web/app/routes/ws/deployments/_components/TraceTree.tsx (2)
- apps/web/app/routes/ws/deployments/_components/trace-utils.ts (6): `TraceTreeNode` (30-32), `DeploymentTraceSpan` (3-28), `getSpanDisplayInfo` (238-285), `calculateSpanDuration` (88-93), `formatRelativeTime` (120-136), `formatDuration` (98-100)
- apps/web/app/lib/utils.ts (1): `cn` (4-6)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (8)
- GitHub Check: workspace-engine-tests
- GitHub Check: build (linux/amd64)
- GitHub Check: Typecheck
- GitHub Check: Lint
- GitHub Check: build (linux/amd64)
- GitHub Check: tests
- GitHub Check: build-and-push (linux/amd64)
- GitHub Check: build (linux/amd64)
🔇 Additional comments (15)
apps/workspace-engine/pkg/workspace/releasemanager/trace/phases.go (2)
75-78: LGTM! Clean builder pattern for attribute chaining.
The `SetAttributes` method provides a convenient way to add multiple attributes while maintaining chainability, consistent with the existing `AddMetadata` and `SetResult` methods.
88-99: LGTM! Enhanced observability with message attribute.
Recording the message as both a structured attribute (`ctrlplane.message`) and in the span status improves trace querying capabilities.
apps/web/package.json (1)
58-58: LGTM! Dependency aligns with usage.
The `nuqs` dependency is appropriately added to support the `NuqsAdapter` wrapper introduced in `apps/web/app/root.tsx`.
apps/web/app/root.tsx (2)
1-1: LGTM! Appropriate adapter import.
The `NuqsAdapter` import from the React Router v7 adapter is correctly specified for the version in use.
56-60: LGTM! Proper integration of URL query state management.
Wrapping the `Outlet` with `NuqsAdapter` enables URL query string state management throughout the application. The adapter is correctly positioned in the component tree.
DEPLOYMENT_TRACES_SUMMARY.md (1)
1-209: LGTM! Comprehensive API documentation.
The documentation thoroughly covers the deployment traces API implementation, including endpoints, authorization, usage examples, and integration patterns. This will be valuable for developers working with the traces feature.
packages/trpc/src/root.ts (2)
1-1: LGTM! Proper router import.
The `deploymentTracesRouter` import follows the established pattern for router modules in this file.
25-25: LGTM! Consistent router integration.
The `deploymentTraces` router is properly mounted in the main `appRouter`, following the same pattern as other routers in the application.
apps/web/app/routes/ws/jobs/jobs.tsx (2)
10-13: LGTM! Filter component imports.
The filter components are properly imported and match the implementations shown in the relevant code snippets.
38-60: LGTM! Well-structured header layout with filters.
The header layout properly accommodates the new filter components using flexbox. The `shrink-0` classes on the trigger and separator prevent unwanted shrinking, while the `flex-1` spacer correctly pushes the filters to the right side of the header.
apps/web/app/routes.ts (2)
6-6: LGTM! Workspace creation route added.
The route for workspace creation is properly placed at the root protected level and follows the established routing pattern.
20-23: LGTM! Deployment traces route added.
The traces route is properly nested under the deployments layout and follows the established pattern for deployment-related routes (consistent with resources, versions, and release-targets routes).
packages/validators/src/auth/index.ts (1)
packages/validators/src/auth/index.ts (1)
105-107: LGTM! Permissions follow established conventions.The new
DeploymentTraceGetandDeploymentTraceListpermissions follow the established naming pattern and will be automatically included in the viewer role through the filter at lines 138-140. They align with the authorization checks in thedeploymentTracesRouter.apps/web/app/routes/ws/jobs/_components/JobsTable.tsx (2)
94-98: LGTM! Proper truncation for External ID.The External ID cell now uses a fixed 200px width with truncation, preventing long IDs from breaking the table layout. The
overflow-hiddenclass on the cell andtruncateon the inner div work together correctly.
119-119: LGTM! Spacing simplification.The removal of
py-2simplifies the container spacing while maintaining the vertical gap between elements withspace-y-2.
```tsx
const releaseTargetsMap = useMemo(() => {
  return Object.fromEntries(
    releaseTargetsQuery.data?.items.map((rt) => [
      `${rt.resource.id}-${rt.environment.id}-${rt.deployment.id}`,
      rt,
    ]) ?? [],
  );
}, [releaseTargetsQuery.data?.items]);
```
Release target key mapping breaks lookups
`releaseTargetsMap` is keyed with a locally reconstructed `${resource.id}-${environment.id}-${deployment.id}` string, but both the incoming trace data (`trace.releaseTargetKey`) and the filter payloads rely on the canonical `releaseTarget.key` returned by the API. Because of that mismatch we never resolve the environment/resource metadata for root traces, and release-target filtering silently fails. Please key everything off the server-provided `releaseTarget.key` so the lookup and filters stay in sync.
- const releaseTargetsMap = useMemo(() => {
- return Object.fromEntries(
- releaseTargetsQuery.data?.items.map((rt) => [
- `${rt.resource.id}-${rt.environment.id}-${rt.deployment.id}`,
- rt,
- ]) ?? [],
- );
- }, [releaseTargetsQuery.data?.items]);
+ const releaseTargetsMap = useMemo(() => {
+ if (!releaseTargetsQuery.data?.items) {
+ return {};
+ }
+ return Object.fromEntries(
+ releaseTargetsQuery.data.items
+ .filter((rt) => rt.releaseTarget?.key)
+ .map((rt) => [rt.releaseTarget.key, rt]),
+ );
+ }, [releaseTargetsQuery.data?.items]);
…
- const releaseTargetOptions = releaseTargets.map((rt) => ({
- key: `${rt.releaseTarget.deploymentId}-${rt.releaseTarget.environmentId}-${rt.releaseTarget.resourceId}`,
- name: `${rt.environment.name}/${rt.resource.name}`,
- }));
+ const releaseTargetOptions = releaseTargets
+ .filter((rt) => rt.releaseTarget?.key)
+ .map((rt) => ({
+ key: rt.releaseTarget.key,
+ name: `${rt.environment.name}/${rt.resource.name}`,
+ }));

Also applies to: 223-226
🤖 Prompt for AI Agents
In apps/web/app/routes/ws/deployments/page.$deploymentId.traces.tsx around lines
155-163 (and similarly lines 223-226), the releaseTargetsMap is built using a
reconstructed `${resource.id}-${environment.id}-${deployment.id}` key which does
not match the server-provided releaseTarget.key used by trace.releaseTargetKey
and filter payloads; change the map to use the canonical releaseTarget.key
returned by the API when creating entries and update any lookups to use that
same releaseTarget.key so environment/resource metadata resolves correctly and
filters work as expected.
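A standalone sketch of that direction — the item shape below is inferred from the snippets in this review and the key format is assumed, so treat it as illustrative rather than the actual API types:

```typescript
interface ReleaseTargetItem {
  releaseTarget: { key: string };
  environment: { name: string };
  resource: { name: string };
}

// Key the lookup on the canonical, server-provided releaseTarget.key so that
// trace.releaseTargetKey and filter payloads resolve against the same values.
function buildReleaseTargetsMap(
  items: ReleaseTargetItem[] | undefined,
): Record<string, ReleaseTargetItem> {
  if (!items) return {};
  return Object.fromEntries(
    items
      .filter((rt) => rt.releaseTarget.key)
      .map((rt) => [rt.releaseTarget.key, rt]),
  );
}

const map = buildReleaseTargetsMap([
  {
    releaseTarget: { key: "env-1/res-1" },
    environment: { name: "prod" },
    resource: { name: "api" },
  },
]);
console.log(map["env-1/res-1"]?.environment.name); // prod
```

Because both the map and the filter options derive from the same server field, a format change in the key can never desynchronize them.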
```go
if planning != nil {
	for _, result := range firstResults {
		evaluation := planning.StartEvaluation(result.Message)
		display := firstVersion.Name
		if display == "" {
			display = firstVersion.Tag
		}
		evaluation.SetAttributes(
			attribute.String("ctrlplane.group_by", firstVersion.Id),
			attribute.String("ctrlplane.group_by_name", display),
			attribute.String("ctrlplane.version_id", firstVersion.Id),
			attribute.String("ctrlplane.version_name", firstVersion.Name),
			attribute.String("ctrlplane.version_tag", firstVersion.Tag),
		)
		evaluation.SetResult(trace.ResultBlocked, result.Message).End()
	}
}
```
Fix trace outcome when replaying first-version evaluations
Line [374]: when no version is deployable, we iterate firstResults and force every evaluation into trace.ResultBlocked. However firstResults contains all checks the first candidate ran before the blocker fired, so successful checks are now reported as failures. This corrupts the trace and makes debugging harder. Please derive the trace outcome from result.Allowed (and optionally guard firstVersion != nil) instead of hard-coding blocked.
- if planning != nil {
- for _, result := range firstResults {
- evaluation := planning.StartEvaluation(result.Message)
- display := firstVersion.Name
- if display == "" {
- display = firstVersion.Tag
- }
- evaluation.SetAttributes(
- attribute.String("ctrlplane.group_by", firstVersion.Id),
- attribute.String("ctrlplane.group_by_name", display),
- attribute.String("ctrlplane.version_id", firstVersion.Id),
- attribute.String("ctrlplane.version_name", firstVersion.Name),
- attribute.String("ctrlplane.version_tag", firstVersion.Tag),
- )
- evaluation.SetResult(trace.ResultBlocked, result.Message).End()
- }
- }
+ if planning != nil && firstVersion != nil {
+ display := firstVersion.Name
+ if display == "" {
+ display = firstVersion.Tag
+ }
+
+ for _, result := range firstResults {
+ evaluation := planning.StartEvaluation(result.Message)
+
+ outcome := trace.ResultAllowed
+ if !result.Allowed {
+ outcome = trace.ResultBlocked
+ }
+
+ evaluation.SetAttributes(
+ attribute.String("ctrlplane.group_by", firstVersion.Id),
+ attribute.String("ctrlplane.group_by_name", display),
+ attribute.String("ctrlplane.version_id", firstVersion.Id),
+ attribute.String("ctrlplane.version_name", firstVersion.Name),
+ attribute.String("ctrlplane.version_tag", firstVersion.Tag),
+ )
+ evaluation.SetResult(outcome, result.Message).End()
+ }
+ }

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```go
if planning != nil && firstVersion != nil {
	display := firstVersion.Name
	if display == "" {
		display = firstVersion.Tag
	}

	for _, result := range firstResults {
		evaluation := planning.StartEvaluation(result.Message)

		outcome := trace.ResultAllowed
		if !result.Allowed {
			outcome = trace.ResultBlocked
		}

		evaluation.SetAttributes(
			attribute.String("ctrlplane.group_by", firstVersion.Id),
			attribute.String("ctrlplane.group_by_name", display),
			attribute.String("ctrlplane.version_id", firstVersion.Id),
			attribute.String("ctrlplane.version_name", firstVersion.Name),
			attribute.String("ctrlplane.version_tag", firstVersion.Tag),
		)
		evaluation.SetResult(outcome, result.Message).End()
	}
}
```
🤖 Prompt for AI Agents
In apps/workspace-engine/pkg/workspace/releasemanager/deployment/planner.go
around lines 372 to 387, the code currently forces every replayed first-version
evaluation to trace.ResultBlocked which incorrectly marks successful checks as
failures; update the logic to (1) guard that firstVersion != nil before using
its fields, (2) derive the trace outcome from result.Allowed — call
evaluation.SetResult(trace.ResultOK, result.Message) when result.Allowed is true
and evaluation.SetResult(trace.ResultBlocked, result.Message) when false — and
(3) keep setting the same attributes and then End() the evaluation.
```go
func TestInMemoryStore_ConcurrentWrites(t *testing.T) {
	store := NewInMemoryStore()
	ctx := context.Background()

	var wg sync.WaitGroup
	numGoroutines := 10

	// Write spans concurrently
	for i := 0; i < numGoroutines; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()

			rt := NewReconcileTarget("workspace-1", "test")
			planning := rt.StartPlanning()
			planning.End()
			rt.Complete(StatusCompleted)
			spans := rt.exporter.getSpans()

			if err := store.WriteSpans(ctx, spans); err != nil {
				t.Errorf("goroutine %d: WriteSpans failed: %v", id, err)
			}
		}(i)
	}

	wg.Wait()

	// All spans should be stored
	allSpans := store.GetSpans()

	// We expect at least numGoroutines * 2 spans (root + planning per goroutine)
	expectedMin := numGoroutines * 2
	if len(allSpans) < expectedMin {
		t.Errorf("expected at least %d spans from concurrent writes, got %d", expectedMin, len(allSpans))
	}
}
```
Fix unsafe concurrent access to testing.T.
Line 181 calls `t.Errorf` inside a goroutine. `t.Log` and `t.Error` are documented as safe for concurrent use, but `t.Fatal`/`FailNow` are not and must run on the goroutine executing the test function; collecting errors under a mutex and reporting them after `wg.Wait()` keeps failure handling on the main test goroutine and makes upgrading to a fatal failure straightforward.
Apply this pattern to collect errors safely:
func TestInMemoryStore_ConcurrentWrites(t *testing.T) {
store := NewInMemoryStore()
ctx := context.Background()
var wg sync.WaitGroup
+ var mu sync.Mutex
+ var errors []error
numGoroutines := 10
// Write spans concurrently
for i := 0; i < numGoroutines; i++ {
wg.Add(1)
go func(id int) {
defer wg.Done()
rt := NewReconcileTarget("workspace-1", "test")
planning := rt.StartPlanning()
planning.End()
rt.Complete(StatusCompleted)
spans := rt.exporter.getSpans()
if err := store.WriteSpans(ctx, spans); err != nil {
- t.Errorf("goroutine %d: WriteSpans failed: %v", id, err)
+ mu.Lock()
+ errors = append(errors, fmt.Errorf("goroutine %d: WriteSpans failed: %w", id, err))
+ mu.Unlock()
}
}(i)
}
wg.Wait()
+
+ for _, err := range errors {
+ t.Error(err)
+ }
// All spans should be stored
allSpans := store.GetSpans()

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
func TestInMemoryStore_ConcurrentWrites(t *testing.T) {
	store := NewInMemoryStore()
	ctx := context.Background()

	var wg sync.WaitGroup
	var mu sync.Mutex
	var errors []error
	numGoroutines := 10

	// Write spans concurrently
	for i := 0; i < numGoroutines; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()

			rt := NewReconcileTarget("workspace-1", "test")
			planning := rt.StartPlanning()
			planning.End()
			rt.Complete(StatusCompleted)
			spans := rt.exporter.getSpans()

			if err := store.WriteSpans(ctx, spans); err != nil {
				mu.Lock()
				errors = append(errors, fmt.Errorf("goroutine %d: WriteSpans failed: %w", id, err))
				mu.Unlock()
			}
		}(i)
	}

	wg.Wait()

	for _, err := range errors {
		t.Error(err)
	}

	// All spans should be stored
	allSpans := store.GetSpans()

	// We expect at least numGoroutines * 2 spans (root + planning per goroutine)
	expectedMin := numGoroutines * 2
	if len(allSpans) < expectedMin {
		t.Errorf("expected at least %d spans from concurrent writes, got %d", expectedMin, len(allSpans))
	}
}
🤖 Prompt for AI Agents
In apps/workspace-engine/pkg/workspace/releasemanager/trace/memory_store_test.go
around lines 161 to 196, the test calls t.Errorf from inside goroutines which is
unsafe; change the test to collect errors from goroutines and report them from
the main test goroutine: create a buffered error channel (or slice protected by
a mutex), have each goroutine send an error value (or nil) into the channel
instead of calling t.Errorf, close the channel after wg.Wait, then iterate the
collected errors on the main goroutine and call t.Errorf for each non-nil error
(or aggregate into a single failure) so all testing.T calls occur only on the
main test goroutine.
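The channel-based pattern these comments recommend can be shown as a small standalone program. The names writeItem and collectErrors are illustrative for this sketch, not code from the PR:

```go
package main

import (
	"fmt"
	"sync"
)

// writeItem stands in for a store write that may fail (illustrative only).
func writeItem(id int) error {
	if id < 0 {
		return fmt.Errorf("invalid id %d", id)
	}
	return nil
}

// collectErrors fans out n workers and gathers their failures so they can
// be reported from the main goroutine, where test helpers are safe to call.
func collectErrors(n int) []error {
	var wg sync.WaitGroup
	errCh := make(chan error, n) // buffered: workers never block on send

	for i := 0; i < n; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			if err := writeItem(id); err != nil {
				errCh <- fmt.Errorf("goroutine %d: %w", id, err)
			}
		}(i)
	}

	wg.Wait()
	close(errCh)

	var errs []error
	for err := range errCh {
		errs = append(errs, err)
	}
	return errs
}

func main() {
	errs := collectErrors(10)
	fmt.Printf("collected %d errors\n", len(errs)) // prints "collected 0 errors"
}
```

In a test, the final loop would call t.Error(err) for each collected error instead of appending to a slice.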
func TestInMemoryStore_ConcurrentReads(t *testing.T) {
	store := NewInMemoryStore()
	ctx := context.Background()

	// Write initial spans
	rt := NewReconcileTarget("workspace-1", "test")
	planning := rt.StartPlanning()
	planning.End()
	rt.Complete(StatusCompleted)
	spans := rt.exporter.getSpans()

	if err := store.WriteSpans(ctx, spans); err != nil {
		t.Fatalf("WriteSpans failed: %v", err)
	}

	var wg sync.WaitGroup
	numReaders := 20

	// Read concurrently
	for i := 0; i < numReaders; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()

			retrieved := store.GetSpans()
			if len(retrieved) == 0 {
				t.Errorf("reader %d: expected non-empty spans", id)
			}
		}(i)
	}

	wg.Wait()
}
Fix unsafe concurrent access to testing.T.
Line 224 calls t.Errorf inside a goroutine, which is unsafe.
Apply the same error collection pattern as suggested for TestInMemoryStore_ConcurrentWrites.
🤖 Prompt for AI Agents
In apps/workspace-engine/pkg/workspace/releasemanager/trace/memory_store_test.go
around lines 198 to 230, the test spawns goroutines that call t.Errorf which is
unsafe; change to collect per-goroutine error messages into a buffered channel
(or a slice guarded by a mutex) instead of calling t.Errorf inside the
goroutine, send a descriptive error string from each failing goroutine, wait for
all goroutines, close the channel, then iterate the collected errors on the main
test goroutine and fail the test (t.Fatalf or t.Errorf) if any errors were
reported.
func TestInMemoryStore_ConcurrentReadWrite(t *testing.T) {
	store := NewInMemoryStore()
	ctx := context.Background()

	var wg sync.WaitGroup
	numWriters := 5
	numReaders := 10

	// Concurrent writers
	for i := 0; i < numWriters; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()

			rt := NewReconcileTarget("workspace-1", "test")
			rt.Complete(StatusCompleted)
			spans := rt.exporter.getSpans()

			if err := store.WriteSpans(ctx, spans); err != nil {
				t.Errorf("writer %d: WriteSpans failed: %v", id, err)
			}
		}(i)
	}

	// Concurrent readers
	for i := 0; i < numReaders; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()

			// Just read, don't validate count since writes are happening
			_ = store.GetSpans()
		}(i)
	}

	wg.Wait()

	// Final verification - all writes should be present
	finalSpans := store.GetSpans()
	if len(finalSpans) < numWriters {
		t.Errorf("expected at least %d spans after concurrent read/write, got %d", numWriters, len(finalSpans))
	}
}
Fix unsafe concurrent access to testing.T.
Line 251 calls t.Errorf inside a goroutine, which is unsafe.
Apply the same error collection pattern as suggested for TestInMemoryStore_ConcurrentWrites.
🤖 Prompt for AI Agents
In apps/workspace-engine/pkg/workspace/releasemanager/trace/memory_store_test.go
around lines 232 to 274, the test calls t.Errorf inside goroutines which is
unsafe; change the test to collect writer errors from goroutines into a channel
or a mutex-protected slice (same pattern used in
TestInMemoryStore_ConcurrentWrites), have each goroutine send any error instead
of calling t.Errorf, close/wait for all goroutines, then range over collected
errors on the main test goroutine and call t.Errorf once per error (or fail the
test) and perform the final verification after processing collected errors.
func TestInMemoryStore_NilContext(t *testing.T) {
	store := NewInMemoryStore()

	rt := NewReconcileTarget("workspace-1", "test")
	rt.Complete(StatusCompleted)
	spans := rt.exporter.getSpans()

	// WriteSpans should handle nil context
	err := store.WriteSpans(context.Background(), spans)
	if err != nil {
		t.Fatalf("WriteSpans with nil context failed: %v", err)
	}

	// Verify spans were still stored
	if len(store.GetSpans()) != len(spans) {
		t.Error("spans should be stored even with nil context")
	}
}
Test doesn't verify nil context handling as intended.
The comment on line 468 states "WriteSpans should handle nil context," but line 469 passes context.Background() instead of nil. The test name also suggests it should test nil context.
Apply this diff to test actual nil context:
// WriteSpans should handle nil context
- err := store.WriteSpans(context.Background(), spans)
+ err := store.WriteSpans(nil, spans)
if err != nil {
t.Fatalf("WriteSpans with nil context failed: %v", err)
}

Alternatively, if WriteSpans doesn't support nil context, update the test name and comment to reflect that it tests with a valid context.
Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In apps/workspace-engine/pkg/workspace/releasemanager/trace/memory_store_test.go
around lines 461 to 478, the test named TestInMemoryStore_NilContext claims
"WriteSpans should handle nil context" but calls
WriteSpans(context.Background(), ...) instead of passing nil; update the test to
pass a nil context (call store.WriteSpans(nil, spans)) and adjust the inline
comment if needed to reflect that we're testing nil context handling, or if
WriteSpans does not accept nil, rename the test and update the comment to
indicate it's testing with a valid context.
byTraceId: protectedProcedure
  .input(
    z.object({
      workspaceId: z.uuid(),
      traceId: z.string(),
    }),
  )
  .query(async ({ ctx, input }) => {
    const spans = await ctx.db
      .select()
      .from(schema.deploymentTraceSpan)
      .where(
        and(
          eq(schema.deploymentTraceSpan.workspaceId, input.workspaceId),
          eq(schema.deploymentTraceSpan.traceId, input.traceId),
        ),
      )
      .orderBy(schema.deploymentTraceSpan.startTime);

    return spans;
  }),

byReleaseId: protectedProcedure
  .input(
    z.object({
      workspaceId: z.string().uuid(),
      releaseId: z.string(),
      limit: z.number().min(1).max(1000).default(100),
      offset: z.number().min(0).default(0),
    }),
  )
  .query(async ({ ctx, input }) => {
    const spans = await ctx.db
      .select()
      .from(schema.deploymentTraceSpan)
      .where(
        and(
          eq(schema.deploymentTraceSpan.workspaceId, input.workspaceId),
          eq(schema.deploymentTraceSpan.releaseId, input.releaseId),
        ),
      )
      .orderBy(desc(schema.deploymentTraceSpan.createdAt))
      .limit(input.limit)
      .offset(input.offset);

    return spans;
  }),

byReleaseTargetKey: protectedProcedure
  .input(
    z.object({
      workspaceId: z.string().uuid(),
      releaseTargetKey: z.string(),
      limit: z.number().min(1).max(1000).default(100),
      offset: z.number().min(0).default(0),
    }),
  )
  .query(async ({ ctx, input }) => {
    const spans = await ctx.db
      .select()
      .from(schema.deploymentTraceSpan)
      .where(
        and(
          eq(schema.deploymentTraceSpan.workspaceId, input.workspaceId),
          eq(
            schema.deploymentTraceSpan.releaseTargetKey,
            input.releaseTargetKey,
          ),
        ),
      )
      .orderBy(desc(schema.deploymentTraceSpan.createdAt))
      .limit(input.limit)
      .offset(input.offset);

    return spans;
  }),

byJobId: protectedProcedure
  .input(
    z.object({
      workspaceId: z.string().uuid(),
      jobId: z.string(),
      limit: z.number().min(1).max(1000).default(100),
      offset: z.number().min(0).default(0),
    }),
  )
  .query(async ({ ctx, input }) => {
    const spans = await ctx.db
      .select()
      .from(schema.deploymentTraceSpan)
      .where(
        and(
          eq(schema.deploymentTraceSpan.workspaceId, input.workspaceId),
          eq(schema.deploymentTraceSpan.jobId, input.jobId),
        ),
      )
      .orderBy(desc(schema.deploymentTraceSpan.createdAt))
      .limit(input.limit)
      .offset(input.offset);

    return spans;
  }),
Enforce deployment trace permissions on all “by*” queries
The new DeploymentTraceGet/DeploymentTraceList permissions never fire for byTraceId, byReleaseId, byReleaseTargetKey, or byJobId because these procedures omit an .meta(...authorizationCheck...). As written, any workspace member can call these endpoints and enumerate traces regardless of the new authorization policy—this is a security regression. Please gate each procedure with the appropriate permission before we ship.
byTraceId: protectedProcedure
+ .meta({
+ authorizationCheck: ({ canUser, input }) =>
+ canUser
+ .perform(Permission.DeploymentTraceGet)
+ .on({ type: "workspace", id: input.workspaceId }),
+ })
.input(
z.object({
workspaceId: z.uuid(),
traceId: z.string(),
}),
)
@@
byReleaseId: protectedProcedure
+ .meta({
+ authorizationCheck: ({ canUser, input }) =>
+ canUser
+ .perform(Permission.DeploymentTraceList)
+ .on({ type: "workspace", id: input.workspaceId }),
+ })
.input(
z.object({
workspaceId: z.string().uuid(),
releaseId: z.string(),
@@
byReleaseTargetKey: protectedProcedure
+ .meta({
+ authorizationCheck: ({ canUser, input }) =>
+ canUser
+ .perform(Permission.DeploymentTraceList)
+ .on({ type: "workspace", id: input.workspaceId }),
+ })
.input(
z.object({
workspaceId: z.string().uuid(),
releaseTargetKey: z.string(),
@@
byJobId: protectedProcedure
+ .meta({
+ authorizationCheck: ({ canUser, input }) =>
+ canUser
+ .perform(Permission.DeploymentTraceList)
+ .on({ type: "workspace", id: input.workspaceId }),
+ })
.input(
z.object({
workspaceId: z.string().uuid(),
jobId: z.string(),

🤖 Prompt for AI Agents
packages/trpc/src/routes/deployment-traces.ts lines 10-111: each of the four
procedures (byTraceId, byReleaseId, byReleaseTargetKey, byJobId) is missing the
authorization meta so the new DeploymentTraceGet/DeploymentTraceList checks
never run; fix by attaching the appropriate .meta(authorizationCheck(...)) to
each procedure — use DeploymentTraceGet for byTraceId and DeploymentTraceList
for the list-style endpoints (byReleaseId, byReleaseTargetKey, byJobId), and
ensure the authorizationCheck is supplied the workspaceId and the relevant
identifier (traceId, releaseId, releaseTargetKey, or jobId) so the permission
check can evaluate the correct resource context before the query runs.
Actionable comments posted: 1
♻️ Duplicate comments (5)
apps/workspace-engine/pkg/workspace/releasemanager/trace/memory_store_test.go (4)
161-196: Fix unsafe concurrent access to testing.T.

Line 181 calls
t.Errorf inside a goroutine, which is unsafe. The testing.T methods are not goroutine-safe and must only be called from the main test goroutine.

Collect errors from goroutines and report them on the main test goroutine:
func TestInMemoryStore_ConcurrentWrites(t *testing.T) {
store := NewInMemoryStore()
ctx := context.Background()
var wg sync.WaitGroup
+ errCh := make(chan error, 10)
numGoroutines := 10
// Write spans concurrently
for i := 0; i < numGoroutines; i++ {
wg.Add(1)
go func(id int) {
defer wg.Done()
rt := NewReconcileTarget("workspace-1", "test")
planning := rt.StartPlanning()
planning.End()
rt.Complete(StatusCompleted)
spans := rt.exporter.getSpans()
if err := store.WriteSpans(ctx, spans); err != nil {
- t.Errorf("goroutine %d: WriteSpans failed: %v", id, err)
+ errCh <- fmt.Errorf("goroutine %d: WriteSpans failed: %w", id, err)
}
}(i)
}
wg.Wait()
+ close(errCh)
+
+ for err := range errCh {
+ t.Error(err)
+ }
// All spans should be stored
allSpans := store.GetSpans()
198-230: Fix unsafe concurrent access to testing.T.

Line 224 calls
t.Errorf inside a goroutine, which is unsafe.

Apply the same error collection pattern:
func TestInMemoryStore_ConcurrentReads(t *testing.T) {
store := NewInMemoryStore()
ctx := context.Background()
// Write initial spans
rt := NewReconcileTarget("workspace-1", "test")
planning := rt.StartPlanning()
planning.End()
rt.Complete(StatusCompleted)
spans := rt.exporter.getSpans()
if err := store.WriteSpans(ctx, spans); err != nil {
t.Fatalf("WriteSpans failed: %v", err)
}
var wg sync.WaitGroup
+ errCh := make(chan error, 20)
numReaders := 20
// Read concurrently
for i := 0; i < numReaders; i++ {
wg.Add(1)
go func(id int) {
defer wg.Done()
retrieved := store.GetSpans()
if len(retrieved) == 0 {
- t.Errorf("reader %d: expected non-empty spans", id)
+ errCh <- fmt.Errorf("reader %d: expected non-empty spans", id)
}
}(i)
}
wg.Wait()
+ close(errCh)
+
+ for err := range errCh {
+ t.Error(err)
+ }
}
232-274: Fix unsafe concurrent access to testing.T.

Line 251 calls
t.Errorf inside a goroutine, which is unsafe.

Apply the same error collection pattern:
func TestInMemoryStore_ConcurrentReadWrite(t *testing.T) {
store := NewInMemoryStore()
ctx := context.Background()
var wg sync.WaitGroup
+ errCh := make(chan error, 5)
numWriters := 5
numReaders := 10
// Concurrent writers
for i := 0; i < numWriters; i++ {
wg.Add(1)
go func(id int) {
defer wg.Done()
rt := NewReconcileTarget("workspace-1", "test")
rt.Complete(StatusCompleted)
spans := rt.exporter.getSpans()
if err := store.WriteSpans(ctx, spans); err != nil {
- t.Errorf("writer %d: WriteSpans failed: %v", id, err)
+ errCh <- fmt.Errorf("writer %d: WriteSpans failed: %w", id, err)
}
}(i)
}
// Concurrent readers
for i := 0; i < numReaders; i++ {
wg.Add(1)
go func(id int) {
defer wg.Done()
// Just read, don't validate count since writes are happening
_ = store.GetSpans()
}(i)
}
wg.Wait()
+ close(errCh)
+
+ for err := range errCh {
+ t.Error(err)
+ }
// Final verification - all writes should be present
finalSpans := store.GetSpans()
461-478: Test doesn't verify nil context handling as intended.

The test name and comment claim to test nil context handling, but line 469 passes
context.Background() instead of nil.

Apply this diff to test actual nil context:
// WriteSpans should handle nil context
- err := store.WriteSpans(context.Background(), spans)
+ err := store.WriteSpans(nil, spans)
if err != nil {
t.Fatalf("WriteSpans with nil context failed: %v", err)
}

Alternatively, if
WriteSpans doesn't support nil context, update the test name and comment to reflect that it tests with a valid context.

apps/workspace-engine/pkg/workspace/releasemanager/deployment/executor.go (1)
87-116: Persist the ExternalId before writing the job
Line 88 still upserts the job before we populate newJob.ExternalId, so the stored record never contains the trace token. External executors that read back the job will miss the token entirely, breaking the flow this block is adding. Please assign the token (or re-upsert immediately afterwards) before the first Jobs.Upsert so the persisted job includes the ExternalId.

@@
- span.AddEvent("Persisting job to store")
- e.store.Jobs.Upsert(ctx, newJob)
- span.SetAttributes(
- attribute.Bool("job.created", true),
- attribute.String("job.id", newJob.Id),
- attribute.String("job.status", string(newJob.Status)),
- )
-
- // Set job ID on trace recorder so all subsequent spans are associated with this job
- if recorder != nil {
- recorder.SetJobID(newJob.Id)
- }
-
- // Generate trace token for external executors
- if recorder != nil && createJobAction != nil {
+ // Set job ID on trace recorder so all subsequent spans are associated with this job
+ if recorder != nil {
+ recorder.SetJobID(newJob.Id)
+ }
+
+ // Generate trace token for external executors
+ if recorder != nil && createJobAction != nil {
 traceToken := trace.GenerateDefaultTraceToken(recorder.RootTraceID(), newJob.Id)
 createJobAction.AddMetadata("trace_token", traceToken)
 createJobAction.AddMetadata("job_id", newJob.Id)
 createJobAction.AddStep("Generate trace token", trace.StepResultPass, "Token generated for external executor")
@@
 newJob.ExternalId = &traceToken
 }
 }
+ span.AddEvent("Persisting job to store")
+ e.store.Jobs.Upsert(ctx, newJob)
+ span.SetAttributes(
+ attribute.Bool("job.created", true),
+ attribute.String("job.id", newJob.Id),
+ attribute.String("job.status", string(newJob.Status)),
+ )
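The ordering issue above is an instance of a general mutate-before-persist rule: populate every field the stored record must carry, then write it. The Job, Store, and createJob names below are hypothetical stand-ins, not the engine's actual types:

```go
package main

import "fmt"

// Job is a hypothetical stand-in for the persisted job record.
type Job struct {
	Id         string
	ExternalId *string
}

// Store is a hypothetical in-memory job store.
type Store struct{ jobs map[string]Job }

func (s *Store) Upsert(j Job) { s.jobs[j.Id] = j }

// createJob assigns the externally visible token before the first Upsert,
// so the persisted copy already carries it. Persisting first and mutating
// the local struct afterwards would leave the stored copy without the token.
func createJob(s *Store, id, token string) Job {
	j := Job{Id: id}
	j.ExternalId = &token // mutate first
	s.Upsert(j)           // then persist
	return j
}

func main() {
	s := &Store{jobs: map[string]Job{}}
	createJob(s, "job-1", "trace-token-abc")
	fmt.Println(*s.jobs["job-1"].ExternalId) // prints "trace-token-abc"
}
```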
🧹 Nitpick comments (4)
apps/workspace-engine/pkg/workspace/releasemanager/policy/evaluator/skipdeployed/skipdeplyed.go (1)
93-93: Consider clarifying which version is shown in the message.

The message "previous deployment was for a different release (version: %s)" could be misread as showing the previous release's version, when it actually shows the current/new release version being deployed.
Consider rephrasing for clarity:
- fmt.Sprintf("New release version detected - previous deployment was for a different release (version: %s)", release.Version.Tag),
+ fmt.Sprintf("Allowing new release version %s - previous deployment was for a different release", release.Version.Tag),

apps/web/app/components/EngineProvider.tsx (1)
29-37: Simplify the nested ternary for better readability.

The nested ternary is complex and uses loose equality (
== on line 31). Consider refactoring for clarity:

Apply this diff to simplify the logic:
- const { theme, systemTheme } = useTheme();
- const color =
-   theme == "system"
-     ? systemTheme === "dark"
-       ? "white"
-       : "black"
-     : theme === "dark"
-       ? "white"
-       : "black";
+ const { theme, systemTheme } = useTheme();
+ const effectiveTheme = theme === "system" ? systemTheme : theme;
+ const color = effectiveTheme === "dark" ? "white" : "black";

This approach:
- Uses strict equality (===)
- Reduces nesting for easier comprehension
- Makes the intent clearer
apps/web/app/routes/ws/deployments/_components/trace-utils.ts (1)
21-21: Consider replacing any with unknown or a more specific type.

The use of
any for attribute records reduces type safety. Consider using unknown or defining a more specific type (e.g., Record<string, string | number | boolean | null>) to maintain type checking benefits while handling dynamic attributes.

Apply this diff to improve type safety:
- attributes: Record<string, any> | null;
+ attributes: Record<string, unknown> | null;
  events: Array<{
    name: string;
    timestamp: string;
-   attributes: Record<string, any>;
+   attributes: Record<string, unknown>;
  }> | null;

Also applies to: 25-25
apps/workspace-engine/pkg/workspace/releasemanager/deployment/planner.go (1)
137-199: Record the successful planning decision in the trace.

We only call
planning.MakeDecision on the two rejection paths. When a deployable version is found, the planning trace terminates without a decision node, so the traces UI can’t show which version was approved. Emit the positive decision as soon as we identify the deployable version so the trace remains complete.

@@
 span.SetAttributes(
 attribute.String("deployable_version.id", deployableVersion.Id),
 attribute.String("deployable_version.tag", deployableVersion.Tag),
 attribute.String("deployable_version.created_at", deployableVersion.CreatedAt.Format("2006-01-02T15:04:05Z")),
 )
+ if planning != nil {
+ display := deployableVersion.Name
+ if display == "" {
+ display = deployableVersion.Tag
+ }
+
+ planning.MakeDecision(fmt.Sprintf("Deploy version %s", display), trace.DecisionApproved)
+ }
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (14)
- apps/web/app/components/EngineProvider.tsx (3 hunks)
- apps/web/app/routes/ws/deployments/_components/trace-utils.ts (1 hunks)
- apps/web/app/routes/ws/jobs/_components/JobActions.tsx (4 hunks)
- apps/workspace-engine/pkg/workspace/releasemanager/deployment/executor.go (5 hunks)
- apps/workspace-engine/pkg/workspace/releasemanager/deployment/planner.go (9 hunks)
- apps/workspace-engine/pkg/workspace/releasemanager/policy/evaluator/skipdeployed/skipdeplyed.go (1 hunks)
- apps/workspace-engine/pkg/workspace/releasemanager/trace/memory_store_test.go (1 hunks)
- apps/workspace-engine/pkg/workspace/releasemanager/trace/phases.go (4 hunks)
- apps/workspace-engine/pkg/workspace/releasemanager/trace/phases_test.go (1 hunks)
- apps/workspace-engine/pkg/workspace/releasemanager/trace/recorder.go (5 hunks)
- apps/workspace-engine/pkg/workspace/workspace.go (2 hunks)
- apps/workspace-engine/test/e2e/engine_trace_test.go (1 hunks)
- apps/workspace-engine/test/integration/trace_helpers.go (1 hunks)
- packages/db/drizzle/meta/_journal.json (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (3)
- packages/db/drizzle/meta/_journal.json
- apps/web/app/routes/ws/jobs/_components/JobActions.tsx
- apps/workspace-engine/test/e2e/engine_trace_test.go
🧰 Additional context used
📓 Path-based instructions (4)
apps/workspace-engine/**/*.go
📄 CodeRabbit inference engine (apps/workspace-engine/CLAUDE.md)
apps/workspace-engine/**/*.go: Do not add extraneous inline comments that state the obvious
Do not add comments that simply restate what the code does
Do not add comments for standard Go patterns (e.g., noting WaitGroup or semaphore usage)
Write comments that explain why, document complex logic/algorithms, provide non-obvious context, include TODO/FIXME, and document exported functions/types/methods
Files:
apps/workspace-engine/pkg/workspace/releasemanager/deployment/executor.go
apps/workspace-engine/pkg/workspace/workspace.go
apps/workspace-engine/pkg/workspace/releasemanager/trace/recorder.go
apps/workspace-engine/pkg/workspace/releasemanager/trace/phases.go
apps/workspace-engine/pkg/workspace/releasemanager/trace/phases_test.go
apps/workspace-engine/pkg/workspace/releasemanager/trace/memory_store_test.go
apps/workspace-engine/pkg/workspace/releasemanager/policy/evaluator/skipdeployed/skipdeplyed.go
apps/workspace-engine/pkg/workspace/releasemanager/deployment/planner.go
apps/workspace-engine/test/integration/trace_helpers.go
**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
**/*.{ts,tsx}: Use TypeScript with explicit types (prefer interfaces for public APIs)
Import styles: Use named imports, group imports by source (std lib > external > internal)
Consistent type imports: import type { Type } from "module"
Prefer async/await over raw promises
Handle errors explicitly (use try/catch and typed error responses)
Files:
apps/web/app/components/EngineProvider.tsx
apps/web/app/routes/ws/deployments/_components/trace-utils.ts
⚙️ CodeRabbit configuration file
**/*.{ts,tsx}: Note on Error Handling:
Avoid strict enforcement of try/catch blocks. Code may use early returns, Promise chains (.then().catch()), or other patterns for error handling. These are acceptable as long as they maintain clarity and predictability.
Files:
apps/web/app/components/EngineProvider.tsx
apps/web/app/routes/ws/deployments/_components/trace-utils.ts
**/*.{js,jsx,ts,tsx,json,md,yml,yaml}
📄 CodeRabbit inference engine (CLAUDE.md)
Formatting: Prettier is used with
@ctrlplane/prettier-config
Files:
apps/web/app/components/EngineProvider.tsx
apps/web/app/routes/ws/deployments/_components/trace-utils.ts
apps/workspace-engine/**/*_test.go
📄 CodeRabbit inference engine (apps/workspace-engine/CLAUDE.md)
Follow the existing test structure used in *_test.go files
Files:
apps/workspace-engine/pkg/workspace/releasemanager/trace/phases_test.go
apps/workspace-engine/pkg/workspace/releasemanager/trace/memory_store_test.go
🧠 Learnings (9)
📓 Common learnings
Learnt from: adityachoudhari26
Repo: ctrlplanedev/ctrlplane PR: 696
File: apps/workspace-engine/pkg/workspace/releasemanager/deployment/executor.go:63-67
Timestamp: 2025-10-24T00:02:29.723Z
Learning: In workspace-engine replay mode (apps/workspace-engine/pkg/workspace/releasemanager/deployment/executor.go), jobs are created with Pending status even during replay because job updates are still accepted and processed. If a job remains Pending after replay completes, it genuinely is still pending because no updates occurred during replay.
📚 Learning: 2025-10-24T00:02:29.723Z
Learnt from: adityachoudhari26
Repo: ctrlplanedev/ctrlplane PR: 696
File: apps/workspace-engine/pkg/workspace/releasemanager/deployment/executor.go:63-67
Timestamp: 2025-10-24T00:02:29.723Z
Learning: In workspace-engine replay mode (apps/workspace-engine/pkg/workspace/releasemanager/deployment/executor.go), jobs are created with Pending status even during replay because job updates are still accepted and processed. If a job remains Pending after replay completes, it genuinely is still pending because no updates occurred during replay.
Applied to files:
apps/workspace-engine/pkg/workspace/releasemanager/deployment/executor.go
📚 Learning: 2025-08-12T18:13:54.630Z
Learnt from: CR
Repo: ctrlplanedev/ctrlplane PR: 0
File: apps/workspace-engine/CLAUDE.md:0-0
Timestamp: 2025-08-12T18:13:54.630Z
Learning: Applies to apps/workspace-engine/**/*.go : Write comments that explain why, document complex logic/algorithms, provide non-obvious context, include TODO/FIXME, and document exported functions/types/methods
Applied to files:
apps/workspace-engine/pkg/workspace/workspace.go
apps/workspace-engine/test/integration/trace_helpers.go
📚 Learning: 2025-08-12T18:13:54.630Z
Learnt from: CR
Repo: ctrlplanedev/ctrlplane PR: 0
File: apps/workspace-engine/CLAUDE.md:0-0
Timestamp: 2025-08-12T18:13:54.630Z
Learning: Applies to apps/workspace-engine/**/*.go : Do not add comments for standard Go patterns (e.g., noting WaitGroup or semaphore usage)
Applied to files:
apps/workspace-engine/pkg/workspace/workspace.go
apps/workspace-engine/test/integration/trace_helpers.go
📚 Learning: 2025-08-12T18:13:54.630Z
Learnt from: CR
Repo: ctrlplanedev/ctrlplane PR: 0
File: apps/workspace-engine/CLAUDE.md:0-0
Timestamp: 2025-08-12T18:13:54.630Z
Learning: Applies to apps/workspace-engine/**/*.go : Do not add comments that simply restate what the code does
Applied to files:
apps/workspace-engine/pkg/workspace/workspace.go
📚 Learning: 2025-08-12T18:13:54.630Z
Learnt from: CR
Repo: ctrlplanedev/ctrlplane PR: 0
File: apps/workspace-engine/CLAUDE.md:0-0
Timestamp: 2025-08-12T18:13:54.630Z
Learning: Applies to apps/workspace-engine/**/*.go : Do not add extraneous inline comments that state the obvious
Applied to files:
apps/workspace-engine/pkg/workspace/workspace.go
📚 Learning: 2025-08-12T18:13:54.630Z
Learnt from: CR
Repo: ctrlplanedev/ctrlplane PR: 0
File: apps/workspace-engine/CLAUDE.md:0-0
Timestamp: 2025-08-12T18:13:54.630Z
Learning: Applies to apps/workspace-engine/**/*_test.go : Follow the existing test structure used in *_test.go files
Applied to files:
apps/workspace-engine/pkg/workspace/releasemanager/trace/phases_test.go
apps/workspace-engine/pkg/workspace/releasemanager/trace/memory_store_test.go
apps/workspace-engine/pkg/workspace/releasemanager/deployment/planner.go
apps/workspace-engine/test/integration/trace_helpers.go
📚 Learning: 2025-08-12T18:13:54.630Z
Learnt from: CR
Repo: ctrlplanedev/ctrlplane PR: 0
File: apps/workspace-engine/CLAUDE.md:0-0
Timestamp: 2025-08-12T18:13:54.630Z
Learning: Applies to apps/workspace-engine/pkg/model/selector/**/*_test.go : Write comprehensive, data-driven tests for new condition types
Applied to files:
apps/workspace-engine/pkg/workspace/releasemanager/trace/phases_test.go
apps/workspace-engine/pkg/workspace/releasemanager/trace/memory_store_test.go
📚 Learning: 2025-08-12T18:13:54.630Z
Learnt from: CR
Repo: ctrlplanedev/ctrlplane PR: 0
File: apps/workspace-engine/CLAUDE.md:0-0
Timestamp: 2025-08-12T18:13:54.630Z
Learning: Applies to apps/workspace-engine/pkg/model/selector/**/*_test.go : Use table-driven tests for all condition types
Applied to files:
apps/workspace-engine/pkg/workspace/releasemanager/trace/phases_test.go
🧬 Code graph analysis (8)
apps/workspace-engine/pkg/workspace/releasemanager/deployment/executor.go (7)
- apps/workspace-engine/pkg/oapi/oapi.gen.go (7): Release (493-500), Job (318-330), Id (75-75), ExternalId (74-74), Status (81-81), JobStatusInvalidJobAgent (64-64), UpdatedAt (82-82)
- apps/workspace-engine/pkg/workspace/releasemanager/trace/recorder.go (1): ReconcileTarget (15-37)
- apps/workspace-engine/pkg/workspace/releasemanager/trace/phases.go (3): Job (293-298), ExecutionPhase (196-200), Action (324-328)
- apps/workspace-engine/pkg/workspace/releasemanager/trace/types.go (3): StepResultFail (56-56), StepResultPass (55-55), Status (68-68)
- apps/workspace-engine/pkg/workspace/releasemanager/trace/token.go (1): GenerateDefaultTraceToken (42-44)
- apps/workspace-engine/pkg/workspace/releasemanager/deployment/jobs/dispatcher.go (1): ErrUnsupportedJobAgent (25-25)
- apps/workspace-engine/pkg/workspace/store/jobs.go (1): Jobs (17-20)
apps/web/app/components/EngineProvider.tsx (1)
- apps/web/app/components/ThemeProvider.tsx (1): useTheme (65-68)
apps/workspace-engine/pkg/workspace/workspace.go (4)
- apps/workspace-engine/pkg/workspace/releasemanager/manager.go (2): New (41-57), PersistenceStore (37-37)
- apps/workspace-engine/pkg/workspace/opts.go (1): WorkspaceOption (11-11)
- apps/workspace-engine/pkg/workspace/releasemanager/trace/memory_store.go (1): NewInMemoryStore (17-21)
- apps/workspace-engine/pkg/workspace/releasemanager/trace/types.go (1): PersistenceStore (78-80)
apps/workspace-engine/pkg/workspace/releasemanager/trace/phases.go (1)
- apps/workspace-engine/pkg/workspace/releasemanager/trace/types.go (3): PhaseExecution (19-19), NodeTypeAction (32-32), StatusRunning (71-71)
apps/workspace-engine/pkg/workspace/releasemanager/trace/phases_test.go (3)
- apps/workspace-engine/pkg/workspace/releasemanager/trace/recorder.go (1): NewReconcileTarget (40-42)
- apps/workspace-engine/pkg/workspace/releasemanager/trace/types.go (16): ResultAllowed (39-39), StatusCompleted (72-72), NodeTypeEvaluation (29-29), ResultBlocked (40-40), DecisionApproved (63-63), DecisionRejected (64-64), CheckResultPass (47-47), NodeTypeCheck (30-30), CheckResultFail (48-48), StepResultPass (55-55), StepResultFail (56-56), PhaseAction (20-20), PhaseReconciliation (16-16), PhasePlanning (17-17), PhaseEligibility (18-18), PhaseExecution (19-19)
- apps/workspace-engine/pkg/workspace/releasemanager/trace/token.go (1): ParseTraceToken (103-109)
apps/workspace-engine/pkg/workspace/releasemanager/trace/memory_store_test.go (3)
- apps/workspace-engine/pkg/workspace/releasemanager/trace/memory_store.go (1): NewInMemoryStore (17-21)
- apps/workspace-engine/pkg/workspace/releasemanager/trace/recorder.go (1): NewReconcileTarget (40-42)
- apps/workspace-engine/pkg/workspace/releasemanager/trace/types.go (2): StatusCompleted (72-72), ResultAllowed (39-39)
apps/workspace-engine/pkg/workspace/releasemanager/deployment/planner.go (4)
- apps/workspace-engine/pkg/workspace/releasemanager/trace/recorder.go (1): ReconcileTarget (15-37)
- apps/workspace-engine/pkg/workspace/releasemanager/trace/phases.go (1): PlanningPhase (12-16)
- apps/workspace-engine/pkg/workspace/releasemanager/trace/types.go (3): DecisionRejected (64-64), ResultAllowed (39-39), ResultBlocked (40-40)
- apps/workspace-engine/pkg/oapi/oapi.gen.go (3): DeploymentVersion (196-207), RuleEvaluation (584-608), Id (75-75)
apps/workspace-engine/test/integration/trace_helpers.go (2)
- apps/workspace-engine/test/integration/workspace.go (1): TestWorkspace (31-36)
- apps/workspace-engine/pkg/workspace/releasemanager/trace/types.go (2): Phase (13-13), NodeType (25-25)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (6)
- GitHub Check: Typecheck
- GitHub Check: Lint
- GitHub Check: tests
- GitHub Check: build (linux/amd64)
- GitHub Check: build-and-push (linux/amd64)
- GitHub Check: build (linux/amd64)
🔇 Additional comments (13)
apps/workspace-engine/pkg/workspace/workspace.go (1)
7-29: LGTM! Default in-memory store prevents nil panic.

The initialization sequence now provides a safe default (`trace.NewInMemoryStore()`) and applies options before creating the release manager. This addresses the previous concern about potential nil `traceStore` panics.

Also applies to: 39-39
apps/web/app/components/EngineProvider.tsx (2)
46-46: LGTM!

The color prop is correctly passed to the Grid component and will dynamically adapt the loading spinner to the current theme.
7-7: No changes needed to the import statement.

The import from `next-themes` is correct and necessary. EngineProvider requires the `systemTheme` property returned by the next-themes hook to handle the "system" theme option (lines 30-32). The local ThemeProvider only provides `{ theme, setTheme }` and does not support system theme detection, so it cannot be substituted here. The apparent inconsistency in the codebase is intentional: the local ThemeProvider provides a simple light/dark toggle, while next-themes is used where system theme detection is needed.

Likely an incorrect or invalid review comment.
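The "system" handling described above can be sketched as follows. This is a hypothetical helper, not the actual EngineProvider code; the names `resolveTheme` and `ThemeSetting` are assumptions for illustration:

```typescript
// When the user selects "system", `theme` alone does not say which palette
// to render; the OS-level preference from `systemTheme` breaks the tie.
type ThemeSetting = "light" | "dark" | "system";

function resolveTheme(
  theme: ThemeSetting,
  systemTheme: "light" | "dark",
): "light" | "dark" {
  // Defer to the OS preference only when the user picked "system".
  return theme === "system" ? systemTheme : theme;
}
```

This is why a hook that only returns `{ theme, setTheme }` cannot substitute for next-themes here: it has no `systemTheme` to fall back on.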
apps/web/app/routes/ws/deployments/_components/trace-utils.ts (10)
37-83: LGTM! Tree building logic is solid.

The function correctly builds a tree from flat spans, handles orphaned nodes gracefully, and sorts children appropriately. The defensive `new Date()` calls on line 74 are acceptable for handling data that might not perfectly match the type contract.
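As a minimal sketch of the approach being approved here (field names and orphan policy are assumptions, not the actual trace-utils code):

```typescript
// Build a tree from a flat span list. Spans whose parent is missing
// ("orphans") are promoted to roots instead of being dropped.
interface Span {
  id: string;
  parentId?: string;
  startTime: string;
}
interface SpanNode extends Span {
  children: SpanNode[];
}

function buildTree(spans: Span[]): SpanNode[] {
  const nodes = new Map<string, SpanNode>(
    spans.map((s) => [s.id, { ...s, children: [] }]),
  );
  const roots: SpanNode[] = [];
  for (const node of nodes.values()) {
    const parent = node.parentId ? nodes.get(node.parentId) : undefined;
    if (parent) parent.children.push(node);
    else roots.push(node); // root span or orphan
  }
  // Defensive Date parsing: tolerate strings that are not strictly ISO.
  for (const node of nodes.values()) {
    node.children.sort(
      (a, b) =>
        new Date(a.startTime).getTime() - new Date(b.startTime).getTime(),
    );
  }
  return roots;
}
```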
88-93: LGTM!

Duration calculation correctly handles in-progress spans by returning null when `endTime` is not set.
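The null-for-in-progress rule amounts to something like the following sketch (the signature is an assumption):

```typescript
// Returns the span duration in milliseconds, or null while the span is
// still running (no endTime yet), so callers can render "in progress".
function spanDurationMs(startTime: string, endTime?: string): number | null {
  if (!endTime) return null;
  return new Date(endTime).getTime() - new Date(startTime).getTime();
}
```

Returning null (rather than 0 or a duration-so-far) keeps "unknown" distinct from "instantaneous" at the call site.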
98-100: LGTM!

Clean implementation with appropriate user-facing messaging for in-progress spans.
120-136: LGTM!

Relative time formatting handles edge cases well, including future timestamps.
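The future-timestamp edge case mentioned above can be handled like this sketch (names, units, and the exact future-case wording are assumptions):

```typescript
// Formats a timestamp relative to `now`. Timestamps ahead of `now`
// (clock skew, server-side times) get an explicit label instead of a
// nonsensical negative duration.
function relativeTime(iso: string, now: Date = new Date()): string {
  const deltaMs = now.getTime() - new Date(iso).getTime();
  if (deltaMs < 0) return "in the future";
  const s = Math.floor(deltaMs / 1000);
  if (s < 60) return `${s}s ago`;
  const m = Math.floor(s / 60);
  if (m < 60) return `${m}m ago`;
  return `${Math.floor(m / 60)}h ago`;
}
```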
141-163: LGTM!

Status-to-color mapping is intuitive and covers the main status categories appropriately.

168-186: LGTM!

Phase color mapping follows a logical color scheme that helps differentiate deployment phases.
191-203: LGTM!

Recursive traversal correctly collects all descendants from a given node.
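A recursive descendant collection of the kind approved here looks roughly like this (a sketch with an assumed minimal node shape, not the actual implementation):

```typescript
// Collects every descendant of `node` (children, grandchildren, ...)
// in depth-first order; the starting node itself is excluded.
interface TreeNode {
  id: string;
  children: TreeNode[];
}

function collectDescendants(node: TreeNode): TreeNode[] {
  const out: TreeNode[] = [];
  for (const child of node.children) {
    out.push(child, ...collectDescendants(child));
  }
  return out;
}
```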
208-214: LGTM!

Recursive span counting is straightforward and correct.
219-235: LGTM!

Proper type guards ensure safe extraction of context attributes.
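Type-guarded attribute extraction of this kind typically looks like the following sketch (the helper name and attribute shape are assumptions):

```typescript
// Span attributes arrive as untyped key/value pairs; a runtime check
// narrows `unknown` to string before the value reaches the UI.
function getStringAttr(
  attrs: Record<string, unknown>,
  key: string,
): string | undefined {
  const v = attrs[key];
  return typeof v === "string" ? v : undefined;
}
```

Returning `undefined` for non-string values keeps rendering code free of `as` casts.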
240-294: LGTM!

The function comprehensively extracts display information from spans, handling various attribute naming conventions and preventing duplicate information in the display text.
```ts
return new Intl.DateTimeFormat("en-US", {
  month: "short",
  day: "numeric",
  hour: "2-digit",
  minute: "2-digit",
  second: "2-digit",
  hour12: false,
}).format(date);
```
Consider user locale for internationalization.
The hard-coded "en-US" locale may not align with user preferences. Consider using the browser's locale or a user-configurable setting for better internationalization support.
For now, you could use the browser's default locale:
- return new Intl.DateTimeFormat("en-US", {
+ return new Intl.DateTimeFormat(undefined, {
month: "short",
day: "numeric",
hour: "2-digit",
minute: "2-digit",
second: "2-digit",
hour12: false,
}).format(date);📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
return new Intl.DateTimeFormat(undefined, {
  month: "short",
  day: "numeric",
  hour: "2-digit",
  minute: "2-digit",
  second: "2-digit",
  hour12: false,
}).format(date);
```
🤖 Prompt for AI Agents
In apps/web/app/routes/ws/deployments/_components/trace-utils.ts around lines 107 to 114, the Intl.DateTimeFormat call is hard-coded to "en-US". Update it to use the browser/user locale instead (e.g., pass undefined or navigator.language), or accept a locale parameter or user setting so formatting respects user preferences. Ensure the function signature and callers are adjusted to optionally accept or provide a locale, falling back to the browser default when none is provided.
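As a sketch of the suggested change, passing `undefined` as the locale lets the runtime pick the host's default (the wrapper name `formatTimestamp` is an assumption):

```typescript
// Undefined locale defers to the runtime/browser default locale,
// so the output adapts to the user's environment.
const formatTimestamp = (date: Date): string =>
  new Intl.DateTimeFormat(undefined, {
    month: "short",
    day: "numeric",
    hour: "2-digit",
    minute: "2-digit",
    second: "2-digit",
    hour12: false,
  }).format(date);
```

The exact output string depends on the host locale, which is the point of the change.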
Summary by CodeRabbit
- New Features
- API
- Documentation