feat: trigger eval from referenced variable update #565
adityachoudhari26 merged 8 commits into main from
Conversation
""" WalkthroughThis update implements comprehensive propagation and tracking of resource variable changes throughout the system. It introduces new helpers and workflow logic to detect, enqueue, and process variable update jobs after resource insertions, updates, and deletions. The changes span API handlers, workers, rule engine utilities, and end-to-end tests to ensure variable state changes trigger correct downstream evaluations. Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant API as API Handler
    participant DB as Database
    participant JD as Job Dispatcher
    participant EW as Event Worker
    participant RE as Rule Engine
    API->>DB: Fetch resource (with metadata, variables)
    API->>DB: Upsert resource
    API->>JD: Enqueue UpdatedResource job
    API->>JD: Enqueue UpdateResourceVariable job(s) for affected variables
    JD->>EW: Dispatch UpdateResourceVariable job(s)
    EW->>DB: Query dependent child resources
    EW->>RE: Dispatch evaluation jobs for affected release targets
```
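The event-worker step of this flow can be sketched roughly as follows. The dependency-injected names (`getResourceChildren`, `findReleaseTargets`, `dispatchEvaluateJobs`) mirror the helpers described in this PR but are illustrative shapes, not the actual codebase signatures:

```typescript
// Hypothetical sketch of the UpdateResourceVariable worker: resolve dependent
// child resources, then dispatch evaluation jobs for their release targets.
type ReleaseTarget = { id: string; resourceId: string };

interface WorkerDeps {
  getResourceChildren: (resourceId: string) => Promise<{ source: { id: string } }[]>;
  findReleaseTargets: (resourceIds: string[]) => Promise<ReleaseTarget[]>;
  dispatchEvaluateJobs: (targets: ReleaseTarget[]) => Promise<void>;
}

export const handleUpdateResourceVariable = async (
  deps: WorkerDeps,
  job: { data: { resourceId: string } },
): Promise<number> => {
  const children = await deps.getResourceChildren(job.data.resourceId);
  if (children.length === 0) return 0; // no dependents → nothing to evaluate

  const targets = await deps.findReleaseTargets(children.map((c) => c.source.id));
  await deps.dispatchEvaluateJobs(targets);
  return targets.length;
};
```

The dependency injection here is only for testability of the sketch; the real worker queries the database directly.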
Possibly related PRs
Suggested reviewers
Poem
📜 Recent review details
Configuration used: .coderabbit.yaml
📒 Files selected for processing (1)
🚧 Files skipped from review as they are similar to previous changes (1)
⏰ Context from checks skipped due to timeout of 90000ms (4)
Actionable comments posted: 11
🧹 Nitpick comments (6)
apps/event-worker/src/workers/update-resource-variable.ts (2)
18-20: Remove noisy debug prints of relationship graphs

`dependentResources` can grow large and include nested objects – dumping it floods logs and again may surface sensitive metadata.

Either delete the statement or log only counts / identifiers at debug level:

```diff
- console.log(dependentResources);
+ logger.debug({ resourceId, deps: dependentResources.length }, "resolved dependent resources");
```
21-41: Skip DB round-trip when no dependent resources & use `addBulk` for large result sets

- `inArray(col, [])` usually compiles to `FALSE`, but some ORMs emit invalid SQL (`WHERE col IN ()`). Early-return to avoid an unnecessary query.
- The loop inside `dispatchEvaluateJobs` calls `q.add` one by one. When `affectedReleaseTargets` is large we perform N network calls to Redis. Construct an array of `{ name, data }` objects and call `q.addBulk`, mirroring the surrounding codebase patterns (e.g. the delete-resource worker). This cuts latency dramatically.

Example patch:

```diff
+ if (dependentResources.length === 0) return;
  ...
- dispatchEvaluateJobs(affectedReleaseTargets);
+ await dispatchEvaluateJobs(affectedReleaseTargets);
```

(You'll also need to expose/implement an `addBulk` variant inside `dispatchEvaluateJobs`.)
[performance]

packages/db/src/queries/relationships/get-resource-children.ts (2)
12-18: Return type is `any`; expose an explicit interface

Public helpers in `packages/db` should communicate clear contracts. Consider declaring a type such as:

```ts
export interface ResourceChild {
  ruleId: string;
  type: DependencyType; // import from schema
  source: typeof schema.resource.$inferSelect;
  reference: string;
}
```

and annotate the promise:

```ts
export const getResourceChildren = async (
  tx: Tx,
  resourceId: string,
): Promise<ResourceChild[]> => { … };
```

This improves IntelliSense, narrows accidental misuse, and surfaces breaking changes at compile-time.
122-128: Prefer a structured logger over `console.log`

Raw `console.log` statements:

- Bypass your existing logging framework (if any) – you lose correlation IDs, log levels, redaction rules, etc.
- Risk leaking PII to stdout in cloud environments.

Use a structured logger (e.g. `pino`, `winston`) and respect log levels:

```ts
logger.debug({ targetMeta }, "getResourceChildren: target metadata");
```

…and remove them in the hot path when not required.
e2e/tests/api/resource-variables.spec.ts (2)
528-543: Avoid fixed sleeps – poll for the desired condition

`page.waitForTimeout(5_000)` introduces brittle timing-based flakiness; CI environments with high load can still fail. Replace with polling helpers, e.g.

```ts
await expect.poll(
  async () => {
    const { data } = await api.GET(
      "/v1/resources/{resourceId}/release-targets",
      { params: { path: { resourceId: sourceResourceId! } } },
    );
    return data?.length ?? 0;
  },
  { timeout: 30_000, message: "release targets should appear" },
).toBeGreaterThan(0);
```

This keeps the suite reliable without inflating maximum runtime.
183-240: Minor: random `reference` may collide across parallel test runs

`faker.string.alphanumeric(10).toLowerCase()` draws from a limited pool of values; in highly parallel CI jobs, the chance of reusing the same reference across test cases or other branches rises, which can create noisy data races in shared staging back-ends.

Appending the Playwright worker index or a timestamp helps:

```ts
const reference = `${faker.string.alphanumeric(10).toLowerCase()}-${Date.now()}`;
```
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`pnpm-lock.yaml` is excluded by `!**/pnpm-lock.yaml`
📒 Files selected for processing (17)
- apps/event-worker/src/workers/delete-resource.ts (2 hunks)
- apps/event-worker/src/workers/update-resource-variable.ts (1 hunks)
- apps/webservice/src/app/api/v1/resources/[resourceId]/route.ts (2 hunks)
- apps/webservice/src/app/api/v1/resources/route.ts (3 hunks)
- e2e/tests/api/resource-variables.spec.ts (8 hunks)
- packages/api/src/router/resource-variables.ts (4 hunks)
- packages/db/src/queries/index.ts (1 hunks)
- packages/db/src/queries/relationships/get-resource-children.ts (1 hunks)
- packages/db/src/queries/relationships/get-resource-parents.ts (1 hunks)
- packages/db/src/queries/relationships/index.ts (1 hunks)
- packages/db/src/upsert-resources.ts (2 hunks)
- packages/events/src/types.ts (1 hunks)
- packages/job-dispatch/package.json (1 hunks)
- packages/job-dispatch/src/resource/handle-provider-scan.ts (4 hunks)
- packages/rule-engine/src/index.ts (1 hunks)
- packages/rule-engine/src/manager/variables/affected-variables.ts (1 hunks)
- packages/rule-engine/src/manager/variables/index.ts (1 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.{ts,tsx}: Note on Error Handling:
Avoid strict enforcement of try/catch blocks. Code may use early returns, Promise chains (.then().catch()), or other patterns for error handling. These are acceptable as long as they maintain clarity and predictability.
- packages/rule-engine/src/manager/variables/index.ts
- packages/db/src/queries/relationships/get-resource-parents.ts
- packages/db/src/upsert-resources.ts
- packages/db/src/queries/index.ts
- apps/webservice/src/app/api/v1/resources/route.ts
- packages/events/src/types.ts
- packages/rule-engine/src/manager/variables/affected-variables.ts
- packages/api/src/router/resource-variables.ts
- packages/rule-engine/src/index.ts
- packages/db/src/queries/relationships/index.ts
- apps/event-worker/src/workers/delete-resource.ts
- apps/webservice/src/app/api/v1/resources/[resourceId]/route.ts
- packages/job-dispatch/src/resource/handle-provider-scan.ts
- apps/event-worker/src/workers/update-resource-variable.ts
- packages/db/src/queries/relationships/get-resource-children.ts
- e2e/tests/api/resource-variables.spec.ts
🧬 Code Graph Analysis (3)
apps/event-worker/src/workers/delete-resource.ts (3)
- packages/db/src/common.ts (1): Tx (22-22)
- packages/db/src/schema/resource.ts (1): resource (59-87)
- packages/events/src/index.ts (1): getQueue (28-34)
apps/event-worker/src/workers/update-resource-variable.ts (4)
- packages/events/src/index.ts (1): createWorker (10-25)
- packages/db/src/queries/relationships/get-resource-children.ts (1): getResourceChildren (12-191)
- packages/db/src/client.ts (1): db (15-15)
- apps/event-worker/src/utils/dispatch-evaluate-jobs.ts (1): dispatchEvaluateJobs (5-20)
packages/db/src/queries/relationships/get-resource-children.ts (1)
- packages/db/src/common.ts (1): Tx (22-22)
⏰ Context from checks skipped due to timeout of 90000ms (4)
- GitHub Check: Lint
- GitHub Check: Typecheck
- GitHub Check: build (linux/amd64)
- GitHub Check: build (linux/amd64)
🔇 Additional comments (17)
packages/db/src/queries/relationships/get-resource-parents.ts (1)
5-6: Correct import path adjustments.

The import paths have been updated to reflect the new file location in the relationships directory structure.
packages/job-dispatch/package.json (1)
31-31: Appropriate dependency addition.

The addition of the `@ctrlplane/rule-engine` workspace dependency aligns well with the PR's objective to trigger evaluations from referenced variable updates.

packages/db/src/queries/index.ts (1)
1-1: Good reorganization of relationship queries.

Consolidating the relationship queries exports through a central index file improves code organization and maintainability.
packages/db/src/upsert-resources.ts (1)
175-175: Good enhancement to include variables in query results.

Including `variables: true` alongside metadata in the query results is essential for the PR's objective of tracking resource variable changes throughout the system.

packages/db/src/queries/relationships/index.ts (1)
1-2: Clean and organized export structure

This new index file provides a clean, centralized entry point for all relationship query functions, which improves modularity and makes imports more maintainable across the codebase. The approach follows established patterns for organizing related functionality.
packages/rule-engine/src/manager/variables/index.ts (1)
1-2: Well-structured variable utilities export

This index file effectively consolidates variable-related utilities, providing a single import point that simplifies access to both the new `getAffectedVariables` function and existing reference variable resolution logic. This organization supports the PR's goal of implementing comprehensive variable change tracking and propagation.

packages/rule-engine/src/index.ts (1)
8-8: Export updated to use new variable utilities index

This change correctly updates the export path to use the newly created variables index module, ensuring that both the `getAffectedVariables` function and the previously exported reference variable utilities are now available through the rule engine's public API.

apps/event-worker/src/workers/delete-resource.ts (1)
56-56: Consider transaction consistency for variable update jobs

The variable update jobs are dispatched outside the transaction that handles the resource deletion. While this works, it means that if dispatching the jobs fails, the resource would still be deleted without the necessary downstream variable updates being processed.
Consider whether this is the intended behavior or if the job dispatching should be included within the transaction for atomicity. If the current approach is intentional (to avoid rolling back deletion if post-processing fails), ensure this design decision is documented.
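One hedged way to get both properties — atomic DB writes and no stray jobs after a rollback — is to collect job payloads inside the transaction and enqueue them only after commit. The sketch below uses hypothetical helper names, not the codebase's actual transaction or queue API:

```typescript
// Collect-then-dispatch-after-commit: DB writes happen inside the transaction;
// queue jobs are only gathered, and enqueued iff the transaction committed.
type QueueJob = { name: string; data: unknown };

export const runWithPostCommitJobs = async <T>(
  transaction: (fn: () => Promise<T>) => Promise<T>,
  enqueue: (jobs: QueueJob[]) => Promise<void>,
  work: (collect: (job: QueueJob) => void) => Promise<T>,
): Promise<T> => {
  const pending: QueueJob[] = [];
  // If work() throws, the transaction rolls back and we never reach enqueue().
  const result = await transaction(() => work((job) => pending.push(job)));
  await enqueue(pending); // runs only after a successful commit
  return result;
};
```

This keeps the deletion atomic while still guaranteeing that no variable-update job references a resource state that was rolled back.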
packages/api/src/router/resource-variables.ts (4)
39-40: Implemented variable change propagation for resource creation.

The added code now properly enqueues each newly created resource variable for asynchronous processing, which will trigger downstream evaluations based on this variable.

64-65: Implemented variable change propagation for reference variable creation.

Similar to direct variables, referenced variables now properly trigger the update workflow. This ensures consistency in how all variable types are handled.

99-100: Implemented variable change propagation for variable updates.

Updates to variables now correctly trigger downstream evaluations, which is essential for maintaining consistency when variable values change.

127-128: Implemented variable change propagation for variable deletion.

The system now properly handles variable deletions by enqueuing update jobs, ensuring that dependent systems are notified when variables are removed.
packages/events/src/types.ts (1)
55-55: Updated variable type to use inferred database schema type.

The type for the `UpdateResourceVariable` channel now correctly uses `$inferSelect` from the schema definition, which aligns with the actual database-returned objects being enqueued in the system.
10-10: Added import for variable change detection utility.

The import of `getAffectedVariables` brings in the necessary functionality to identify which variables have changed during resource updates.

79-79: Enhanced resource query to include variables.

The query now fetches the resource's variables alongside metadata, which is necessary for the subsequent variable change detection logic.
101-111: Implemented variable change detection and propagation for resources.

The added code correctly:
- Computes which variables are affected by comparing previous and new variables
- Enqueues update jobs for each affected variable
This ensures that changes to resources properly trigger evaluations for dependent systems that rely on these variables.
packages/rule-engine/src/manager/variables/affected-variables.ts (1)
1-28: Well-implemented utility for detecting variable changes.

The `getAffectedVariables` function efficiently identifies three types of variable changes:
- Deleted variables
- Created variables
- Updated variables (using deep equality comparison)
Good use of `Map` for efficient lookups and lodash's `isEqual` for deep comparison of objects.
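A minimal sketch of the three-way diff described above — the `Variable` type is a simplified stand-in for the real schema type, and `deepEqual` replaces lodash's `isEqual` to keep the example dependency-free:

```typescript
// Illustrative reimplementation of the diffing strategy: variables are keyed
// by `key`, and deep equality decides whether a surviving variable changed.
type Variable = { id: string; key: string; value: unknown };

// Stand-in for lodash's isEqual, sufficient for plain JSON-like values.
const deepEqual = (a: unknown, b: unknown): boolean =>
  JSON.stringify(a) === JSON.stringify(b);

export const getAffectedVariablesSketch = (
  previous: Variable[],
  current: Variable[],
): Variable[] => {
  const prevByKey = new Map(previous.map((v) => [v.key, v]));
  const currByKey = new Map(current.map((v) => [v.key, v]));

  // Deleted: present before, gone now.
  const deleted = previous.filter((v) => !currByKey.has(v.key));
  // Created: present now, absent before.
  const created = current.filter((v) => !prevByKey.has(v.key));
  // Updated: present in both, value changed (deep comparison).
  const updated = current.filter((v) => {
    const prev = prevByKey.get(v.key);
    return prev !== undefined && !deepEqual(prev.value, v.value);
  });

  return [...deleted, ...created, ...updated];
};
```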
```ts
const dispatchVariableUpdateJobs = async (
  tx: Tx,
  resource: SCHEMA.Resource,
) => {
  const variables = await tx.query.resourceVariable.findMany({
    where: eq(SCHEMA.resourceVariable.resourceId, resource.id),
  });

  for (const variable of variables)
    getQueue(Channel.UpdateResourceVariable).add(variable.id, variable);
};
```
🛠️ Refactor suggestion
Add error handling to the variable update job dispatching
The function correctly queries and dispatches update jobs for resource variables, but lacks error handling. If the database query fails or job enqueueing encounters an issue, this could lead to unhandled promise rejections.
Consider adding try/catch blocks to handle potential errors:
```diff
  const dispatchVariableUpdateJobs = async (
    tx: Tx,
    resource: SCHEMA.Resource,
  ) => {
+   try {
      const variables = await tx.query.resourceVariable.findMany({
        where: eq(SCHEMA.resourceVariable.resourceId, resource.id),
      });
      for (const variable of variables)
        getQueue(Channel.UpdateResourceVariable).add(variable.id, variable);
+   } catch (error) {
+     console.error(`Failed to dispatch variable update jobs for resource ${resource.id}:`, error);
+     // Consider whether to rethrow or handle the error differently based on your requirements
+   }
  };
```
```ts
const { data: variable } = job;
console.log(variable);
const { resourceId, key } = variable;
```
🛠️ Refactor suggestion
Avoid leaking variable payloads – replace console.log with structured, redacted logging
variable might contain the raw value of a resource variable, including secrets (sensitive = true). Plain console.log outputs it straight to stdout where it is usually collected by centralised log pipelines. This is a substantial PII / secrets-leak risk.
```diff
- console.log(variable);
+ logger.debug({ id: variable.id, key: variable.key, resourceId }, "processing UpdateResourceVariable job");
```

Use your existing `@ctrlplane/logger` (or similar) and NEVER include the decrypted value.
Committable suggestion skipped: line range outside the PR's diff.
```ts
const affectedVariables = getAffectedVariables(
  resource.variables,
  res.variables,
);

for (const variable of affectedVariables)
  await getQueue(Channel.UpdateResourceVariable).add(
    variable.id,
    variable,
  );
```
🛠️ Refactor suggestion
Batch-enqueue variable jobs instead of await-ing each call
Inside the for … of loop we perform sequential await getQueue().add, which is O(N) network round-trips. Gather the jobs and use addBulk — consistent with the surrounding insert / update / delete logic.
```diff
- for (const variable of affectedVariables)
-   await getQueue(Channel.UpdateResourceVariable).add(
-     variable.id,
-     variable,
-   );
+ if (affectedVariables.length) {
+   await getQueue(Channel.UpdateResourceVariable).addBulk(
+     affectedVariables.map((v) => ({ name: v.id, data: v })),
+   );
+ }
```

Reduces latency and contention under large updates.
[performance]
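The round-trip arithmetic behind this suggestion can be illustrated with a minimal in-memory stand-in for the queue. The `add`/`addBulk` names mirror BullMQ, but this stub is not the real API surface:

```typescript
// N sequential add() calls cost N round-trips; one addBulk() costs a single
// round-trip for the whole batch.
class StubQueue {
  roundTrips = 0;
  jobs: { name: string; data: unknown }[] = [];

  async add(name: string, data: unknown): Promise<void> {
    this.roundTrips += 1; // one network round-trip per job
    this.jobs.push({ name, data });
  }

  async addBulk(batch: { name: string; data: unknown }[]): Promise<void> {
    this.roundTrips += 1; // one round-trip for the whole batch
    this.jobs.push(...batch);
  }
}

export const enqueueOneByOne = async (q: StubQueue, vars: { id: string }[]) => {
  for (const v of vars) await q.add(v.id, v);
};

export const enqueueBatched = async (q: StubQueue, vars: { id: string }[]) => {
  if (vars.length === 0) return; // skip the round-trip entirely when empty
  await q.addBulk(vars.map((v) => ({ name: v.id, data: v })));
};
```

Both helpers enqueue identical jobs; only the number of round-trips differs, which is exactly what the `addBulk` suggestion buys under large updates.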
```ts
console.log(body);

const all = await upsertResources(db, resource.workspaceId, [
  _.merge(resource, body),
]);
const res = all.at(0);

if (res == null) throw new Error("Failed to update resource");

console.log(res.metadata);

console.log(
  Object.fromEntries(res.metadata.map((m) => [m.key, m.value])),
);
```
Debug prints expose request bodies & metadata – switch to logger and redact
console.log(body) and subsequent metadata dumps can reveal secrets or internal structure in prod logs. Replace with structured logger at debug level and avoid printing values for keys flagged sensitive.
```diff
- console.log(body);
  ...
- console.log(res.metadata);
- console.log(
-   Object.fromEntries(res.metadata.map((m) => [m.key, m.value])),
- );
+ log.debug({ bodyKeys: Object.keys(body) }, "patch body received");
+ log.debug(
+   { metadataKeys: res.metadata.map((m) => m.key) },
+   "patched metadata keys",
+ );
```
+ );[security]
```ts
const { variables } = resource;
for (const variable of variables)
  await getQueue(Channel.UpdateResourceVariable).add(
    variable.id,
    variable,
  );
}
```
🛠️ Refactor suggestion
Use bulk enqueue for variables on newly inserted resources
The nested for loops await every add, causing N×M sequential Redis calls. Bundle them:
```diff
- for (const resource of insertedResources) {
-   const { variables } = resource;
-   for (const variable of variables)
-     await getQueue(Channel.UpdateResourceVariable).add(
-       variable.id,
-       variable,
-     );
- }
+ const insertVariableJobs = insertedResources.flatMap((r) =>
+   r.variables.map((v) => ({ name: v.id, data: v })),
+ );
+ await getQueue(Channel.UpdateResourceVariable).addBulk(insertVariableJobs);
```

[performance]
```ts
for (const resource of updatedResources) {
  const { variables } = resource;
  const previousVars = previousVariables[resource.identifier] ?? [];

  const affectedVariables = getAffectedVariables(previousVars, variables);
  for (const variable of affectedVariables)
    await getQueue(Channel.UpdateResourceVariable).add(
      variable.id,
      variable,
    );
}
```
🛠️ Refactor suggestion
Duplicate sequential enqueuing – replace with addBulk and reuse helper
Same performance problem for updated resources; recommend extracting an `enqueueVariableJobs(variables[])` utility to keep the code DRY.
```diff
- for (const resource of updatedResources) {
-   ...
-   for (const variable of affectedVariables)
-     await getQueue(Channel.UpdateResourceVariable).add(
-       variable.id,
-       variable,
-     );
- }
+ const updateVariableJobs = updatedResources.flatMap((r) => {
+   const prev = previousVariables[r.identifier] ?? [];
+   return getAffectedVariables(prev, r.variables).map((v) => ({
+     name: v.id,
+     data: v,
+   }));
+ });
+ await getQueue(Channel.UpdateResourceVariable).addBulk(updateVariableJobs);
```

[performance] [maintainability]
Actionable comments posted: 2
♻️ Duplicate comments (1)
apps/event-worker/src/workers/delete-resource.ts (1)
34-51: 🛠️ Refactor suggestion

Add error handling to the resource relationship processing.
The function correctly queries and dispatches evaluation jobs for affected release targets, but lacks error handling. If database queries fail, this could lead to unhandled promise rejections.
Consider adding try/catch blocks to handle potential errors:
```diff
  const dispatchAffectedTargetJobs = async (
    tx: Tx,
    resource: SCHEMA.Resource,
  ) => {
+   try {
      const affectedResources = await getResourceChildren(tx, resource.id);
      const affectedReleaseTargets = await db
        .selectDistinctOn([SCHEMA.releaseTarget.id])
        .from(SCHEMA.releaseTarget)
        .where(
          inArray(
            SCHEMA.releaseTarget.resourceId,
            affectedResources.map((r) => r.source.id),
          ),
        );
      await dispatchEvaluateJobs(affectedReleaseTargets);
+   } catch (error) {
+     console.error(`Failed to dispatch jobs for affected targets of resource ${resource.id}:`, error);
+     // Consider whether to rethrow or handle the error differently based on your requirements
+   }
  };
```
🧹 Nitpick comments (3)
apps/event-worker/src/workers/delete-resource.ts (1)
66-66: Pass transaction to maintain consistency.

The `dispatchAffectedTargetJobs` function is called with the global `db` instance after the transaction has completed. While this works, it might be preferable to use the transaction inside the block for consistency, especially if you later need to add more operations inside the transaction.

Consider moving this call inside the transaction block:

```diff
  await db.transaction(async (tx) => {
    await softDeleteResource(tx, resource);
    await deleteComputedResources(tx, resource);
    const rts = await deleteReleaseTargets(tx, resource);
    for (const rt of rts)
      getQueue(Channel.DeletedReleaseTarget).add(rt.id, rt, {
        deduplication: { id: rt.id },
      });
+   await dispatchAffectedTargetJobs(tx, resource);
  });
- await dispatchAffectedTargetJobs(db, resource);
```

packages/db/src/queries/relationships/get-resource-children.ts (2)
19-52: Consider extracting complex subquery conditions into helper functions.

The complex `notExists` subqueries for `isMetadataMatchSatisfied` and `isMetadataEqualsSatisfied` make the code harder to read and maintain. These could be extracted into separate helper functions to improve readability.

Example refactoring:

```ts
const createMetadataMatchCondition = (tx: Tx, sourceResource, targetResource) => {
  const sourceMetadata = alias(schema.resourceMetadata, "sourceMetadata");
  const targetMetadata = alias(schema.resourceMetadata, "targetMetadata");
  return notExists(
    // ... existing query logic
  );
};

// Then in getResourceChildren:
const isMetadataMatchSatisfied = createMetadataMatchCondition(tx, sourceResource, targetResource);
```

Also applies to: 54-85
104-150: Consider adding pagination for large resource graphs.

For workspaces with many resources and complex relationships, this query could return a large dataset. Consider implementing pagination or limiting the result set size to prevent potential performance issues.

```diff
  const relationships = await tx
    .selectDistinctOn([targetResource.id, schema.resourceRelationshipRule.id], {
      ruleId: schema.resourceRelationshipRule.id,
      type: schema.resourceRelationshipRule.dependencyType,
      source: sourceResource,
      reference: schema.resourceRelationshipRule.reference,
    })
    .from(sourceResource)
    // ... existing joins and conditions
+   .limit(1000); // Add a reasonable limit to prevent excessive result sets
```
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (8)
- apps/event-worker/src/workers/delete-resource.ts (3 hunks)
- apps/event-worker/src/workers/update-resource-variable.ts (1 hunks)
- apps/event-worker/src/workers/updated-resources/index.ts (2 hunks)
- apps/webservice/src/app/api/v1/resources/[resourceId]/route.ts (3 hunks)
- e2e/tests/api/resource-variables.spec.ts (9 hunks)
- packages/db/src/queries/relationships/get-resource-children.ts (1 hunks)
- packages/db/src/upsert-resources.ts (1 hunks)
- packages/rule-engine/src/manager/variable-manager.ts (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (4)
- packages/db/src/upsert-resources.ts
- apps/webservice/src/app/api/v1/resources/[resourceId]/route.ts
- apps/event-worker/src/workers/update-resource-variable.ts
- e2e/tests/api/resource-variables.spec.ts
🧰 Additional context used
📓 Path-based instructions (1)
**/*.{ts,tsx}: Note on Error Handling:
Avoid strict enforcement of try/catch blocks. Code may use early returns, Promise chains (.then().catch()), or other patterns for error handling. These are acceptable as long as they maintain clarity and predictability.
- packages/rule-engine/src/manager/variable-manager.ts
- apps/event-worker/src/workers/delete-resource.ts
- packages/db/src/queries/relationships/get-resource-children.ts
- apps/event-worker/src/workers/updated-resources/index.ts
🧬 Code Graph Analysis (2)
packages/db/src/queries/relationships/get-resource-children.ts (1)
- packages/db/src/common.ts (1): Tx (22-22)
apps/event-worker/src/workers/updated-resources/index.ts (3)
- packages/db/src/queries/relationships/get-resource-children.ts (1): getResourceChildren (12-153)
- packages/db/src/client.ts (1): db (15-15)
- apps/event-worker/src/utils/dispatch-evaluate-jobs.ts (1): dispatchEvaluateJobs (5-20)
⏰ Context from checks skipped due to timeout of 90000ms (4)
- GitHub Check: Typecheck
- GitHub Check: Lint
- GitHub Check: build (linux/amd64)
- GitHub Check: build (linux/amd64)
🔇 Additional comments (5)
packages/rule-engine/src/manager/variable-manager.ts (2)
57-57: Consistent handling of null values improves data integrity.

Normalizing null values to empty strings during variable snapshot insertion ensures consistent data representation in the database.

73-76: Good consistency – matching null handling in both insertion and querying.

This change correctly applies the same null-to-empty-string conversion logic when querying for variable snapshots as is used during insertion, ensuring consistent behavior across database operations.
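The normalization idea can be sketched in a few lines — the `SnapshotRow` type and helper names below are illustrative, not the actual snapshot schema:

```typescript
// Nulls are coerced to "" on both the write and read paths, so the same
// variable state always serializes and compares identically.
type SnapshotRow = { key: string; value: string };

const normalizeValue = (value: string | null | undefined): string => value ?? "";

export const toSnapshotRow = (key: string, value: string | null): SnapshotRow => ({
  key,
  value: normalizeValue(value),
});
```

Applying the same coercion in both places is what prevents a `null` written yesterday from mismatching an `""` queried today.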
packages/db/src/queries/relationships/get-resource-children.ts (3)
7-11: Well-documented function with clear purpose.

The JSDoc comments clearly explain the purpose of the function and the parameters it takes, which is excellent for maintainability.

13-17: Good use of table aliases to prevent naming conflicts.

Creating aliases for tables that are joined multiple times helps prevent naming conflicts and makes the query logic clearer.

139-146: Good documentation of implementation details.

The comment clearly explains why deleted target resources aren't filtered out, which is important for understanding the function's behavior when called after resource deletion.
```ts
const affectedReleaseTargets = await getAffectedReleaseTargets(resource.id);
await dispatchEvaluateJobs(affectedReleaseTargets);
```
🛠️ Refactor suggestion
Add error handling for job dispatching operations.
If `getAffectedReleaseTargets` or `dispatchEvaluateJobs` fails, it could prevent the worker from completing successfully. Adding error handling here ensures the worker can continue even if these operations fail.
```diff
- const affectedReleaseTargets = await getAffectedReleaseTargets(resource.id);
- await dispatchEvaluateJobs(affectedReleaseTargets);
+ try {
+   const affectedReleaseTargets = await getAffectedReleaseTargets(resource.id);
+   await dispatchEvaluateJobs(affectedReleaseTargets);
+ } catch (error) {
+   span.recordException(error);
+   console.error(`Failed to process affected release targets for resource ${resource.id}:`, error);
+   // Worker continues execution despite this error
+ }
```
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
try {
  const affectedReleaseTargets = await getAffectedReleaseTargets(resource.id);
  await dispatchEvaluateJobs(affectedReleaseTargets);
} catch (error) {
  span.recordException(error);
  console.error(`Failed to process affected release targets for resource ${resource.id}:`, error);
  // Worker continues execution despite this error
}
```
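The inline try/catch suggested above can also be factored into a small reusable helper. This is a hedged sketch with hypothetical names, not code from this repo: it runs an async step, logs a failure, and returns a fallback so the worker continues.

```typescript
// Hypothetical helper implementing the "continue despite this error"
// pattern: run an async step, log any failure with a label, and fall
// back to a default value instead of failing the whole worker.
const runStep = async <T>(
  label: string,
  step: () => Promise<T>,
  fallback: T,
): Promise<T> => {
  try {
    return await step();
  } catch (error) {
    console.error(`Step "${label}" failed:`, error);
    return fallback;
  }
};
```

Usage in the worker would then read as a single line, e.g. `await runStep("affected-release-targets", () => getAffectedReleaseTargets(resource.id), [])`, keeping the happy path flat.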
```ts
const getAffectedReleaseTargets = async (resourceId: string) => {
  const resourceChildren = await getResourceChildren(db, resourceId);
  const releaseTargets = await db
    .selectDistinctOn([schema.releaseTarget.id])
    .from(schema.releaseTarget)
    .where(
      and(
        inArray(
          schema.releaseTarget.resourceId,
          resourceChildren.map((dr) => dr.source.id),
        ),
      ),
    );

  return releaseTargets;
};
```
🛠️ Refactor suggestion
Add error handling to prevent unhandled promise rejections.
The getAffectedReleaseTargets function handles database queries without error handling. This could lead to unhandled promise rejections if database operations fail.
Consider adding try/catch blocks:

```diff
 const getAffectedReleaseTargets = async (resourceId: string) => {
+  try {
     const resourceChildren = await getResourceChildren(db, resourceId);
     const releaseTargets = await db
       .selectDistinctOn([schema.releaseTarget.id])
       .from(schema.releaseTarget)
       .where(
         and(
           inArray(
             schema.releaseTarget.resourceId,
             resourceChildren.map((dr) => dr.source.id),
           ),
         ),
       );
     return releaseTargets;
+  } catch (error) {
+    console.error(`Failed to get affected release targets for resource ${resourceId}:`, error);
+    return [];
+  }
 };
```

📝 Committable suggestion
```ts
const getAffectedReleaseTargets = async (resourceId: string) => {
  try {
    const resourceChildren = await getResourceChildren(db, resourceId);
    const releaseTargets = await db
      .selectDistinctOn([schema.releaseTarget.id])
      .from(schema.releaseTarget)
      .where(
        and(
          inArray(
            schema.releaseTarget.resourceId,
            resourceChildren.map((dr) => dr.source.id),
          ),
        ),
      );
    return releaseTargets;
  } catch (error) {
    console.error(
      `Failed to get affected release targets for resource ${resourceId}:`,
      error,
    );
    return [];
  }
};
```
Actionable comments posted: 1
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
- e2e/tests/api/resource-variables.spec.ts (9 hunks)
- packages/db/drizzle/0101_mysterious_redwing.sql (1 hunk)
- packages/db/drizzle/meta/_journal.json (1 hunk)
- packages/db/src/schema/release.ts (1 hunk)
- packages/rule-engine/src/manager/variable-manager.ts (2 hunks)
✅ Files skipped from review due to trivial changes (2)
- packages/db/drizzle/0101_mysterious_redwing.sql
- packages/db/drizzle/meta/_journal.json
🚧 Files skipped from review as they are similar to previous changes (2)
- packages/rule-engine/src/manager/variable-manager.ts
- e2e/tests/api/resource-variables.spec.ts
🧰 Additional context used
📓 Path-based instructions (1)
`**/*.{ts,tsx}`: **Note on Error Handling:** Avoid strict enforcement of try/catch blocks. Code may use early returns, Promise chains (`.then().catch()`), or other patterns for error handling. These are acceptable as long as they maintain clarity and predictability.
packages/db/src/schema/release.ts
⏰ Context from checks skipped due to timeout of 90000ms (4)
- GitHub Check: build (linux/amd64)
- GitHub Check: Lint
- GitHub Check: Typecheck
- GitHub Check: build (linux/amd64)
```diff
   .references(() => workspace.id, { onDelete: "cascade" }),

-  value: jsonb("value").$type<any>().notNull(),
+  value: jsonb("value").$type<any>(),
```
💡 Verification agent
🧩 Analysis chain
Removed the notNull constraint from the value field
This change allows the `value` field in the `variableValueSnapshot` table to accept null values, which aligns with the PR objective to trigger evaluations from referenced variable updates. Make sure to verify that:

- All code that accesses this field properly handles null values
- The uniqueness index on line 110 (which includes `t.value`) properly handles null values
- There's a database migration to accompany this schema change
🏁 Script executed:

```shell
#!/bin/bash
# Search for places that might need to handle null values in the variable value
echo "Checking usages of variableValueSnapshot.value that might need null handling:"
rg -A 3 "variableValueSnapshot.*value" --type ts
```

Length of output: 2229
🏁 Script executed:

```shell
#!/bin/bash
echo "----- Schema excerpt (packages/db/src/schema/release.ts lines 80-130) -----"
sed -n '80,130p' packages/db/src/schema/release.ts || echo "Schema file not found"
echo
echo "----- create-release-job.ts snippet around value usage -----"
grep -R --include '*.ts' -n "value: v\\.variableValueSnapshot\\.value" -A5 packages/db/src/queries/create-release-job.ts || echo "Snippet not found"
echo
echo "----- Listing potential migration files -----"
find . -type f | grep -i migration || echo "No migration files detected"
echo
echo "----- Searching for null constraint removal in migrations -----"
grep -R -H -n -E "DROP NOT NULL|SET NOT NULL" . || echo "No ALTER COLUMN null constraints found"
```

Length of output: 4671
Validate nullability changes for `variableValueSnapshot.value`

Please confirm the following now that the `value` column is nullable:

- Code reads/writes handle `null` correctly:
  - packages/db/src/queries/create-release-job.ts (≈ line 67): `value: v.variableValueSnapshot.value` – update TypeScript types or add null checks if needed.
  - packages/rule-engine/src/manager/variable-manager.ts already covers `isNull(...)` vs `eq(...)`.
  - apps/webservice/src/app/api/v1/.../releases/route.ts will emit JSON `value: null` – verify clients tolerate this.
- The unique index on `(workspace_id, key, value)` in packages/db/src/schema/release.ts treats `NULL` as distinct in PostgreSQL. If you need to enforce uniqueness for `NULL` too, consider a functional or partial index (e.g. `coalesce`).
- The migration file packages/db/drizzle/0101_mysterious_redwing.sql includes `ALTER TABLE "variable_value_snapshot" ALTER COLUMN "value" DROP NOT NULL;` – ensure it's in your deployment pipeline.
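To illustrate the NULL-distinctness point above: table and column names are taken from the review, but the index definitions themselves are an illustrative sketch, not this repo's actual migration. A plain unique index lets many `(workspace_id, key, NULL)` rows coexist; either PostgreSQL 15's `NULLS NOT DISTINCT` or a `coalesce`-based expression index closes that gap.

```sql
-- Illustrative only: by default a unique index treats NULLs as distinct,
-- so multiple (workspace_id, key, NULL) rows would all be accepted.

-- Option 1 (PostgreSQL 15+): make NULLs compare equal in the index.
CREATE UNIQUE INDEX variable_value_snapshot_unique
  ON variable_value_snapshot (workspace_id, key, value)
  NULLS NOT DISTINCT;

-- Option 2 (any supported version): normalize NULL via an expression index.
CREATE UNIQUE INDEX variable_value_snapshot_unique_coalesce
  ON variable_value_snapshot (workspace_id, key, coalesce(value, 'null'::jsonb));
```

Note that Option 2 also makes a stored JSON `null` and a SQL `NULL` collide in the index, which may or may not be the semantics you want.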