
refactor: move dispatch hooks into worker #535

Merged
adityachoudhari26 merged 5 commits into main from cleanup-dispatch-hooks on May 1, 2025

Conversation

Member

@adityachoudhari26 commented May 1, 2025

Summary by CodeRabbit

  • New Features

    • Introduced new background workers for deleting deployments, environments, and processing deleted release targets asynchronously.
    • Added a new event channel dedicated to handling deleted release targets.
  • Refactor

    • Delegated deletion of deployments, environments, and resources from direct database calls to asynchronous job queues.
    • Removed inline event dispatching and resource selector change detection from API endpoints and workers.
    • Simplified update and delete mutations by deferring complex logic to background jobs.
    • Centralized post-deletion processing within dedicated workers to enhance reliability.
  • Bug Fixes

    • Enhanced deduplication and retry mechanisms during deletion to avoid race conditions and duplicate processing.
  • Chores

    • Removed unused functions, imports, and deprecated event triggers related to resource and environment deletions.
    • Updated event action identifiers to use shared constants for improved consistency.
    • Adjusted API schema and tests to support new event querying and deletion workflows.

Contributor

coderabbitai bot commented May 1, 2025

Walkthrough

This set of changes refactors the deletion and event-handling workflows for deployments, environments, and resources to use asynchronous job queues rather than immediate, inline logic. New workers are introduced to handle deletion events for deployments, environments, and release targets, with dedicated channels and deduplication. The code removes direct event dispatching and complex selector-difference logic from API routes and worker functions, delegating these responsibilities to the new queue-driven workers. The event types and channel mappings are updated to support these changes, and related triggers and resource deletion utilities are removed or refactored to align with the new architecture.
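The queue-driven flow described above can be sketched with a minimal in-memory model. Everything here is illustrative — DedupQueue, the hard-coded release targets, and the variable names are stand-ins, not the actual @ctrlplane APIs:

```typescript
type Job = { id: string; data: unknown };

class DedupQueue {
  private jobs = new Map<string, Job>();

  add(id: string, data: unknown): void {
    // Deduplication: a job whose id is already queued is dropped.
    if (!this.jobs.has(id)) this.jobs.set(id, { id, data });
  }

  drain(): Job[] {
    const out = [...this.jobs.values()];
    this.jobs.clear();
    return out;
  }
}

const deleteDeployment = new DedupQueue();
const deletedReleaseTarget = new DedupQueue();

// API layer: enqueue the deletion instead of running it inline.
deleteDeployment.add("dep-1", {});
deleteDeployment.add("dep-1", {}); // duplicate request, deduplicated away

const drained = deleteDeployment.drain();

// Worker: delete the entity in a transaction, then fan out one job per
// deleted release target (hard-coded here in place of a DB query).
for (const _job of drained) {
  for (const rt of ["rt-1", "rt-2"]) deletedReleaseTarget.add(rt, { id: rt });
}

const fannedOut = deletedReleaseTarget.drain().map((j) => j.id);
```

The fan-out step mirrors how the new delete workers enqueue one DeletedReleaseTarget job per removed target, keyed by target id for deduplication.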

Changes

File(s) Change Summary
apps/event-worker/src/workers/compute-systems-release-targets.ts Refactored transaction to return both matched and deleted release targets; enqueues deleted targets for further processing via a dedicated queue.
apps/event-worker/src/workers/delete-deployment.ts
apps/event-worker/src/workers/delete-environment.ts
Added new workers for handling deployment and environment deletions, using transactions and enqueuing deleted release targets for post-processing with retry on lock conflicts.
apps/event-worker/src/workers/delete-resource.ts Removed computed release target deletions and exit hook dispatches; now enqueues deleted release targets for asynchronous handling.
apps/event-worker/src/workers/deleted-release-target.ts Introduced a new worker for handling deleted release targets, generating and dispatching DeploymentResourceRemoved events.
apps/event-worker/src/workers/index.ts Registered new workers for deployment, environment, and release target deletion channels.
apps/event-worker/src/workers/update-deployment.ts
apps/event-worker/src/workers/update-environment.ts
Removed exit hook dispatching and selector-difference logic; workers now only enqueue recompute jobs as needed.
apps/event-worker/src/workers/updated-resources/index.ts Removed unused, commented-out exit hook dispatch function.
apps/pty-proxy/src/controller/sockets.ts Changed resource deletion from direct call to enqueuing a deletion job via the queue.
packages/api/src/router/deployment.ts
packages/api/src/router/environment.ts
Refactored mutations to enqueue deletion jobs instead of performing direct deletions and event dispatches; simplified update logic.
packages/events/src/types.ts Added DeletedReleaseTarget channel to the event enum and map; clarified payload types for deletion channels.
packages/job-dispatch/src/deployment-update.ts Removed function and logic for handling deployment filter changes and related exit hooks.
packages/job-dispatch/src/events/handlers/resource-removed.ts
packages/job-dispatch/src/events/triggers/deployment-removed.ts
Replaced string action literals with HookAction.DeploymentResourceRemoved enum for consistency.
packages/job-dispatch/src/events/triggers/environment-deleted.ts
packages/job-dispatch/src/events/triggers/resource-deleted.ts
Deleted triggers for generating events on environment and resource deletion.
packages/job-dispatch/src/events/triggers/index.ts Removed exports for deleted trigger modules.
packages/job-dispatch/src/resource/delete.ts Deleted resource deletion utility, previously handled direct resource deletion and event dispatching.
packages/job-dispatch/src/resource/index.ts Removed export of resource deletion module.
apps/webservice/src/app/api/v1/environments/[environmentId]/route.ts Changed environment deletion to enqueue deletion job instead of direct DB deletion; updated status code usage.
e2e/api/schema.ts Added new API endpoint to query events by action; introduced Event schema; renamed some schema properties; adjusted response formats.
e2e/tests/api/deployment-remove-event.spec.ts Added end-to-end tests verifying that deployment resource removal events are triggered by various delete and update actions.
e2e/tests/api/deployment-remove-event.spec.yaml Added YAML test configuration defining a system entity for release target deletion testing.
e2e/tests/api/deployments.spec.ts Added delay before assertions in deployment deletion test to allow processing.
e2e/tests/api/environments.spec.ts Added delay after environment deletion call to allow processing before assertions.

Sequence Diagram(s)

sequenceDiagram
    participant API
    participant Queue
    participant Worker
    participant DB
    participant EventHandler

    API->>Queue: Enqueue DeleteDeployment/DeleteEnvironment/DeleteResource job
    Queue->>Worker: Deliver job (with deduplication)
    Worker->>DB: Begin transaction, lock rows, delete entity
    Worker->>DB: Fetch associated ReleaseTargets
    Worker->>DB: Commit transaction
    Worker->>Queue: Enqueue DeletedReleaseTarget jobs for each deleted target
    Queue->>Worker: Deliver DeletedReleaseTarget job
    Worker->>DB: Fetch deployment and resource by IDs
    Worker->>EventHandler: Dispatch DeploymentResourceRemoved event


Suggested reviewers

  • jsbroks

Poem

In the warren, queues now hop,
Deletions handled, pop by pop.
No more scurrying with hooks in haste,
Each target's fate now neatly placed.
Rabbits cheer as jobs align,
Async magic—oh, how fine!
🐇✨


📜 Recent review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 3883736 and 05c9c5d.

📒 Files selected for processing (8)
  • apps/event-worker/src/workers/deleted-release-target.ts (1 hunks)
  • apps/webservice/src/app/api/v1/environments/[environmentId]/route.ts (3 hunks)
  • e2e/api/schema.ts (5 hunks)
  • e2e/tests/api/deployment-remove-event.spec.ts (1 hunks)
  • e2e/tests/api/deployment-remove-event.spec.yaml (1 hunks)
  • e2e/tests/api/deployments.spec.ts (1 hunks)
  • e2e/tests/api/environments.spec.ts (1 hunks)
  • packages/job-dispatch/src/events/handlers/resource-removed.ts (2 hunks)
✅ Files skipped from review due to trivial changes (1)
  • e2e/tests/api/deployment-remove-event.spec.yaml
🚧 Files skipped from review as they are similar to previous changes (2)
  • packages/job-dispatch/src/events/handlers/resource-removed.ts
  • apps/event-worker/src/workers/deleted-release-target.ts
🧰 Additional context used
📓 Path-based instructions (1)
`**/*.{ts,tsx}`: Note on Error Handling:
Avoid strict enforcement of try/catch blocks. Code may use early returns, Promise chains (.then().catch()), or other patterns for error handling. These are acceptable as long as they maintain clarity and predictability.

  • e2e/tests/api/deployments.spec.ts
  • e2e/tests/api/environments.spec.ts
  • apps/webservice/src/app/api/v1/environments/[environmentId]/route.ts
  • e2e/api/schema.ts
  • e2e/tests/api/deployment-remove-event.spec.ts
🧬 Code Graph Analysis (1)
apps/webservice/src/app/api/v1/environments/[environmentId]/route.ts (1)
packages/events/src/index.ts (1)
  • getQueue (28-34)
⏰ Context from checks skipped due to timeout of 90000ms (4)
  • GitHub Check: build (linux/amd64)
  • GitHub Check: build (linux/amd64)
  • GitHub Check: Typecheck
  • GitHub Check: Lint
🔇 Additional comments (16)
e2e/tests/api/deployments.spec.ts (1)

174-175: Good addition of delay to accommodate asynchronous deletion

The 5-second delay after the deployment deletion API call is appropriate given the architectural changes to use asynchronous job queues instead of immediate, inline deletion logic. This ensures the test has sufficient time for the DeleteDeployment worker to process the job before checking results.

e2e/tests/api/environments.spec.ts (1)

403-404: Good addition of delay for asynchronous processing

Adding a 5-second delay after the environment deletion API call is appropriate given the shift to asynchronous deletion processing through the job queue. This ensures the test has sufficient time for the DeleteEnvironment worker to process the deletion job before assertions are checked.
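The fixed delay both tests rely on is presumably just a sleep helper along these lines (illustrative; the actual spec code may phrase it differently):

```typescript
// Hypothetical sleep helper; the spec files may inline this differently.
const sleep = (ms: number): Promise<void> =>
  new Promise((resolve) => setTimeout(resolve, ms));

// After issuing the DELETE call in the test:
async function waitForAsyncDeletion(): Promise<void> {
  await sleep(5_000); // give the deletion worker time to drain the queue
}
```

A fixed delay keeps the tests simple; polling the API until the entity is gone would be more robust against slow CI runners, at the cost of extra requests.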

apps/webservice/src/app/api/v1/environments/[environmentId]/route.ts (3)

2-2: Appropriate imports for the refactored implementation

Good use of the HTTP status code constant and event queue imports to support the refactoring from synchronous to asynchronous processing.

Also applies to: 6-6


31-31: Consistent usage of status code constants

Using the imported NOT_FOUND constant instead of hardcoded status codes improves consistency and maintainability.

Also applies to: 59-59


62-68: Good implementation of asynchronous deletion

The code correctly implements the refactoring to use an asynchronous job queue for environment deletion instead of direct database operations. This follows good practices by:

  1. Using the environment ID for both job ID and deduplication
  2. Returning the previously fetched environment instead of waiting for deletion to complete
  3. Properly handling the job queue interaction

This aligns with the broader architectural shift to move complex operations like deletion and event dispatching into dedicated workers.
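The shape of that change can be mocked as follows. Channel and getQueue in the real code come from @ctrlplane/events, and the option names are taken from snippets elsewhere in this review, so treat this as a sketch rather than the exact API:

```typescript
// In-memory stand-in for the BullMQ queue, with the deduplication option
// the diff uses. `seen` tracks already-enqueued dedup ids.
type AddOpts = { deduplication?: { id: string } };

const seen = new Set<string>();
const queue = {
  async add(name: string, _data: unknown, opts?: AddOpts): Promise<boolean> {
    const dedupId = opts?.deduplication?.id ?? name;
    if (seen.has(dedupId)) return false; // duplicate suppressed
    seen.add(dedupId);
    return true;
  },
};

// Route handler shape: enqueue and answer with the already-fetched entity.
async function deleteEnvironment(env: { id: string }): Promise<{ id: string }> {
  await queue.add(env.id, env, { deduplication: { id: env.id } });
  return env; // respond immediately; the worker performs the actual delete
}
```

Using the environment id as both job name and deduplication key means repeated DELETE requests for the same environment collapse into a single job.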

e2e/tests/api/deployment-remove-event.spec.ts (6)

1-28: Well-structured test file setup

The test file is appropriately set up with proper imports, configuration, and lifecycle hooks for test data management.


29-113: Comprehensive test for resource deletion events

This test correctly validates that deleting a resource triggers the expected "deployment.resource.removed" event. The test follows good practices:

  1. Properly sets up test entities (environment, deployment, resource)
  2. Uses appropriate wait times for asynchronous operations
  3. Verifies the event contains the correct resource and deployment identifiers
  4. Uses clear assertions with descriptive expectations

115-201: Good test coverage for environment deletion

This test properly validates the event generation flow when an environment is deleted. The structure follows the same pattern as the resource deletion test, maintaining consistency throughout the test file.


203-290: Well-designed test for unmatching via resource update

This test effectively validates that updating a resource's name to unmatch it from an environment's selector triggers the appropriate deployment.resource.removed event. The test is thorough and follows the established pattern in this file.


292-376: Good coverage for environment selector changes

This test validates that changing an environment's resource selector to unmatch resources correctly triggers deployment.resource.removed events. The approach is consistent with other tests in this file.


378-471: Thorough test for deployment selector changes

This test effectively validates event generation when a deployment's resource selector is changed to unmatch resources. The test structure is consistent with other tests in this file and provides good coverage for this important workflow.

e2e/api/schema.ts (5)

709-729: LGTM: New events endpoint properly defined.

The new endpoint /v1/workspaces/{workspaceId}/events/{action} is well-structured with proper path parameters for both workspace ID and action. This endpoint will allow clients to query events by action type, which supports the new event-driven architecture for handling operations asynchronously.


918-918: Property rename provides better consistency.

The property has been renamed from deploymentVersion to version while maintaining the same type. This change improves naming consistency across the codebase.


1296-1305: LGTM: Event schema properly structured.

The new Event schema definition includes all necessary fields:

  • id for unique identification
  • action to categorize the event
  • payload for event-specific data
  • createdAt for timestamping

This structure properly supports the new event-driven architecture for handling deletion and update operations asynchronously.
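As a TypeScript shape, the fields above would map to something like the following (the exact optionality and payload typing in e2e/api/schema.ts may differ — this is an assumed rendering):

```typescript
// Assumed shape of the new Event schema; not copied from e2e/api/schema.ts.
type Event = {
  id: string; // unique event identifier
  action: string; // e.g. "deployment.resource.removed"
  payload: Record<string, unknown>; // event-specific data
  createdAt: string; // ISO-8601 timestamp
};

const example: Event = {
  id: "evt-1",
  action: "deployment.resource.removed",
  payload: { resourceId: "res-1", deploymentId: "dep-1" },
  createdAt: new Date(0).toISOString(),
};
```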


2248-2250: Response format improvement for getAgentRunningJobs.

The response format has been changed from directly returning an array of Job objects to returning an object with a jobs property containing the array. This change provides more flexibility for adding metadata to the response in the future.


3824-3869: LGTM: getEventsByAction operation properly defined.

The new operation defines:

  • Required path parameters
  • Appropriate response status codes (200, 404, 500)
  • Well-structured response formats

This operation fully supports the endpoint definition for querying events by action type, which is essential for the new event-driven architecture.


@coderabbitai bot left a comment


Actionable comments posted: 5

🔭 Outside diff range comments (1)
packages/api/src/router/environment.ts (1)

279-282: 🛠️ Refactor suggestion

Queue write should be awaited for consistency with the surrounding code

In create you await getQueue(...).add(...), but here the promise is fire-and-forget.
If the Redis write fails, the API will still report success and the update event will never be processed.

-      getQueue(Channel.UpdateEnvironment).add(input.id, {
+      await getQueue(Channel.UpdateEnvironment).add(input.id, {
         ...updatedEnv,
         oldSelector: oldEnv.environment.resourceSelector,
       });

This keeps behaviour consistent with the create mutation and surfaces enqueue errors to the client.

🧹 Nitpick comments (6)
packages/events/src/types.ts (1)

30-31: Consider extracting a shared IdPayload type for DRYness

DeleteDeployment and DeleteEnvironment now share the exact { id: string } payload shape. Extracting a small alias reduces duplication and makes future refactors (e.g., switching to ULIDs) one-liner changes.

+type IdPayload = { id: string };
-  [Channel.DeleteDeployment]: { id: string };
-  [Channel.DeleteEnvironment]: { id: string };
+  [Channel.DeleteDeployment]: IdPayload;
+  [Channel.DeleteEnvironment]: IdPayload;
apps/event-worker/src/workers/delete-deployment.ts (2)

21-24: Variable shadowing slightly harms readability

const releaseTargets = … inside the transaction shadows the outer variable of the same name.
Although technically correct, it makes the control flow harder to scan.

-        const releaseTargets = await tx.query.releaseTarget.findMany({
+        const targets = await tx.query.releaseTarget.findMany({
-        return releaseTargets;
+        return targets;

32-35: Bulk-enqueue for better throughput

for … getQueue().add() issues one Redis round-trip per release target.
BullMQ supports addBulk, which can reduce latency and connection pressure:

-      for (const rt of releaseTargets)
-        getQueue(Channel.DeletedReleaseTarget).add(rt.id, rt, {
-          deduplication: { id: rt.id },
-        });
+      await getQueue(Channel.DeletedReleaseTarget).addBulk(
+        releaseTargets.map((rt) => ({
+          name: rt.id,
+          data: rt,
+          opts: { deduplication: { id: rt.id } },
+        })),
+      );
apps/event-worker/src/workers/delete-environment.ts (2)

21-24: Avoid inner/outer releaseTargets shadowing

Rename the inner constant to keep scopes clear (same rationale as in delete-deployment.ts).


32-35: Switch to addBulk for efficiency

Identical to the deployment worker: batching improves Redis throughput.

packages/api/src/router/environment.ts (1)

295-300: Consider explicit await or a simple ACK response for delete mutation

mutation(({ input }) => getQueue(...).add(...)) returns the raw BullMQ Job promise.
If the client only needs confirmation that the request was accepted, returning { ok: true } (after await) or awaiting and discarding the job object may be clearer and avoids leaking internal queue details.

No change is strictly required, but aligning this with the create / update patterns would improve consistency.
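A minimal shape of that suggestion, with the tRPC wiring omitted and `enqueue` standing in for getQueue(Channel.DeleteEnvironment).add:

```typescript
// Sketch of the reviewer's suggestion: await the enqueue and return a plain
// acknowledgement instead of the raw BullMQ Job object.
async function deleteMutation(
  input: { id: string },
  enqueue: (id: string) => Promise<unknown>,
): Promise<{ ok: true }> {
  await enqueue(input.id); // a failed Redis write now surfaces to the caller
  return { ok: true }; // no internal queue details leak to the client
}
```

Returning an explicit ack also keeps the mutation's response type stable if the queue library is ever swapped out.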

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 8a28983 and e660a33.

📒 Files selected for processing (20)
  • apps/event-worker/src/workers/compute-systems-release-targets.ts (3 hunks)
  • apps/event-worker/src/workers/delete-deployment.ts (1 hunks)
  • apps/event-worker/src/workers/delete-environment.ts (1 hunks)
  • apps/event-worker/src/workers/delete-resource.ts (2 hunks)
  • apps/event-worker/src/workers/deleted-release-target.ts (1 hunks)
  • apps/event-worker/src/workers/index.ts (2 hunks)
  • apps/event-worker/src/workers/update-deployment.ts (1 hunks)
  • apps/event-worker/src/workers/update-environment.ts (0 hunks)
  • apps/event-worker/src/workers/updated-resources/index.ts (0 hunks)
  • apps/pty-proxy/src/controller/sockets.ts (2 hunks)
  • packages/api/src/router/deployment.ts (2 hunks)
  • packages/api/src/router/environment.ts (3 hunks)
  • packages/events/src/types.ts (2 hunks)
  • packages/job-dispatch/src/deployment-update.ts (0 hunks)
  • packages/job-dispatch/src/events/handlers/resource-removed.ts (2 hunks)
  • packages/job-dispatch/src/events/triggers/deployment-removed.ts (2 hunks)
  • packages/job-dispatch/src/events/triggers/environment-deleted.ts (0 hunks)
  • packages/job-dispatch/src/events/triggers/index.ts (0 hunks)
  • packages/job-dispatch/src/events/triggers/resource-deleted.ts (0 hunks)
  • packages/job-dispatch/src/resource/delete.ts (0 hunks)
💤 Files with no reviewable changes (7)
  • apps/event-worker/src/workers/updated-resources/index.ts
  • packages/job-dispatch/src/events/triggers/index.ts
  • apps/event-worker/src/workers/update-environment.ts
  • packages/job-dispatch/src/events/triggers/resource-deleted.ts
  • packages/job-dispatch/src/deployment-update.ts
  • packages/job-dispatch/src/resource/delete.ts
  • packages/job-dispatch/src/events/triggers/environment-deleted.ts
🧰 Additional context used
📓 Path-based instructions (1)
`**/*.{ts,tsx}`: Note on Error Handling:
Avoid strict enforcement of try/catch blocks. Code may use early returns, Promise chains (.then().catch()), or other patterns for error handling. These are acceptable as long as they maintain clarity and predictability.

  • packages/job-dispatch/src/events/triggers/deployment-removed.ts
  • packages/job-dispatch/src/events/handlers/resource-removed.ts
  • packages/api/src/router/deployment.ts
  • apps/pty-proxy/src/controller/sockets.ts
  • apps/event-worker/src/workers/index.ts
  • apps/event-worker/src/workers/compute-systems-release-targets.ts
  • packages/events/src/types.ts
  • apps/event-worker/src/workers/update-deployment.ts
  • apps/event-worker/src/workers/delete-environment.ts
  • packages/api/src/router/environment.ts
  • apps/event-worker/src/workers/delete-resource.ts
  • apps/event-worker/src/workers/deleted-release-target.ts
  • apps/event-worker/src/workers/delete-deployment.ts
🧬 Code Graph Analysis (7)
packages/api/src/router/deployment.ts (1)
packages/events/src/index.ts (1)
  • getQueue (28-34)
apps/event-worker/src/workers/index.ts (3)
apps/event-worker/src/workers/delete-deployment.ts (1)
  • deleteDeploymentWorker (6-46)
apps/event-worker/src/workers/delete-environment.ts (1)
  • deleteEnvironmentWorker (6-46)
apps/event-worker/src/workers/deleted-release-target.ts (1)
  • deletedReleaseTargetWorker (16-46)
apps/event-worker/src/workers/compute-systems-release-targets.ts (3)
packages/db/src/client.ts (1)
  • db (15-15)
packages/events/src/index.ts (1)
  • getQueue (28-34)
apps/event-worker/src/utils/dispatch-evaluate-jobs.ts (1)
  • dispatchEvaluateJobs (5-11)
packages/api/src/router/environment.ts (1)
packages/events/src/index.ts (1)
  • getQueue (28-34)
apps/event-worker/src/workers/delete-resource.ts (1)
packages/events/src/index.ts (1)
  • getQueue (28-34)
apps/event-worker/src/workers/deleted-release-target.ts (7)
packages/logger/src/index.ts (1)
  • logger (48-48)
packages/events/src/index.ts (1)
  • createWorker (10-25)
packages/db/src/schema/release.ts (1)
  • releaseTarget (20-42)
packages/db/src/schema/resource.ts (1)
  • resource (59-87)
packages/db/src/client.ts (1)
  • db (15-15)
packages/db/src/common.ts (1)
  • takeFirst (9-13)
packages/job-dispatch/src/events/index.ts (1)
  • handleEvent (8-9)
apps/event-worker/src/workers/delete-deployment.ts (2)
packages/events/src/index.ts (2)
  • createWorker (10-25)
  • getQueue (28-34)
packages/db/src/client.ts (1)
  • db (15-15)
⏰ Context from checks skipped due to timeout of 90000ms (4)
  • GitHub Check: Typecheck
  • GitHub Check: Lint
  • GitHub Check: build (linux/amd64)
  • GitHub Check: build (linux/amd64)
🔇 Additional comments (19)
packages/job-dispatch/src/events/triggers/deployment-removed.ts (2)

9-9: Good addition of HookAction import for standardization.

Adding the import for HookAction enum from @ctrlplane/validators/events is a good practice to standardize event action names.


42-42: Improved type safety using enum instead of string literal.

Replacing the string literal with HookAction.DeploymentResourceRemoved enum value enhances type safety and maintainability. This change is consistent with the broader refactoring to centralize event action handling.

packages/job-dispatch/src/events/handlers/resource-removed.ts (2)

6-6: Good addition of HookAction import for standardization.

Adding the import for HookAction enum from @ctrlplane/validators/events aligns with best practices for type safety.


16-16: Improved type safety using enum instead of string literal.

Replacing the string literal with HookAction.DeploymentResourceRemoved enum value enhances type safety and maintainability. This change is consistent with the broader refactoring effort to centralize event handling.

apps/pty-proxy/src/controller/sockets.ts (2)

6-6: Good import addition for queue-based operation.

Adding imports for Channel and getQueue from @ctrlplane/events supports the transition to queue-based asynchronous processing.


33-39: Well-implemented transition to queue-based resource deletion.

The code effectively transitions from direct resource deletion to an asynchronous queue-based approach, which aligns with the broader architectural refactoring. The use of resource ID as a deduplication key is a good practice to prevent duplicate deletion jobs.

This change maintains the same functionality while improving the system's architecture and scalability.

apps/event-worker/src/workers/index.ts (2)

10-13: Good addition of new worker imports.

Adding imports for the new worker modules (deleteDeploymentWorker, deleteEnvironmentWorker, and deletedReleaseTargetWorker) is consistent with the refactoring to move deletion logic into dedicated workers.


52-54: Well-structured registration of new workers.

The workers are properly registered with their corresponding channels in the exported workers object. This enables the system to process the new event types and maintains consistency with the existing pattern.

Based on the provided snippets of the worker implementations, they appear to handle database transactions appropriately and include proper error handling.

packages/api/src/router/deployment.ts (2)

17-17: Simplified imports align with worker refactoring.

The import statement has been appropriately reduced to only include the updateDeployment function, removing event-related imports that are no longer needed in this file.


322-327: Excellent refactoring of deletion logic to use job queue.

The deletion mutation has been effectively simplified to enqueue a job instead of performing the deletion directly. This change offers several benefits:

  1. Improves API responsiveness by making deletion asynchronous
  2. Centralizes deletion logic in a dedicated worker
  3. Adds automatic deduplication to prevent duplicate deletion attempts
  4. Better separates concerns between API and data manipulation logic

This architectural change aligns perfectly with the PR objective of moving dispatch hooks into workers.

apps/event-worker/src/workers/update-deployment.ts (2)

3-3: Simplified imports reflect streamlined worker responsibility.

The removal of the now-unused imports (and, not, selector, and handleEvent) reflects the architectural improvement of moving resource exit handling logic out of this worker. This change properly focuses the worker on its core responsibility of handling deployment updates.


13-53: Well-structured worker with clear separation of concerns.

The worker now properly focuses on handling job agent changes and resource selector changes without direct event dispatching. This refactoring aligns with moving event handling to dedicated workers, making the codebase more maintainable and modular.

apps/event-worker/src/workers/compute-systems-release-targets.ts (4)

83-83: Good restructuring of transaction return value.

Changing the transaction to return both matched and deleted release targets improves the design by explicitly tracking both sets of data, enabling proper handling of each collection outside the transaction.


157-164: Properly structured return data for downstream processing.

The addition of explicitly querying and returning both matched and deleted release targets provides the downstream code with all necessary information to process both sets appropriately. This supports the new architecture where deleted targets need special handling.


167-173: Well-implemented event-driven deletion handling.

The added code correctly enqueues each deleted release target to the appropriate channel with proper deduplication. The early exit check when no matched targets exist is a good optimization. This implementation cleanly separates deletion handling from other processing.


194-194: Correctly updated to use matched targets from new structure.

The dispatchEvaluateJobs call has been properly updated to use the matched release targets from the new return structure, maintaining the existing functionality while supporting the new architecture.

apps/event-worker/src/workers/deleted-release-target.ts (2)

1-9: Well-organized imports and proper logger setup.

The imports are appropriately structured to include all necessary dependencies for the worker's functionality. Creating a scoped logger with the worker name is a good practice for easier log filtering and debugging.


11-46: Excellent implementation of the dedicated worker for release target deletion events.

This worker is well-designed with:

  1. Clear documentation explaining its purpose and role in the process
  2. Efficient concurrent fetching of deployment and resource data
  3. Proper event construction using standardized action types
  4. Integration with the existing event handling infrastructure
  5. Comprehensive error handling with logging

The implementation perfectly addresses the PR objective of moving dispatch hooks into dedicated workers, centralizing the event handling logic that was previously scattered across multiple components. This will make the codebase more maintainable and easier to debug.

packages/api/src/router/environment.ts (1)

270-276: Confirm that policy updates are intentionally disallowed

You deliberately overwrite any policyId supplied in input.data with the existing policyId (oldEnv.environment.policyId).
If callers are expected to modify the policy via this endpoint, their change will be silently ignored.

Please verify that this behaviour is desired; otherwise consider merging the provided policyId (with proper ACL checks) or throwing a validation error.
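
If policy updates should be allowed here, the merge the comment describes can be sketched as follows (helper name and signature are hypothetical; real code would also apply the ACL checks mentioned above):

```typescript
// Hypothetical merge: keep the existing policyId unless the caller
// explicitly supplies a replacement, instead of silently dropping it.
function resolvePolicyId(oldPolicyId: string, incoming?: string): string {
  return incoming ?? oldPolicyId;
}
```

The alternative the review mentions, throwing a validation error whenever `incoming` is present, would make the restriction explicit to callers rather than silent.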

Comment on lines +37 to +41
const isRowLocked = e.code === "55P03";
if (isRowLocked) {
  await getQueue(Channel.DeleteDeployment).add(job.name, job.data);
  return;
}

🛠️ Refactor suggestion

Unbounded immediate retries may cause a hot loop

When a row is locked (55P03) the job is re-queued with no delay or back-off, potentially hammering the queue every millisecond until the lock is released.

Consider exponential backoff or a small fixed delay:

-        await getQueue(Channel.DeleteDeployment).add(job.name, job.data);
+        await getQueue(Channel.DeleteDeployment).add(job.name, job.data, {
+          delay: 1000,          // 1-second cool-off
+          attempts: 5,          // optional: limit retries
+        });

Comment on lines +37 to +41
const isRowLocked = e.code === "55P03";
if (isRowLocked) {
  await getQueue(Channel.DeleteEnvironment).add(job.name, job.data);
  return;
}
🛠️ Refactor suggestion

Introduce retry back-off to prevent tight loops

Re-queuing immediately after a lock conflict can create a hammering loop under heavy load.
Add a short delay or back-off strategy (see deployment worker suggestion).

Comment on lines +38 to +41
for (const rt of rts)
  getQueue(Channel.DeletedReleaseTarget).add(rt.id, rt, {
    deduplication: { id: rt.id },
  });
🛠️ Refactor suggestion

⚠️ Potential issue

Await the asynchronous queue writes to avoid silent job-creation failures

getQueue(...).add() returns a Promise. Because these promises are not awaited (or collected with Promise.all), the worker acknowledges success and the DB transaction commits even if a Redis/network error prevents the jobs from being enqueued.
That can leave the system with soft-deleted release targets but no follow-up DeletedReleaseTarget events, breaking downstream invariants.

Suggested fix (keeps dedup & scales with many RTs):

-      for (const rt of rts)
-        getQueue(Channel.DeletedReleaseTarget).add(rt.id, rt, {
-          deduplication: { id: rt.id },
-        });
+      await Promise.all(
+        rts.map((rt) =>
+          getQueue(Channel.DeletedReleaseTarget).add(rt.id, rt, {
+            deduplication: { id: rt.id },
+          }),
+        ),
+      );

(If the list can be large, queue.addBulk() would be even more efficient.)


@adityachoudhari26 adityachoudhari26 merged commit ad567f5 into main May 1, 2025
6 of 9 checks passed
@adityachoudhari26 adityachoudhari26 deleted the cleanup-dispatch-hooks branch May 1, 2025 21:15