feat/realtime-streams #1470
Conversation
🦋 Changeset detected

Latest commit: 722bb45. The changes in this PR will be included in the next version bump. This PR includes changesets to release 11 packages.

Note: the automated review skipped 29 of the 111 changed files because they exceed the 75-file limit.

Walkthrough

This pull request introduces changes across many files, primarily focused on enhanced debugging configurations, API route handling, and real-time data streaming functionality.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant API
    participant RealtimeStreams
    User->>API: Request for real-time data
    API->>RealtimeStreams: Stream data based on request
    RealtimeStreams-->>API: Send streamed data
    API-->>User: Deliver real-time data response
```
Actionable comments posted: 58
🧹 Outside diff range and nitpick comments (62)
packages/core/src/v3/waitUntil/types.ts (1)

`1-4`: **Consider adding a generic type parameter for better type safety**

The `Promise<any>` usage reduces type safety. Consider parameterizing the type to maintain type information throughout the promise chain.

```diff
-export type MaybeDeferredPromise = {
+export type MaybeDeferredPromise<T = unknown> = {
   requiresResolving(): boolean;
-  promise: Promise<any> | (() => Promise<any>);
+  promise: Promise<T> | (() => Promise<T>);
 };
```

apps/webapp/app/services/httpAsyncStorage.server.ts (1)
Line range hint `3-8`: **Consider adding JSDoc comments for better documentation.**

The type definition would benefit from JSDoc comments explaining the purpose of each property, especially for the new `method` property. Here's a suggested improvement:

```diff
+/**
+ * Represents the HTTP context stored for each request
+ */
 export type HttpLocalStorage = {
+  /** Unique identifier for the request */
   requestId: string;
+  /** Request path */
   path: string;
+  /** Host header or server name */
   host: string;
+  /** HTTP method (GET, POST, etc.) */
   method: string;
 };
```

packages/trigger-sdk/src/v3/waitUntil.ts (2)
`11-13`: **Consider adding stronger type safety**

The function accepts `Promise<any>`, which could lead to type-unsafe usage. Consider using generics to preserve the promise's type information.

```diff
-export function waitUntil(promise: Promise<any>) {
+export function waitUntil<T>(promise: Promise<T>): Promise<T> {
   return core_waitUntil.register({ promise, requiresResolving: () => true });
 }
```

`3-10`: **Enhance documentation with error handling details**

The JSDoc comments are good but could be improved by documenting:

- What happens if the promise rejects
- Whether the function preserves the original promise's error
- Example usage showing both success and error cases

````diff
 /**
  * waitUntil extends the lifetime of a task run until the provided promise settles.
  * You can use this function to ensure that a task run does not complete until the promise resolves or rejects.
  *
  * Useful if you need to make sure something happens but you wait to continue doing other work in the task run.
  *
  * @param promise - The promise to wait for.
+ * @returns A promise that resolves with the same value as the input promise
+ * @throws If the input promise rejects, the error is propagated
+ * @example
+ * ```typescript
+ * // Success case
+ * await waitUntil(someAsyncOperation());
+ *
+ * // Error handling
+ * try {
+ *   await waitUntil(someAsyncOperation());
+ * } catch (error) {
+ *   // Handle error
+ * }
+ * ```
  */
````

apps/webapp/app/services/realtimeStreamsGlobal.server.ts (1)
`14-14`: **Avoid hardcoding the Redis key prefix**

The Redis key prefix is hardcoded, which could make it difficult to change across environments or for different purposes. Consider making it configurable:

```diff
-  keyPrefix: "tr:realtime:streams:",
+  keyPrefix: env.REDIS_KEY_PREFIX ?? "tr:realtime:streams:",
```

references/nextjs-realtime/next.config.mjs (1)
`32-35`: **Consider environment-specific analyzer configuration**

The current configuration might cause issues in CI/CD environments due to `openAnalyzer: true`. Consider making this configurable based on the environment:

```diff
 export default NextBundleAnalyzer({
   enabled: process.env.ANALYZE === "true",
-  openAnalyzer: true,
+  openAnalyzer: process.env.CI !== "true" && process.env.ANALYZE === "true",
 })(nextConfig);
```

This change will:
- Prevent the analyzer from opening in CI environments
- Only open the analyzer when explicitly requested via ANALYZE=true
- Improve CI/CD pipeline efficiency
packages/core/src/v3/runMetadata/types.ts (2)
`11-13`: **Consider adding validation constraints for numeric operations**

The new methods for manipulating metadata values could benefit from additional type safety:

- For `appendKey`, consider specifying what types of values can be appended (e.g., arrays only)
- For `incrementKey` and `decrementKey`, consider:
  - Adding validation to ensure the existing value is numeric
  - Implementing overflow/underflow protection
  - Specifying behavior when the key doesn't exist
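These constraints can be sketched with a hypothetical in-memory store — the class name and semantics here are assumptions for illustration, not the actual trigger.dev implementation:

```typescript
// Hypothetical metadata store with validated numeric operations.
class MetadataStore {
  private data = new Map<string, unknown>();

  setKey(key: string, value: unknown): void {
    this.data.set(key, value);
  }

  getKey(key: string): unknown {
    return this.data.get(key);
  }

  // Missing keys start at 0; non-numeric existing values are rejected,
  // and non-finite results (overflow) raise an error.
  incrementKey(key: string, by: number): void {
    if (!Number.isFinite(by)) throw new Error("increment must be a finite number");
    const current = this.data.get(key) ?? 0;
    if (typeof current !== "number") {
      throw new Error(`cannot increment non-numeric value at "${key}"`);
    }
    const next = current + by;
    if (!Number.isFinite(next)) throw new Error("overflow: result is not finite");
    this.data.set(key, next);
  }

  decrementKey(key: string, by: number): void {
    this.incrementKey(key, -by);
  }
}
```

This makes the "missing key starts at 0" behavior explicit rather than leaving it implementation-defined.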
`16-16`: **Well-designed streaming interface**

The `stream` method is well designed, with:

- Generic type support for flexible data types
- Proper async handling using `AsyncIterable`
- Cancellation support via `AbortSignal`

Consider documenting the following aspects in the implementation:

- Backpressure handling
- Error propagation strategy
- Memory management for long-running streams
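The `AsyncIterable` + `AbortSignal` contract described above can be seen in a minimal, self-contained sketch (hypothetical helper names, not the package's code):

```typescript
// Forward values from a source AsyncIterable, stopping early when the
// AbortSignal fires (cooperative cancellation between yields).
async function* streamUntilAborted<T>(
  source: AsyncIterable<T>,
  signal?: AbortSignal
): AsyncIterable<T> {
  for await (const value of source) {
    if (signal?.aborted) return;
    yield value;
  }
}

async function* numbers(): AsyncIterable<number> {
  for (let i = 1; i <= 5; i++) yield i;
}

async function demo(): Promise<number[]> {
  const controller = new AbortController();
  const out: number[] = [];
  for await (const n of streamUntilAborted(numbers(), controller.signal)) {
    out.push(n);
    if (n === 3) controller.abort(); // consumer cancels mid-stream
  }
  return out;
}
```

Because the abort check runs between yields, cancellation is cooperative: an in-flight value is still delivered, which is one of the edge cases worth documenting.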
stream_test.js (1)
`1-41`: **Consider restructuring for better maintainability and testing.**

While the current implementation works for basic testing, consider these architectural improvements:
- Move configuration to a separate config file
- Integrate with a proper testing framework (e.g., Jest)
- Create a reusable StreamClient class for better encapsulation
- Add proper logging infrastructure instead of console.log
Would you like me to provide an example of how to restructure this into a proper test suite?
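The suggested `StreamClient` encapsulation could look like the following sketch — the class shape, route, and option names are all hypothetical, invented to illustrate moving hard-coded configuration behind a small API:

```typescript
// Hypothetical StreamClient wrapping raw fetch/stream plumbing so tests can
// inject configuration instead of hard-coding URLs and timeouts inline.
type StreamClientOptions = {
  baseUrl: string;
  timeoutMs?: number;
};

class StreamClient {
  constructor(private readonly options: StreamClientOptions) {}

  // Resolve a path against the configured base URL.
  url(path: string): string {
    return new URL(path, this.options.baseUrl).toString();
  }

  get timeoutMs(): number {
    return this.options.timeoutMs ?? 30_000;
  }
}

const client = new StreamClient({ baseUrl: "http://localhost:3000" });
```

A test framework can then construct a `StreamClient` per test with fake endpoints, rather than editing module-level constants.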
packages/core/src/v3/waitUntil/manager.ts (3)
`1-4`: **Consider removing the `.js` extension from the import path**

The `.js` extension in the import path is unusual for TypeScript files. TypeScript typically handles module resolution without requiring the extension.

```diff
-import { MaybeDeferredPromise, WaitUntilManager } from "./types.js";
+import { MaybeDeferredPromise, WaitUntilManager } from "./types";
```

`6-8`: **Add input validation to the register method**

The method should validate the input promise to ensure it's not null/undefined before adding it to the Set.

```diff
 register(promise: MaybeDeferredPromise): void {
+  if (!promise) {
+    throw new Error("Cannot register null or undefined promise");
+  }
   this.maybeDeferredPromises.add(promise);
 }
```

`27-33`: **Optimize Set operations for better performance**

The current implementation converts the Set to an Array unnecessarily when only checking for existence.

```diff
 requiresResolving(): boolean {
-  return this.promisesRequringResolving.length > 0;
+  for (const promise of this.maybeDeferredPromises) {
+    if (promise.requiresResolving()) {
+      return true;
+    }
+  }
+  return false;
 }

 private get promisesRequringResolving(): MaybeDeferredPromise[] {
   return Array.from(this.maybeDeferredPromises).filter((p) => p.requiresResolving());
 }
```

This optimization:
- Avoids unnecessary Array conversion when checking existence
- Short-circuits on first matching promise
- Keeps the array conversion only when full array is needed
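The short-circuit behavior can be verified in isolation — a minimal sketch of the loop-over-`Set` pattern (standalone, not the manager class itself):

```typescript
type Resolvable = { requiresResolving(): boolean };

// Iterate the Set directly and stop at the first match, instead of
// materializing an intermediate array with Array.from().filter().
function requiresResolving(promises: Set<Resolvable>): boolean {
  for (const p of promises) {
    if (p.requiresResolving()) return true;
  }
  return false;
}

// Instrumented entries let us observe how many predicates actually run.
let calls = 0;
const make = (v: boolean): Resolvable => ({
  requiresResolving: () => {
    calls++;
    return v;
  },
});

const pending = new Set([make(false), make(true), make(false)]);
```

Since JavaScript `Set` iteration follows insertion order, the third predicate is never evaluated once the second returns `true`.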
packages/trigger-sdk/src/v3/tasks.ts (1)
`69-70`: **Add JSDoc documentation for consistency**

The `toolTask` export would benefit from JSDoc documentation, similar to the `task` export above, to maintain consistency and improve developer experience. Consider adding documentation like this:

````diff
+/** Creates a tool task that can be triggered by AI tools
+ * @param options - Tool task options
+ * @example
+ *
+ * ```ts
+ * import { toolTask } from "@trigger.dev/sdk/v3";
+ *
+ * export const aiAssistant = toolTask({
+ *   id: "ai-assistant",
+ *   parameters: { /* tool parameters *\/ },
+ *   run: async (payload) => {
+ *     return { result: "processed" };
+ *   },
+ * });
+ * ```
+ *
+ * @returns A tool task that can be triggered
+ */
 export const toolTask = createToolTask;
````

apps/webapp/app/routes/realtime.v1.streams.test.ts (2)
`4-6`: **Enhance the error response with more details.**

The error response could be more informative to help clients understand and debug issues.

```diff
-  return new Response("No body provided", { status: 400 });
+  return new Response(
+    JSON.stringify({ error: "No body provided", status: 400 }),
+    {
+      status: 400,
+      headers: { "Content-Type": "application/json" }
+    }
+  );
```

`3-43`: **Consider separating business logic from the route handler.**

The stream processing logic should be moved to a separate service class for better maintainability and testability. This would also make it easier to reuse the logic elsewhere. Consider creating a `StreamProcessor` service:

```typescript
// services/streamProcessor.server.ts
export class StreamProcessor {
  private readonly maxBufferSize: number;
  private readonly timeout: number;

  constructor(options?: { maxBufferSize?: number; timeout?: number }) {
    this.maxBufferSize = options?.maxBufferSize ?? 1024 * 1024;
    this.timeout = options?.timeout ?? 30000;
  }

  async processStream(stream: ReadableStream): Promise<void> {
    // Move the stream processing logic here
  }
}

// routes/realtime.v1.streams.test.ts
export async function action({ request }: ActionFunctionArgs) {
  if (!request.body) {
    return new Response(JSON.stringify({ error: "No body provided", status: 400 }), {
      status: 400,
      headers: { "Content-Type": "application/json" },
    });
  }

  const processor = new StreamProcessor();

  try {
    await processor.processStream(request.body);
    return new Response(JSON.stringify({ status: "success" }), {
      status: 200,
      headers: { "Content-Type": "application/json" },
    });
  } catch (error) {
    // Error handling
  }
}
```

packages/core/src/v3/schemas/resources.ts (1)
`7-7`: **LGTM! Consider adding JSDoc comments.**

The addition of an optional description field is well structured. Consider adding JSDoc comments to document the purpose and expected format of the description field.

```diff
+  /** Description of the task, useful for AI tools and documentation */
   description: z.string().optional(),
```
packages/core/src/v3/utils/globals.ts (2)
`10-10`: **Consider regrouping related imports.**

The `WaitUntilManager` import could be grouped with other manager-related imports like `RunMetadataManager`, `RuntimeManager`, `TimeoutManager`, and `UsageManager` for better organization.

```diff
 import { DeserializedJson } from "../../schemas/json.js";
 import { ApiClientConfiguration } from "../apiClientManager/types.js";
 import { Clock } from "../clock/clock.js";
 import { RunMetadataManager } from "../runMetadata/types.js";
 import type { RuntimeManager } from "../runtime/manager.js";
+import { WaitUntilManager } from "../waitUntil/types.js";
 import { TaskCatalog } from "../task-catalog/catalog.js";
 import { TaskContext } from "../taskContext/types.js";
 import { TimeoutManager } from "../timeout/types.js";
 import { UsageManager } from "../usage/types.js";
-import { WaitUntilManager } from "../waitUntil/types.js";
 import { _globalThis } from "./platform.js";
```

`63-63`: **LGTM! Consider future refactoring for better organization.**

The addition of the `wait-until` property follows the existing patterns and conventions. However, as the global API grows, consider organizing related properties into namespaced groups for better maintainability. Example future structure:

```typescript
type TriggerDotDevGlobalAPI = {
  core: {
    runtime?: RuntimeManager;
    clock?: Clock;
  };
  tasks: {
    catalog?: TaskCatalog;
    context?: TaskContext;
    timeout?: TimeoutManager;
    waitUntil?: WaitUntilManager;
  };
  // ... other categories
};
```

packages/core/src/v3/runMetadata/index.ts (3)
`60-62`: **Improve the method declaration and add validation**

The `appendKey` method needs improvement:

- Using `return` with a `void` return type is unnecessary
- Missing input validation
- Consider renaming to better reflect the append behavior (e.g., `appendToKey` or `appendValueToKey`)

```diff
-  appendKey(key: string, value: DeserializedJson): void {
-    return this.#getManager().appendKey(key, value);
+  public appendToKey(key: string, value: DeserializedJson): void {
+    if (value === undefined || value === null) {
+      throw new Error("Cannot append undefined or null value");
+    }
+    this.#getManager().appendKey(key, value);
   }
```
🪛 Biome
[error] 61-61: The function should not return a value because its return type is void.
The function is here:
'void' signals the absence of value. The returned value is likely to be ignored by the caller.
(lint/correctness/noVoidTypeReturn)
`68-74`: **Add documentation and type constraints**

The stream method would benefit from:

- JSDoc documentation explaining the streaming behavior
- Type constraints to ensure serializability
- Input validation

```diff
+  /**
+   * Streams values for a given key with support for cancellation.
+   * @param key The metadata key to stream values for
+   * @param value The source AsyncIterable to stream from
+   * @param signal Optional AbortSignal for cancellation
+   * @returns A Promise resolving to an AsyncIterable of streamed values
+   */
   public stream<T extends DeserializedJson>(
     key: string,
     value: AsyncIterable<T>,
     signal?: AbortSignal
   ): Promise<AsyncIterable<T>> {
+    if (!value || typeof value[Symbol.asyncIterator] !== "function") {
+      throw new Error("Value must be a valid AsyncIterable");
+    }
     return this.#getManager().stream(key, value, signal);
   }
```
Line range hint `1-74`: **Consider architectural improvements**

The implementation aligns well with the real-time streams feature, but consider these architectural improvements:
- Add error handling for cases where the global manager is unavailable
- Implement logging for debugging purposes, especially for stream operations
- Consider adding a method to check manager availability
Would you like assistance in implementing these architectural improvements?
🧰 Tools
🪛 Biome

[error] 53-53, 57-57, 61-61, 65-65: The function should not return a value because its return type is void. 'void' signals the absence of value. The returned value is likely to be ignored by the caller. (lint/correctness/noVoidTypeReturn)
apps/webapp/test/realtimeStreams.test.ts (1)
`16-106`: **Consider adding edge case tests for error scenarios.**

While the current test suite covers the happy path and basic error handling well, consider adding tests for:
- Redis connection failures
- Malformed stream data
- Maximum stream size limits
- Stream timeout scenarios
These additional test cases would improve the robustness of the streaming implementation.
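One of the suggested cases — a stream timeout — reduces to a plain promise race, sketched here outside any test framework (the helper and the 50 ms budget are illustrative assumptions, not the suite's actual code):

```typescript
// Race a stream promise against a deadline; rejects with a timeout error
// if the stream does not settle in time.
function withTimeout<T>(promise: Promise<T>, ms: number): Promise<T> {
  return Promise.race([
    promise,
    new Promise<never>((_, reject) =>
      setTimeout(() => reject(new Error("stream timed out")), ms)
    ),
  ]);
}

// Simulated slow stream: resolves after 1 second.
const slowStream = new Promise<string>((resolve) =>
  setTimeout(() => resolve("data"), 1_000)
);
```

A test can then assert that `withTimeout(slowStream, 50)` rejects with the timeout error rather than resolving.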
packages/trigger-sdk/src/v3/metadata.ts (3)
`28-31`: **Update JSDoc for the new metadata methods.**

The metadata object's JSDoc comment should be updated to include documentation for the newly added methods: `stream`, `append`, `increment`, and `decrement`.

`112-118`: **Add JSDoc and improve type safety.**

The increment/decrement functions should:

- Have JSDoc documentation explaining their purpose and parameters
- Validate that the input value is a finite number

```diff
+/**
+ * Increments a numeric metadata value by the specified amount.
+ * @param {string} key - The key of the metadata value to increment
+ * @param {number} value - The amount to increment by
+ */
 function incrementMetadataKey(key: string, value: number) {
+  if (!Number.isFinite(value)) {
+    throw new Error("Increment value must be a finite number");
+  }
   runMetadata.incrementKey(key, value);
 }

+/**
+ * Decrements a numeric metadata value by the specified amount.
+ * @param {string} key - The key of the metadata value to decrement
+ * @param {number} value - The amount to decrement by
+ */
 function decrementMetadataKey(key: string, value: number) {
+  if (!Number.isFinite(value)) {
+    throw new Error("Decrement value must be a finite number");
+  }
   runMetadata.decrementKey(key, value);
 }
```

`120-122`: **Add JSDoc for the appendMetadataKey function.**

Add documentation to explain the purpose and behavior of the append operation.

```diff
+/**
+ * Appends a value to an array in the metadata. If the key doesn't exist, it creates a new array.
+ * @param {string} key - The key of the metadata array
+ * @param {DeserializedJson} value - The value to append
+ */
 function appendMetadataKey(key: string, value: DeserializedJson) {
   runMetadata.appendKey(key, value);
 }
```

packages/core/test/standardMetadataManager.test.ts (3)
`8-10`: **Consider adding type annotations for better type safety**

The test setup looks well structured. Consider adding explicit type annotations:

```diff
-  let metadataUpdates: Array<Record<string, any>> = [];
+  let metadataUpdates: Array<Record<string, unknown>> = [];
```
44-53
: Consider adding edge cases for JSON path handlingWhile the basic JSON path test is good, consider adding test cases for:
- Invalid JSON paths
- Deeply nested paths
- Paths with array indices
- Empty path segments
86-103
: Consider adding tests for array operations in nested objectsThe nested object tests look good. Consider adding test cases for:
- Array operations within nested objects
- Mixed nested arrays and objects
- Deep equality checks for complex nested structures
packages/core/src/v3/utils/ioSerialization.ts (1)
Line range hint `40-228`: **Consider documenting the client parameter usage pattern.**

The introduction of the optional `ApiClient` parameter across these functions represents a significant architectural change in how IO operations are handled. This change appears to support the real-time streams functionality mentioned in the PR objectives by allowing more flexible client management. Consider adding JSDoc comments to document:
- When to provide a custom client vs using the default
- The relationship between these functions and real-time streams
- The client fallback behavior
packages/cli-v3/src/entryPoints/dev-run-worker.ts (1)
`88-91`: **Consider extracting the default URL to a constant**

The initialization looks good, but the hardcoded URL `"https://api.trigger.dev"` should be extracted to a named constant for better maintainability.

```diff
+const DEFAULT_TRIGGER_API_URL = "https://api.trigger.dev";
+
 const runMetadataManager = new StandardMetadataManager(
   apiClientManager.clientOrThrow(),
-  getEnvVar("TRIGGER_STREAM_URL", getEnvVar("TRIGGER_API_URL")) ?? "https://api.trigger.dev"
+  getEnvVar("TRIGGER_STREAM_URL", getEnvVar("TRIGGER_API_URL")) ?? DEFAULT_TRIGGER_API_URL
 );
```

apps/webapp/app/env.server.ts (1)
`35-35`: **Consider enhancing the STREAM_ORIGIN configuration**

While the addition aligns with the real-time streams feature, consider these improvements:

- Add URL format validation to ensure valid stream origins
- Add JSDoc documentation to explain its purpose and expected format
- Consider whether a default value should be provided (like `APP_ORIGIN`)

```diff
+  /** Origin URL for real-time streams functionality
+   * @example "http://localhost:3040"
+   */
-  STREAM_ORIGIN: z.string().optional(),
+  STREAM_ORIGIN: z.string().url("STREAM_ORIGIN must be a valid URL").optional(),
```

apps/webapp/app/v3/services/createBackgroundWorker.server.ts (2)
Line range hint `249-254`: **Consider enhancing error context**

While the error class implementation is correct, consider enhancing it with additional context properties to aid in debugging and error handling:

```diff
 export class CreateDeclarativeScheduleError extends Error {
-  constructor(message: string) {
+  constructor(
+    message: string,
+    public readonly scheduleId?: string,
+    public readonly context?: Record<string, unknown>
+  ) {
     super(message);
     this.name = "CreateDeclarativeScheduleError";
   }
 }
```

Usage example in `syncDeclarativeSchedules`:

```diff
-throw new CreateDeclarativeScheduleError(
-  `Missing instance for declarative schedule ${schedule.id}`
+throw new CreateDeclarativeScheduleError(
+  `Missing instance for declarative schedule ${schedule.id}`,
+  schedule.id,
+  { taskId: task.id, environmentId: environment.id }
 );
```
Line range hint `208-243`: **Consider implementing an error recovery strategy**

The error handling is comprehensive, but consider tracking failed tasks and implementing a retry mechanism for non-critical errors to improve resilience:

```typescript
export async function createBackgroundTasks(
  tasks: TaskResource[],
  worker: BackgroundWorker,
  environment: AuthenticatedEnvironment,
  prisma: PrismaClientOrTransaction,
  tasksToBackgroundFiles?: Map<string, string>
) {
  const failedTasks: Array<{ task: TaskResource; error: Error }> = [];

  for (const task of tasks) {
    try {
      // ... existing task creation code ...
    } catch (error) {
      // keep the existing Prisma/Error logging, and additionally record the failure
      failedTasks.push({ task, error: error as Error });
    }
  }

  if (failedTasks.length > 0) {
    // Implement retry logic or throw an aggregated error
    throw new AggregateError(
      failedTasks.map(({ error }) => error),
      `Failed to create ${failedTasks.length} tasks`
    );
  }
}
```

packages/core/src/v3/workers/taskExecutor.ts (1)
`498-514`: **Add documentation and improve timeout handling**

The implementation looks solid but could benefit from these improvements:

- Add a JSDoc comment explaining the method's purpose and behavior
- Consider making the timeout configurable rather than hard-coded
- Add explicit error handling for timeout scenarios

Consider this enhanced implementation:

```diff
+  /**
+   * Blocks execution until all waitUntil promises are settled or the timeout is reached.
+   * @param timeoutMs Optional timeout in milliseconds. Defaults to 30 seconds.
+   * @throws {TimeoutError} When promises don't settle within the timeout period
+   */
   async #blockForWaitUntil() {
     if (!waitUntil.requiresResolving()) {
       return;
     }

+    const timeoutMs = this._importedConfig?.waitUntilTimeout ?? 30_000;

     return this._tracer.startActiveSpan(
       "waitUntil",
       async (span) => {
+        try {
-          return await waitUntil.blockUntilSettled(30_000);
+          return await waitUntil.blockUntilSettled(timeoutMs);
+        } catch (error) {
+          span.setStatus({ code: SpanStatusCode.ERROR });
+          throw error;
+        }
       },
       {
         attributes: {
           [SemanticInternalAttributes.STYLE_ICON]: "clock",
+          timeoutMs,
         },
       }
     );
   }
```

internal-packages/database/prisma/schema.prisma (1)
`1613-1614`: **LGTM! Consider enhancing documentation.**

The addition of an optional description field to the `BackgroundWorkerTask` model is a good improvement for maintainability and will help developers understand the purpose of each background worker task. Consider adding a comment above the field to document:

- The intended usage of this field
- Any formatting expectations or conventions
- Whether this description is for internal use or exposed in any UI/API

```diff
 model BackgroundWorkerTask {
+  /// Description of the task's purpose and behavior.
+  /// This is displayed in the UI and API responses.
   description String?
```

apps/webapp/app/routes/api.v1.packets.$.ts (1)
`49-51`: **Add validation for the `filename` parameter**

While `ParamsSchema` ensures that `filename` is a string, consider adding additional validation to prevent potential security issues like path traversal or invalid characters. You could enhance validation by enforcing a specific filename pattern or sanitizing the `filename` parameter.

references/nextjs-realtime/src/trigger/ai.ts (6)
`37-37`: **Replace `console.log` with `logger` for consistent logging**

Use the `logger` module instead of `console.log` to maintain consistent logging practices across the application:

```diff
-console.log("Calling getWeather tool with args", { args: part.chunk.args });
+logger.info("Calling getWeather tool with args", { args: part.chunk.args });
```
`45-45`: **Replace `console.log` with `logger` for consistent logging**

Same as above, for the tool result:

```diff
-console.log("Received getWeather tool result", { result: part.chunk.result });
+logger.info("Received getWeather tool result", { result: part.chunk.result });
```
`62-62`: **Use `wait.for` instead of `setTimeout` for consistent waiting**

Replace `await setTimeout(30_000);` with `await wait.for({ seconds: 30 });` to use the `wait` utility consistently across the codebase:

```diff
-await setTimeout(30_000);
+await wait.for({ seconds: 30 });
```
`106-106`: **Use `wait.for` instead of `setTimeout` for consistent waiting**

Replace `await setTimeout(1000);` with `await wait.for({ milliseconds: 1000 });` to use the `wait` utility consistently across the codebase:

```diff
-await setTimeout(1000);
+await wait.for({ milliseconds: 1000 });
```
`21-53`: **Add default cases to switch statements to handle unexpected types**

Consider adding default cases in the switch statements to handle unexpected `part.type` values. This improves robustness and helps in debugging unforeseen issues:

```diff
 switch (part.type) {
   case "run": {
     logger.info("Received run chunk", { run: part.run });
     break;
   }
   case "openai": {
     logger.info("Received OpenAI chunk", { chunk: part.chunk, run: part.run });
     // existing logic
     break;
   }
+  default: {
+    logger.warn("Received unknown part type", { type: part.type });
+    break;
+  }
 }
```

Similarly, consider adding default cases in inner switch statements.
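An alternative to a logging `default` is to make the switch exhaustive at compile time. This sketch uses an invented `Chunk` union for illustration (the real chunk types live in the AI SDK):

```typescript
// Discriminated union standing in for the stream chunk types.
type Chunk =
  | { type: "text-delta"; textDelta: string }
  | { type: "tool-call"; name: string }
  | { type: "tool-result"; result: unknown };

function describe(chunk: Chunk): string {
  switch (chunk.type) {
    case "text-delta":
      return `delta: ${chunk.textDelta}`;
    case "tool-call":
      return `call: ${chunk.name}`;
    case "tool-result":
      return "result received";
    default: {
      // If a new variant is added to Chunk without a matching case,
      // this assignment fails to compile.
      const unreachable: never = chunk;
      throw new Error(`unknown chunk type: ${JSON.stringify(unreachable)}`);
    }
  }
}
```

With this pattern, forgetting a case is a type error rather than a silent fall-through at runtime.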
`29-50`: **Add default cases to inner switch statements to handle unexpected chunk types**

Within the inner switch on `part.chunk.type`, add a default case to handle unexpected chunk types, improving error handling and debugging:

```diff
 switch (part.chunk.type) {
   case "text-delta": {
     openaiCompletion += part.chunk.textDelta;
     break;
   }
   case "tool-call": {
     // existing logic
     break;
   }
   case "tool-result": {
     // existing logic
     break;
   }
+  default: {
+    logger.warn("Received unknown chunk type", { type: part.chunk.type });
+    break;
+  }
 }
```

stream_fetch_test.js (4)
`117-119`: **Uncomment the stream consumption code**

The code to consume the `MetadataStream` is commented out. If you intend to process the stream data, uncomment this section:

```diff
-  // for await (const value of stream) {
-  //   console.log("Consumer received:", value, new Date().toISOString());
-  // }
+  for await (const value of stream) {
+    console.log("Consumer received:", value, new Date().toISOString());
+  }
```
69-71
: Handle potential errors inwait
methodThe
wait
method does not catch errors fromthis.streamPromise
. Ensure that any errors are properly handled to prevent unhandled promise rejections.Modify the
wait
method to handle errors:async wait() { - return this.streamPromise.then(() => void 0); + try { + await this.streamPromise; + } catch (error) { + console.error("Error in wait:", error); + } }
80-91
: Add error handling in the sample data generatorIncluding error handling in the
generateSampleData
function can make your tests more robust by simulating error conditions.Example:
async function* generateSampleData() { const items = [ { type: "start", timestamp: Date.now() }, { type: "progress", value: 25 }, { type: "progress", value: 50 }, { type: "progress", value: 75 }, { type: "complete", timestamp: Date.now() }, + // Uncomment to simulate an error + // null, ]; for (const item of items) { await new Promise((resolve) => setTimeout(resolve, 1000)); + if (item === null) { + throw new Error("Simulated error in data stream"); + } yield item; } }
1-76
: Ensure proper error handling and resource cleanup inMetadataStream
Consider implementing error handling and cleanup mechanisms in the
MetadataStream
class to handle scenarios like stream errors or abrupt terminations.
- Implement event listeners for error and close events.
- Ensure the
AbortController
is used to cancel fetch requests when needed.- Add try-catch blocks where asynchronous operations are used.
packages/core/src/v3/runMetadata/manager.ts (3)
`14-14`: **Avoid using `any` in type parameters for better type safety**

Using `MetadataStream<any>` introduces the `any` type, which can weaken type safety. Consider specifying a more precise type parameter to maintain strict type checks and prevent potential runtime errors.

`36-38`: **Handle an undefined `runId` with an error or warning**

In the `setKey` method, if `runId` is undefined, the function silently returns. This might mask issues where `setKey` is called before `runId` is properly initialized. Consider logging a warning or throwing an error to help with debugging.

`66-68`: **Handle an undefined `runId` with an error or warning**

Similarly, in the `deleteKey` method, silently returning when `runId` is undefined might hide potential issues. Adding a log or error can aid in debugging unexpected behavior.
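The suggested guard can be sketched against a hypothetical manager shape (field names invented for illustration, not the actual class):

```typescript
// Record a warning instead of silently returning when runId is missing.
class MetadataManager {
  runId?: string;
  readonly store = new Map<string, unknown>();
  readonly warnings: string[] = [];

  setKey(key: string, value: unknown): void {
    if (!this.runId) {
      this.warnings.push(`setKey("${key}") called before runId was initialized`);
      return;
    }
    this.store.set(key, value);
  }
}
```

Whether to warn or throw depends on how recoverable the situation is; a warning preserves the current non-fatal behavior while making the misuse observable.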
174-174
: Simplify the default value assignment for_closeOnComplete
You can simplify the assignment of
_closeOnComplete
using the nullish coalescing operator??
for better readability.Suggested change:
- this._closeOnComplete = - typeof options.closeOnComplete === "undefined" ? true : options.closeOnComplete; + this._closeOnComplete = options.closeOnComplete ?? true;packages/cli-v3/src/entryPoints/deploy-run-worker.ts (1)
`106-109`: **Simplify environment variable retrieval logic for clarity**

The current nested `getEnvVar` calls can be simplified for better readability and to avoid potential confusion. Using the nullish coalescing operator directly makes the intention clearer:

```diff
 const runMetadataManager = new StandardMetadataManager(
   apiClientManager.clientOrThrow(),
-  getEnvVar("TRIGGER_STREAM_URL", getEnvVar("TRIGGER_API_URL")) ?? "https://api.trigger.dev"
+  getEnvVar("TRIGGER_STREAM_URL") ?? getEnvVar("TRIGGER_API_URL") ?? "https://api.trigger.dev"
 );
```

packages/core/test/runStream.test.ts (2)
575-581: Avoid exporting functions from test files

Exporting the `convertAsyncIterableToArray` function from a test file is discouraged since test files should remain self-contained. If this function is only used within this test file, consider removing the `export` keyword. If it's needed elsewhere, consider moving it to a shared utility module.

Apply this diff to remove the export:

-export async function convertAsyncIterableToArray<T>(iterable: AsyncIterable<T>): Promise<T[]> {
+async function convertAsyncIterableToArray<T>(iterable: AsyncIterable<T>): Promise<T[]> {
🪛 Biome
[error] 574-581: Do not export from a test file.
(lint/suspicious/noExportsInTest)
75-81: Refactor async `setInterval` to prevent overlapping executions

Using an `async` function inside `setInterval` can lead to overlapping executions if the asynchronous operation takes longer than the interval period. This may cause race conditions or unexpected behavior in your tests.

Consider refactoring by using `setTimeout` to schedule the next call only after the async operation completes. Here's a suggested modification:

async onShape(callback: (shape: SubscribeRunRawShape) => Promise<void>): Promise<() => void> {
  // Only emit the first shape immediately
  if (this.shapes.length > 0) {
    await callback(this.shapes[this.currentShapeIndex++]!);
  }

  const emitNextShape = async () => {
    if (this.unsubscribed || this.currentShapeIndex >= this.shapes.length) {
      return;
    }

    await callback(this.shapes[this.currentShapeIndex++]!);
    setTimeout(emitNextShape, 100);
  };

  emitNextShape();

  return () => {
    this.unsubscribed = true;
  };
}

packages/core/src/v3/apiClient/index.ts (2)
Line range hint 631-638: Enhancement: Define a stricter schema for the `generateJWTClaims` response

Currently, the `generateJWTClaims` method uses `z.record(z.any())` as the response schema, which is very permissive. To improve type safety and validation, consider defining a more specific Zod schema that accurately reflects the expected structure of the JWT claims returned by the API. This will help in catching unexpected data shapes and enhance code reliability.

Apply this diff to define a stricter schema:

+const JWTClaimsResponse = z.object({
+  // Define the expected structure of the JWT claims here
+  // For example:
+  exp: z.number(),
+  iat: z.number(),
+  scopes: z.array(z.string()),
+  // Add other expected properties
+});

 async generateJWTClaims(requestOptions?: ZodFetchOptions): Promise<Record<string, any>> {
   return zodfetch(
-    z.record(z.any()),
+    JWTClaimsResponse,
     `${this.baseUrl}/api/v1/auth/jwt/claims`,
     {
       method: "POST",
       headers: this.#getHeaders(false),
     },
     mergeRequestOptions(this.defaultRequestOptions, requestOptions)
   );
 }
Line range hint 601-628: Suggestion: Refactor subscription methods to reduce code duplication

The methods `subscribeToRun`, `subscribeToRunsWithTag`, and `subscribeToBatch` have similar implementations that call `runShapeStream` with slightly different parameters. To adhere to the DRY (Don't Repeat Yourself) principle and improve maintainability, consider abstracting the common logic into a private helper method or refactoring these methods to reduce duplication.

For example, you could create a private method `_subscribeToStream`:

private _subscribeToStream<TRunTypes extends AnyRunTypes>(
  url: string,
  options: { closeOnComplete: boolean }
) {
  return runShapeStream<TRunTypes>(url, {
    ...options,
    headers: this.#getRealtimeHeaders(),
    client: this,
  });
}

subscribeToRun<TRunTypes extends AnyRunTypes>(runId: string) {
  return this._subscribeToStream<TRunTypes>(
    `${this.baseUrl}/realtime/v1/runs/${runId}`,
    { closeOnComplete: true }
  );
}

subscribeToRunsWithTag<TRunTypes extends AnyRunTypes>(tag: string | string[]) {
  const searchParams = createSearchQueryForSubscribeToRuns({ tags: tag });
  const url = `${this.baseUrl}/realtime/v1/runs${searchParams ? `?${searchParams}` : ""}`;
  return this._subscribeToStream<TRunTypes>(url, { closeOnComplete: false });
}

subscribeToBatch<TRunTypes extends AnyRunTypes>(batchId: string) {
  return this._subscribeToStream<TRunTypes>(
    `${this.baseUrl}/realtime/v1/batches/${batchId}`,
    { closeOnComplete: false }
  );
}

packages/core/src/v3/types/tasks.ts (6)
156-157: Add documentation for the `description` property in `CommonTaskOptions`

The `description` property has been added to the `CommonTaskOptions` type but lacks a JSDoc comment. Including documentation will improve code readability and help developers understand its purpose and usage.
431-432: Include JSDoc comments for the `description` property in the `Task` interface

Similar to the previous comment, the `description` property in the `Task` interface is missing documentation. Adding a JSDoc comment will enhance clarity and maintain consistency across the codebase.
514-514: Document the `description` property within `ToolTask.tool`

The optional `description` field inside the `tool` property of the `ToolTask` interface is currently undocumented. Providing a JSDoc comment will help other developers understand its role and how to use it effectively.
345-353: Ensure correct type inference in `TaskWithToolOptions`

In the `TaskWithToolOptions` type definition, verify that `inferToolParameters<TParameters>` accurately infers the payload type from `TParameters`. This ensures type safety and that the task behaves as expected when used with tool parameters.
499-506: Add documentation for the `TaskWithSchema` interface

The `TaskWithSchema` interface extends `Task` and introduces a `schema` property. Including JSDoc comments for this interface and the `schema` property will improve maintainability and assist developers in understanding how to implement and use this interface properly.
507-518: Provide documentation for the `ToolTask` interface and its properties

The `ToolTask` interface introduces the `tool` property with `parameters`, `description`, and an `execute` method. Adding JSDoc comments to the `ToolTask` interface and its nested properties will enhance code clarity and assist developers in correctly utilizing this interface.

apps/webapp/app/services/realtimeStreams.server.ts (1)
62-64: Use `logger.error` instead of `console.error` for consistent logging

To maintain consistent logging practices across the application, replace `console.error` with `logger.error`.

Apply this diff:

- console.error("Error reading from Redis stream:", error);
+ logger.error("Error reading from Redis stream:", { error });
...
- console.error("Fatal error in stream processing:", error);
+ logger.error("Fatal error in stream processing:", { error });
...
- console.error("Error in ingestData:", error);
+ logger.error("Error in ingestData:", { error });

Also applies to: 69-70, 166-166
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
⛔ Files ignored due to path filters (1)
pnpm-lock.yaml is excluded by `!**/pnpm-lock.yaml`
📒 Files selected for processing (58)
- .vscode/launch.json (1 hunks)
- apps/webapp/app/env.server.ts (1 hunks)
- apps/webapp/app/routes/api.v1.packets.$.ts (2 hunks)
- apps/webapp/app/routes/realtime.v1.streams.$runId.$streamId.ts (1 hunks)
- apps/webapp/app/routes/realtime.v1.streams.test.ts (1 hunks)
- apps/webapp/app/services/httpAsyncStorage.server.ts (1 hunks)
- apps/webapp/app/services/realtimeStreams.server.ts (1 hunks)
- apps/webapp/app/services/realtimeStreamsGlobal.server.ts (1 hunks)
- apps/webapp/app/v3/environmentVariables/environmentVariablesRepository.server.ts (1 hunks)
- apps/webapp/app/v3/services/createBackgroundWorker.server.ts (1 hunks)
- apps/webapp/server.ts (2 hunks)
- apps/webapp/test/realtimeStreams.test.ts (1 hunks)
- apps/webapp/test/utils/streams.ts (1 hunks)
- internal-packages/database/prisma/migrations/20241112111156_add_description_to_background_worker_task/migration.sql (1 hunks)
- internal-packages/database/prisma/schema.prisma (1 hunks)
- packages/cli-v3/src/entryPoints/deploy-run-worker.ts (4 hunks)
- packages/cli-v3/src/entryPoints/dev-run-worker.ts (4 hunks)
- packages/core/package.json (1 hunks)
- packages/core/src/v3/apiClient/index.ts (3 hunks)
- packages/core/src/v3/apiClient/runStream.ts (5 hunks)
- packages/core/src/v3/index.ts (1 hunks)
- packages/core/src/v3/runMetadata/index.ts (1 hunks)
- packages/core/src/v3/runMetadata/manager.ts (6 hunks)
- packages/core/src/v3/runMetadata/metadataStream.ts (1 hunks)
- packages/core/src/v3/runMetadata/noopManager.ts (1 hunks)
- packages/core/src/v3/runMetadata/types.ts (1 hunks)
- packages/core/src/v3/runtime/devRuntimeManager.ts (1 hunks)
- packages/core/src/v3/schemas/resources.ts (1 hunks)
- packages/core/src/v3/schemas/schemas.ts (1 hunks)
- packages/core/src/v3/types/index.ts (1 hunks)
- packages/core/src/v3/types/tasks.ts (5 hunks)
- packages/core/src/v3/types/tools.ts (1 hunks)
- packages/core/src/v3/utils/getEnv.ts (1 hunks)
- packages/core/src/v3/utils/globals.ts (2 hunks)
- packages/core/src/v3/utils/ioSerialization.ts (4 hunks)
- packages/core/src/v3/wait-until-api.ts (1 hunks)
- packages/core/src/v3/waitUntil/index.ts (1 hunks)
- packages/core/src/v3/waitUntil/manager.ts (1 hunks)
- packages/core/src/v3/waitUntil/types.ts (1 hunks)
- packages/core/src/v3/workers/index.ts (1 hunks)
- packages/core/src/v3/workers/taskExecutor.ts (3 hunks)
- packages/core/test/runStream.test.ts (1 hunks)
- packages/core/test/standardMetadataManager.test.ts (1 hunks)
- packages/trigger-sdk/package.json (2 hunks)
- packages/trigger-sdk/src/v3/index.ts (1 hunks)
- packages/trigger-sdk/src/v3/metadata.ts (3 hunks)
- packages/trigger-sdk/src/v3/shared.ts (8 hunks)
- packages/trigger-sdk/src/v3/tasks.ts (2 hunks)
- packages/trigger-sdk/src/v3/waitUntil.ts (1 hunks)
- references/nextjs-realtime/next.config.mjs (2 hunks)
- references/nextjs-realtime/package.json (2 hunks)
- references/nextjs-realtime/src/app/runs/[id]/ClientRunDetails.tsx (1 hunks)
- references/nextjs-realtime/src/components/RunDetails.tsx (2 hunks)
- references/nextjs-realtime/src/trigger/ai.ts (1 hunks)
- references/nextjs-realtime/src/trigger/example.ts (1 hunks)
- stream_fetch_test.js (1 hunks)
- stream_test.js (1 hunks)
- stream_test.sh (1 hunks)
✅ Files skipped from review due to trivial changes (3)
- internal-packages/database/prisma/migrations/20241112111156_add_description_to_background_worker_task/migration.sql
- packages/core/src/v3/runtime/devRuntimeManager.ts
- packages/core/src/v3/wait-until-api.ts
🧰 Additional context used
🪛 Biome
packages/core/src/v3/runMetadata/index.ts
[error] 53-53: The function should not return a value because its return type is void.
The function is here:
'void' signals the absence of value. The returned value is likely to be ignored by the caller.
(lint/correctness/noVoidTypeReturn)
[error] 57-57: The function should not return a value because its return type is void.
The function is here:
'void' signals the absence of value. The returned value is likely to be ignored by the caller.
(lint/correctness/noVoidTypeReturn)
[error] 61-61: The function should not return a value because its return type is void.
The function is here:
'void' signals the absence of value. The returned value is likely to be ignored by the caller.
(lint/correctness/noVoidTypeReturn)
[error] 65-65: The function should not return a value because its return type is void.
The function is here:
'void' signals the absence of value. The returned value is likely to be ignored by the caller.
(lint/correctness/noVoidTypeReturn)
packages/core/src/v3/runMetadata/metadataStream.ts
[error] 17-17: void is confusing inside a union type.
Unsafe fix: Use undefined instead.
(lint/suspicious/noConfusingVoidType)
[error] 45-45: void is confusing inside a union type.
Unsafe fix: Use undefined instead.
(lint/suspicious/noConfusingVoidType)
packages/core/src/v3/waitUntil/index.ts
[error] 28-28: Using this in a static context can be confusing.
this refers to the class.
Unsafe fix: Use the class name instead.
(lint/complexity/noThisInStatic)
[error] 29-29: Using this in a static context can be confusing.
this refers to the class.
Unsafe fix: Use the class name instead.
(lint/complexity/noThisInStatic)
[error] 32-32: Using this in a static context can be confusing.
this refers to the class.
Unsafe fix: Use the class name instead.
(lint/complexity/noThisInStatic)
[error] 44-44: The function should not return a value because its return type is void.
The function is here:
'void' signals the absence of value. The returned value is likely to be ignored by the caller.
(lint/correctness/noVoidTypeReturn)
packages/core/test/runStream.test.ts
[error] 574-581: Do not export from a test file.
(lint/suspicious/noExportsInTest)
🪛 Shellcheck
stream_test.sh
[error] 1-1: ShellCheck only supports sh/bash/dash/ksh scripts. Sorry!
(SC1071)
🔇 Additional comments (52)
packages/core/src/v3/waitUntil/types.ts (1)
6-10
: 🛠️ Refactor suggestion
Add JSDoc documentation and consider timeout validation
The interface would benefit from documentation explaining its purpose and usage. Additionally, the blockUntilSettled
method should validate the timeout parameter.
+/**
+ * Manages a collection of promises that may need to be resolved before proceeding.
+ * Used to extend the lifetime of task runs until specified promises settle.
+ */
export interface WaitUntilManager {
+ /** Registers a promise for tracking */
register(promise: MaybeDeferredPromise): void;
+ /**
+ * Blocks execution until all registered promises are settled or timeout is reached
+ * @param timeout Maximum time to wait in milliseconds
+ * @throws If timeout is less than or equal to 0
+ */
blockUntilSettled(timeout: number): Promise<void>;
+ /** Returns true if any registered promises require resolution */
requiresResolving(): boolean;
}
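The timeout validation called out in the documentation above could look like this minimal sketch. The class and the simplified `MaybeDeferredPromise` type are illustrative assumptions, not the SDK's actual implementation:

```typescript
// Sketch: reject non-positive timeouts, then race registered promises
// against a timer. `MaybeDeferredPromise` is simplified for illustration.
type MaybeDeferredPromise = Promise<unknown> | (() => Promise<unknown>);

class SketchWaitUntilManager {
  private promises: MaybeDeferredPromise[] = [];

  register(promise: MaybeDeferredPromise): void {
    this.promises.push(promise);
  }

  requiresResolving(): boolean {
    return this.promises.length > 0;
  }

  async blockUntilSettled(timeout: number): Promise<void> {
    if (timeout <= 0) {
      throw new Error(`timeout must be positive, got ${timeout}`);
    }
    const settled = Promise.allSettled(
      this.promises.map((p) => (typeof p === "function" ? p() : p))
    );
    // Resolve when everything settles or when the timeout elapses.
    await Promise.race([
      settled,
      new Promise<void>((resolve) => setTimeout(resolve, timeout)),
    ]);
  }
}
```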
Let's verify if there are any existing timeout validations in the implementation:
apps/webapp/app/services/httpAsyncStorage.server.ts (1)
7-7
: LGTM! Verify impact of the required method
property.
The addition of the method
property to HttpLocalStorage
is a good enhancement for better HTTP context tracking.
Let's verify that all usages of HttpLocalStorage
have been updated to include the new required property:
✅ Verification successful
All runWithHttpContext calls include the 'method' property.
The verification confirms that all active runWithHttpContext
calls have been updated to include the method
property.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Find all locations where HttpLocalStorage type is used or where runWithHttpContext is called
# to ensure the method property is properly provided
# Search for HttpLocalStorage type usage
echo "Checking HttpLocalStorage type usage:"
rg "HttpLocalStorage" -A 5
# Search for runWithHttpContext calls
echo -e "\nChecking runWithHttpContext calls:"
rg "runWithHttpContext" -A 5
Length of output: 3405
Script:
#!/bin/bash
# Description: Verify that all active runWithHttpContext calls include the 'method' property
echo "Checking for active runWithHttpContext calls without the 'method' property:"
# Find all runWithHttpContext calls that are not commented out
rg "runWithHttpContext\s*\(\s*{" --glob "!*.ts" --no-filename | grep -v "^//" | grep -Pv 'method\s*:'
# If no output, all calls include 'method'
if [ $? -eq 1 ]; then
echo "All active runWithHttpContext calls include the 'method' property."
else
echo "Some runWithHttpContext calls are missing the 'method' property."
fi
Length of output: 555
packages/trigger-sdk/src/v3/waitUntil.ts (1)
11-13
: Verify error propagation behavior
Let's verify how errors are propagated through the core_waitUntil.register
function to ensure proper error handling.
apps/webapp/app/services/realtimeStreamsGlobal.server.ts (1)
19-19
: LGTM: Singleton pattern implementation
The singleton pattern is correctly implemented using the utility function, ensuring only one instance of RealtimeStreams is created.
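The singleton-utility pattern referenced above can be sketched as a global registry keyed by name — a simplified illustration, not the webapp's actual helper:

```typescript
// Sketch: return a cached instance per name, constructing it only once.
const registry = new Map<string, unknown>();

function singleton<T>(name: string, factory: () => T): T {
  if (!registry.has(name)) {
    registry.set(name, factory());
  }
  return registry.get(name) as T;
}
```

Subsequent calls with the same name return the same instance, which is what guarantees a single `RealtimeStreams` per process.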
packages/core/src/v3/utils/getEnv.ts (2)
1-1
: LGTM! Well-structured environment variable handling.
The changes to getEnvVar
are well-implemented:
- Proper TypeScript typing with optional parameter
- Environment-agnostic implementation
- Consistent fallback behavior
Also applies to: 4-4, 7-7
Line range hint 10-24
: Verify the behavior with the updated getEnvVar function.
The getNumberEnvVar
function's implementation remains robust with the updated getEnvVar
. However, let's verify its behavior across different environments and edge cases.
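The edge-case handling being verified can be sketched roughly as follows; the real `getNumberEnvVar` may differ in detail:

```typescript
// Sketch of the fallback behavior: missing, empty, and non-numeric values
// all fall back to the supplied default.
function getNumberEnvVar(name: string, defaultValue?: number): number | undefined {
  const rawValue = process.env[name];
  if (rawValue === undefined || rawValue === "") {
    return defaultValue;
  }
  const parsed = Number(rawValue);
  return Number.isNaN(parsed) ? defaultValue : parsed;
}
```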
✅ Verification successful
Verification Successful: All usages of getNumberEnvVar
appropriately handle default values and edge cases. No additional error handling found necessary.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify usage patterns of getNumberEnvVar across the codebase
# to ensure it handles all edge cases with the updated getEnvVar.
# Search for all usages of getNumberEnvVar
echo "Searching for getNumberEnvVar usage patterns..."
rg "getNumberEnvVar\(" -A 2 -B 2
# Look for any error handling or type checking around these calls
echo "Checking error handling patterns..."
ast-grep --pattern 'try {
$$$
getNumberEnvVar($$$)
$$$
} catch'
Length of output: 1436
references/nextjs-realtime/next.config.mjs (1)
2-2
: LGTM: Clean import statement
The import statement follows standard ES module syntax and is appropriately placed at the top of the file.
references/nextjs-realtime/src/trigger/example.ts (1)
26-29
: 🛠️ Refactor suggestion
Consider optimizing the large payload generation
While this large payload might be useful for testing real-time streams, there are several concerns to address:
- Memory Usage: Creating a 1MB string in memory at once could be inefficient.
- Example Code: This might not be the best practice to demonstrate in example code.
- Performance: Large payloads could impact streaming performance.
Consider these alternatives:
- // Generate a return payload that is more than 128KB
- const bigPayload = new Array(100000).fill("a".repeat(10)).join("");
-
- return { message: bigPayload };
+ // Generate a more reasonable payload for example purposes
+ const payload = {
+ message: "Example payload",
+ timestamp: Date.now(),
+ // For testing large payloads, consider using a smaller size
+ // or implementing a streaming generator function
+ data: "a".repeat(1000) // 1KB is sufficient for examples
+ };
+ return payload;
For testing large payloads:
- Consider implementing a streaming generator function that yields chunks of data
- Add configuration options to control payload size
- Document the intended use case and performance implications
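The streaming-generator alternative suggested above can be sketched like this; the names are illustrative and not part of the actual example task:

```typescript
// Sketch: yield the payload in slices rather than allocating one large
// string up front, so at most `chunkSize` bytes are built at a time.
function* chunkedPayload(totalBytes: number, chunkSize = 1024): Generator<string> {
  let emitted = 0;
  while (emitted < totalBytes) {
    const size = Math.min(chunkSize, totalBytes - emitted);
    yield "a".repeat(size);
    emitted += size;
  }
}

// Consumers can stream the chunks instead of holding the whole payload.
function totalLength(totalBytes: number): number {
  let length = 0;
  for (const chunk of chunkedPayload(totalBytes)) {
    length += chunk.length;
  }
  return length;
}
```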
Let's check if this large payload might cause issues with any size limits in the codebase:
✅ Verification successful
The payload size is within the allowed limit
The generated payload (~1MB) does not exceed the TASK_PAYLOAD_MAXIMUM_SIZE
of 3MB. No optimization is necessary.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for any payload size limits or configurations
rg -i "size.*limit|max.*size|payload.*limit" --type ts
Length of output: 11624
packages/core/src/v3/workers/index.ts (1)
18-18
: LGTM! The export addition is well-structured.
The new export follows the established patterns in the barrel file and aligns with the PR's real-time streams functionality.
Let's verify the StandardWaitUntilManager implementation:
✅ Verification successful
Verification Successful.
The tools.ts file exists and contains the necessary type definitions for the toolTask feature.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify the tools.js file exists and examine its contents
# Expected: Find type definitions related to toolTask feature
# Check if the tools.js file exists in the same directory
fd -e ts -e js "tools" "packages/core/src/v3/types"
# Examine the contents if found
if [ $? -eq 0 ]; then
echo "File contents:"
cat $(fd -e ts -e js "tools" "packages/core/src/v3/types")
fi
Length of output: 1192
packages/core/src/v3/waitUntil/manager.ts (1)
3-34
: Verify integration with real-time streams feature
Since this PR introduces real-time streams support, let's verify how this class integrates with that feature.
references/nextjs-realtime/package.json (1)
13-13
: Verify AI-related dependencies versions
The AI-related dependencies might need updates:
@ai-sdk/openai
is at a very early version (0.0.72)openai
package has newer versions available
Let's check for the latest versions and any known vulnerabilities:
Also applies to: 21-21, 26-26
✅ Verification successful
All dependencies are up-to-date and compatible.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check latest versions and security advisories for AI packages
# Check latest versions from npm
echo "Latest versions from npm:"
for pkg in "@ai-sdk/openai" "ai" "openai"; do
echo -n "$pkg: "
npm view $pkg version
done
# Check for known vulnerabilities
echo -e "\nChecking for vulnerabilities:"
for pkg in "@ai-sdk/openai" "ai" "openai"; do
echo "=== $pkg ==="
gh api graphql -f query='
{
securityVulnerabilities(first: 5, ecosystem: NPM, package: "'$pkg'") {
nodes {
advisory {
summary
severity
publishedAt
}
vulnerableVersionRange
firstPatchedVersion {
identifier
}
}
}
}'
done
Length of output: 1834
Script:
#!/bin/bash
# Description: Verify compatibility between Next.js and @next/bundle-analyzer versions
echo "Next.js version:"
npm view next version
echo "Current @next/bundle-analyzer version:"
npm view @next/bundle-analyzer version
echo "Latest compatible @next/bundle-analyzer versions for Next.js 14.x:"
npm view @next/bundle-analyzer versions --json | jq '.[] | select(. | startswith("14."))'
Length of output: 8459
packages/trigger-sdk/src/v3/tasks.ts (1)
6-6
: LGTM!
The import of createToolTask
is correctly placed and maintains the alphabetical order.
apps/webapp/app/routes/realtime.v1.streams.test.ts (1)
1-1
: Verify if this is a test file or implementation file.
The filename ends with .test.ts
suggesting this is a test file, but it contains implementation code. Consider renaming the file to better reflect its purpose.
packages/core/src/v3/runMetadata/noopManager.ts (2)
6-14
: LGTM: Numeric operation methods properly implemented
The new metadata operation methods (appendKey
, incrementKey
, decrementKey
) are correctly implemented following the no-op pattern, maintaining consistency with the rest of the class.
15-17
: LGTM: Stream method aligns with real-time functionality
The stream
method is properly implemented with correct TypeScript generics and async return type, supporting the PR's objective of adding real-time stream functionality.
references/nextjs-realtime/src/app/runs/[id]/ClientRunDetails.tsx (2)
5-5
: LGTM! Import statement updated for real-time functionality.
The import change aligns with the PR's objective to support real-time streaming functionality.
9-9
: Verify error handling in real-time implementation.
The switch from polling-based useRun
to real-time useRealtimeRun
looks good and should improve UI responsiveness. However, let's verify the error handling implementation in the real-time hook.
✅ Verification successful
Error handling in the real-time implementation is properly managed via ErrorBoundaries.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify error handling implementation in useRealtimeRun hook
# Search for error handling in the hook implementation
ast-grep --pattern 'export function useRealtimeRun($_) {
$$$
catch($_) {
$$$
}
$$$
}'
# Search for error handling tests
rg -l "useRealtimeRun.*error" -g "*.test.*"
rg -l "useRealtimeRun.*throw" -g "*.test.*"
Length of output: 191
Script:
#!/bin/bash
# Description: Further verify error handling in useRealtimeRun hook and its usage
# Search for try-catch blocks with different structures in useRealtimeRun
ast-grep --pattern 'export function useRealtimeRun($_) {
$$$
try {
$$$
} catch($_) {
$$$
}
$$$
}'
# Search for error state handling in components using useRealtimeRun
rg -A 5 'useRealtimeRun\(.*\).*(error|err)' -g "*.tsx"
# Search for error boundary implementations that might handle errors from useRealtimeRun
rg -l 'ErrorBoundary' -g "*.tsx"
Length of output: 502
packages/core/src/v3/index.ts (1)
15-15
: LGTM! The export follows established patterns.
The new export follows the consistent naming convention and export style used throughout the file.
Let's verify the exported module exists and contains the expected functionality:
✅ Verification successful
Verified: wait-until-api.ts
exports WaitUntilAPI
correctly.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify the wait-until-api.js module exists and exports WaitUntilAPI
# Check if the file exists
fd -e ts -e js "wait-until-api" packages/core/src/v3
# Check the exports
ast-grep --pattern 'export class WaitUntilAPI {
$$$
}'
# Check for any references to this API in the codebase
rg -l "WaitUntilAPI" packages/
Length of output: 260
packages/trigger-sdk/package.json (2)
75-79
: Consider updating the zod devDependency version.
While moving zod to peerDependencies with ^3.0.0
is good practice, the devDependency version 3.22.3
seems outdated. Consider:
- Updating the devDependency to a newer version
- Aligning the devDependency version with the minimum peer dependency version requirement
✅ Verification successful
zod devDependency is up-to-date
The current devDependency version 3.22.3
addresses the reported security vulnerability and aligns with the peerDependency range ^3.0.0
.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check latest zod versions and potential breaking changes
# Get the latest version
curl -s https://registry.npmjs.org/zod | jq '.["dist-tags"].latest'
# Check for any security advisories
gh api graphql -f query='
{
securityVulnerabilities(first: 5, ecosystem: NPM, package: "zod") {
nodes {
advisory {
summary
severity
publishedAt
}
vulnerableVersionRange
firstPatchedVersion {
identifier
}
}
}
}'
Length of output: 619
69-69
: LGTM! Verify AI package compatibility.
The addition of the ai
package aligns with the PR's objective to support AI tool integration. The caret version ^3.4.33
allows for minor updates while maintaining compatibility.
✅ Verification successful
LGTM! AI package compatibility and zod versioning verified.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify ai package compatibility with other dependencies
# Check for any known compatibility issues or breaking changes
# Get the latest version and release info
curl -s https://registry.npmjs.org/ai | jq '.["dist-tags"].latest, .versions[.["dist-tags"].latest].date'
# Check for any deprecation notices
npm view ai deprecated
Length of output: 146
Script:
#!/bin/bash
# Description: Extract zod versions from devDependencies and peerDependencies
# Get zod version in devDependencies
jq '.devDependencies.zod' packages/trigger-sdk/package.json
# Get zod version in peerDependencies
jq '.peerDependencies.zod' packages/trigger-sdk/package.json
# Check for multiple zod versions in the dependency tree
fd zod | xargs -n1 dirname | xargs -I{} sh -c 'npm ls zod --prefix "{}" --depth=0'
Length of output: 597
.vscode/launch.json (1)
16-24
: LGTM! Configuration follows VSCode debugging best practices.
The new debug configuration for RealtimeStreams tests is well-structured and consistent with existing configurations. It correctly sets up the environment for debugging the new real-time streaming functionality.
Let's verify the test command works as expected:
✅ Verification successful
Verified: Test configuration is correctly set up and functional.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify the test command and file existence
# Check if the test file exists
fd "realtimeStreams.test.ts" "apps/webapp"
# Check if the test command is properly configured in package.json
rg "\"test\":.+\"" "apps/webapp/package.json"
Length of output: 167
apps/webapp/test/realtimeStreams.test.ts (3)
1-14
: LGTM! Test configuration is well structured.
The test setup includes appropriate timeout settings for async Redis operations and proper logger mocking. The imports are well organized and include all necessary testing utilities.
41-73
: LGTM! Excellent concurrent streams implementation.
The test effectively validates multiple concurrent streams using Promise.all for both ingestion and verification. The approach ensures proper handling of parallel operations while maintaining data integrity.
17-39
: LGTM! Basic streaming test is well implemented.
The test effectively validates the end-to-end flow from producer to consumer. Consider verifying that the stream is properly closed after completion.
packages/trigger-sdk/src/v3/metadata.ts (1)
Line range hint 28-149
: Overall implementation looks good!
The new metadata operations are well-integrated and support the real-time streaming functionality effectively. The implementation is clean, focused, and maintains consistency with the existing codebase. The suggested improvements around documentation and error handling will help ensure robust usage of these new features.
references/nextjs-realtime/src/components/RunDetails.tsx (2)
5-5
: LGTM! Type import aligns with real-time streaming support.
The switch to TaskRunShape
from the SDK v3 aligns well with the PR's objective of adding real-time streams support.
20-20
: Verify field compatibility with TaskRunShape.
The component appears to access various fields from the record prop (id, status, tags, timing fields, etc.). While the change to TaskRunShape
looks good, we should verify that all accessed fields are guaranteed to exist in the new type.
packages/core/test/standardMetadataManager.test.ts (1)
105-178
: 🛠️ Refactor suggestion
Add boundary tests for array operations
While the append tests are comprehensive, consider adding:
- Tests for maximum array size limits to prevent memory issues
- Tests for appending complex objects
- Tests for appending null/undefined values
- Performance tests for large arrays
Also, verify if there should be a maximum array size limit in the implementation to prevent memory issues in production.
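What such a limit could look like is sketched below; the constant and function shape are assumptions for illustration, not the actual manager's implementation:

```typescript
// Hypothetical bounded append: reject growth past a configured limit instead
// of letting metadata arrays grow without bound.
const MAX_ARRAY_LENGTH = 1000; // illustrative limit

function appendKey(store: Record<string, unknown[]>, key: string, value: unknown): void {
  const current = store[key] ?? [];
  if (current.length >= MAX_ARRAY_LENGTH) {
    throw new Error(`metadata array "${key}" exceeds ${MAX_ARRAY_LENGTH} entries`);
  }
  store[key] = [...current, value];
}
```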
packages/core/src/v3/schemas/schemas.ts (1)
152-152
: LGTM! Consider adding JSDoc documentation.
The addition of the optional description
field maintains backward compatibility and aligns with the schema's versioning guidelines. This change supports the PR's objective of enhancing AI tool integration by allowing descriptive metadata for tasks.
Consider adding JSDoc documentation to describe the purpose and usage of the description
field:
const taskMetadata = {
id: z.string(),
+ /** Optional description of the task, useful for AI tools and documentation */
description: z.string().optional(),
queue: QueueOptions.optional(),
Let's verify the impact on dependent schemas:
✅ Verification successful
Verified! The addition of the optional description
field is consistent across the codebase and maintains backward compatibility. No issues were found with dependent schemas.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for any direct imports or usage of TaskMetadata or TaskManifest schemas
# that might be affected by the new description field
# Search for imports and usage
rg "TaskMetadata|TaskManifest" --type ts
# Search for any existing description field usage
rg "description.*string.*optional" --type ts
Length of output: 9892
apps/webapp/server.ts (2)
132-132
:
Security concern: Rate limiting disabled.
Removing rate limiting exposes the API to potential abuse. Consider:
- Implementing separate rate limiting for streaming vs non-streaming endpoints
- Using a token bucket algorithm for streaming endpoints
- Adding request size/frequency limits specific to streaming
#!/bin/bash
# Check for other rate limiting implementations
rg -l 'RateLimit' --type ts
# Look for other security middleware
ast-grep --pattern 'app.use($middleware)'
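The token-bucket algorithm suggested above for streaming endpoints can be sketched as follows; the class and its numbers are illustrative assumptions, not existing middleware:

```typescript
// Sketch: a per-client bucket that refills continuously and rejects
// requests once its tokens are exhausted.
class TokenBucket {
  private tokens: number;
  private lastRefill: number;

  constructor(
    private capacity: number,
    private refillPerSecond: number,
    now: number = Date.now()
  ) {
    this.tokens = capacity;
    this.lastRefill = now;
  }

  tryRemove(count = 1, now: number = Date.now()): boolean {
    const elapsedSeconds = (now - this.lastRefill) / 1000;
    this.tokens = Math.min(this.capacity, this.tokens + elapsedSeconds * this.refillPerSecond);
    this.lastRefill = now;
    if (this.tokens < count) {
      return false;
    }
    this.tokens -= count;
    return true;
  }
}
```

A middleware would keep one bucket per API key or IP and reject (or delay) requests when `tryRemove` returns false.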
124-129
: Verify impact of removing request ID tracking.
Removing request ID generation affects observability and request tracing. If this is intentional:
- Document why request IDs are no longer needed
- Ensure this doesn't break existing logging/monitoring
- Consider keeping request IDs for non-streaming routes
#!/bin/bash
# Check for other uses of requestId in the codebase
rg "requestId" --type ts
# Check for logging/monitoring dependencies on request context
ast-grep --pattern 'runWithHttpContext($args)'
packages/core/src/v3/utils/ioSerialization.ts (2)
10-10: LGTM: Import addition is appropriate.
The import of `ApiClient` is necessary for the new optional client parameter type and follows the project's import conventions.
40-46: LGTM: Clean implementation of client parameter propagation.
The optional client parameter is correctly propagated while maintaining backward compatibility.
Let's verify the usage of this function to ensure all callers are updated appropriately:
✅ Verification successful
Verified: Client parameter is correctly utilized in all existing usages.
No issues found with the propagation of the optional client parameter.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for all calls to conditionallyImportAndParsePacket
ast-grep --pattern 'conditionallyImportAndParsePacket($$$)'
Length of output: 202
packages/cli-v3/src/entryPoints/dev-run-worker.ts (1)
18-19: LGTM: New imports support streaming functionality.
The addition of the `waitUntil` and `apiClientManager` imports is appropriate for implementing the real-time streaming feature.
apps/webapp/app/v3/services/createBackgroundWorker.server.ts (1)
153-153: LGTM: Task description field addition.
The addition of the description field is well-integrated into the task creation process and maintains type safety through the TaskResource interface.
packages/core/src/v3/workers/taskExecutor.ts (2)
6-6: LGTM: Clean import addition.
The `waitUntil` import is correctly added to the existing named imports from "../index.js".
226-226: Verify impact on task execution timeouts.
The addition of `#blockForWaitUntil` in the finally block ensures proper cleanup, but could extend task execution time. Ensure that this additional wait time is accounted for in any task timeout configurations.
packages/core/package.json (2)
208-209: LGTM: Dependencies reordering.
The reordering of `zod` and `zod-validation-error` appears to be purely organizational and doesn't affect functionality.
200-200: Verify the newly added dependencies.
The new dependencies align well with the PR objectives:
- `eventsource-parser` for handling real-time streams
- `@ai-sdk/provider-utils` and `ai` for AI tool integration
- `@epic-web/test-server` for testing real-time functionality
Also applies to: 212-214, 218-218
✅ Verification successful
New dependencies are secure and well-maintained.
No issues found with the newly added dependencies.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for potential security vulnerabilities and compatibility issues in new dependencies
# Check for known vulnerabilities in new packages
gh api graphql -f query='
{
securityVulnerabilities(first: 5, ecosystem: NPM, package: "eventsource-parser") {
nodes {
advisory {
summary
severity
publishedAt
}
vulnerableVersionRange
firstPatchedVersion {
identifier
}
}
}
}
'
# Check if the packages are well-maintained (last publish date, download stats)
for pkg in "eventsource-parser" "@ai-sdk/provider-utils" "@epic-web/test-server" "ai"; do
curl -s "https://registry.npmjs.org/$pkg" | jq '{
"name": .name,
"latest_version": .["dist-tags"].latest,
"last_publish": .[.["dist-tags"].latest].time,
"maintainers": .maintainers
}'
done
Length of output: 3342
Script:
#!/bin/bash
# Description: Retrieve the latest publish dates for new dependencies using npm view
for pkg in "eventsource-parser" "@ai-sdk/provider-utils" "@epic-web/test-server" "ai"; do
npm view "$pkg" time --json | jq '{
"name": env.pkg,
"latest_version_time": ."latest"
}'
done
Length of output: 994
Script:
#!/bin/bash
# Description: Retrieve the latest publish dates for new dependencies using npm view
for pkg in "eventsource-parser" "@ai-sdk/provider-utils" "@epic-web/test-server" "ai"; do
latest_version=$(npm view "$pkg" version --json)
publish_time=$(npm view "$pkg" time."$latest_version" --json)
jq -n --arg name "$pkg" --arg version "$latest_version" --arg time "$publish_time" '{
name: $name,
latest_version: $version,
latest_publish_time: $time
}'
done
Length of output: 1927
Script:
#!/bin/bash
# Description: Retrieve the latest publish dates for new dependencies using npm view and parse them correctly
for pkg in "eventsource-parser" "@ai-sdk/provider-utils" "@epic-web/test-server" "ai"; do
latest_version=$(npm view "$pkg" version --json | tr -d '"')
time_info=$(npm view "$pkg" time --json)
latest_publish_time=$(echo "$time_info" | jq -r --arg ver "$latest_version" '.[$ver]')
if [ "$latest_publish_time" != "null" ] && [ -n "$latest_publish_time" ]; then
jq -n --arg name "$pkg" --arg version "$latest_version" --arg time "$latest_publish_time" '{
name: $name,
latest_version: $version,
latest_publish_time: $time
}'
else
jq -n --arg name "$pkg" --arg version "$latest_version" --arg time "Unavailable" '{
name: $name,
latest_version: $version,
latest_publish_time: $time
}'
fi
done
Length of output: 31727
apps/webapp/app/v3/environmentVariables/environmentVariablesRepository.server.ts (1)
737-740: LGTM! Clear fallback chain for stream URL resolution.
The implementation follows a logical progression for resolving the stream URL:
1. `STREAM_ORIGIN` (dedicated stream endpoint)
2. `API_ORIGIN` (API endpoint fallback)
3. `APP_ORIGIN` (application endpoint fallback)
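The fallback chain above amounts to a nullish-coalescing resolution over the three environment variables. A minimal sketch (the `env` record stands in for `process.env`; only the variable names come from the review):

```typescript
// Resolve the stream origin with the documented precedence:
// STREAM_ORIGIN, then API_ORIGIN, then APP_ORIGIN.
function resolveStreamOrigin(
  env: Record<string, string | undefined>
): string | undefined {
  return env.STREAM_ORIGIN ?? env.API_ORIGIN ?? env.APP_ORIGIN;
}
```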
packages/core/src/v3/types/tools.ts (3)
1-3: Imports are correctly structured.
The import statements correctly bring in the necessary modules without any issues.
5-5: Definition of `ToolTaskParameters` is appropriate.
The type alias `ToolTaskParameters` correctly combines `z.ZodTypeAny` and `AISchema<any>`, allowing flexibility for tool task parameters.
7-12: Ensure correct type inference in `inferToolParameters`.
In the type utility `inferToolParameters`, accessing `PARAMETERS["_type"]` assumes that `AISchema` types have a `_type` property. Please verify that this property exists on `AISchema` and is the correct way to infer parameter types. To confirm, check whether `AISchema` includes a `_type` property, or adjust the inference accordingly.
apps/webapp/app/routes/api.v1.packets.$.ts (1)
43-66: Loader function implementation looks good.
The new `loader` function is correctly implemented using `createLoaderApiRoute`. The presigned URL generation and error handling appear appropriate.
packages/core/src/v3/runMetadata/manager.ts (1)
80-123: Confirm compatibility of `structuredClone` or provide a fallback.
The use of `structuredClone()` may not be supported in all environments; in Node.js it is only available from version 17 onward. If targeting runtimes where `structuredClone` is unavailable, consider using a polyfill or an alternative deep-cloning method to ensure compatibility. To verify support, check the Node.js version requirements of the target environment.
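One possible shape for such a fallback is sketched below. The JSON round-trip is an assumption suitable only for plain, JSON-serializable metadata (it drops functions and `undefined`, and stringifies Dates), so whether it is acceptable depends on what the metadata manager stores:

```typescript
// Hedged fallback sketch: prefer structuredClone where the runtime
// provides it (Node.js 17+), otherwise deep-clone via a JSON round-trip.
function deepClone<T>(value: T): T {
  const sc = (globalThis as any).structuredClone;
  if (typeof sc === "function") {
    return sc(value);
  }
  // Fallback: safe only for plain, JSON-serializable values.
  return JSON.parse(JSON.stringify(value)) as T;
}
```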
packages/cli-v3/src/entryPoints/deploy-run-worker.ts (2)
111-117: Ensure proper handling of stream synchronization with `waitUntilManager`.
The integration of `waitUntilManager` and the `waitUntil.register` method effectively waits for all active streams before completing the run. This approach looks correct and should help manage stream lifecycles properly.
324-325: Set `runMetadataManager.runId` before starting the periodic flush.
Assigning `runMetadataManager.runId` prior to initiating the periodic flush ensures that metadata is correctly associated with the current run. This is important for accurate tracking and logging.
packages/core/src/v3/apiClient/index.ts (3)
Line range hint 601-605: Confirmation: addition of `client: this` in `subscribeToRun`.
Adding `client: this` to the options passed into `runShapeStream` in the `subscribeToRun` method ensures that the stream has access to the API client's context, which is necessary for proper operation. This change is appropriate and looks good.
Line range hint 615-619: Confirmation: addition of `client: this` in `subscribeToRunsWithTag`.
Including `client: this` in the options for `runShapeStream` within the `subscribeToRunsWithTag` method aligns with the pattern used in `subscribeToRun`. This ensures that the stream maintains the correct context from the API client. The implementation is correct.
Line range hint 624-628: Confirmation: addition of `client: this` in `subscribeToBatch`.
The addition of `client: this` in the options of `runShapeStream` inside the `subscribeToBatch` method is consistent with the other subscription methods. This provides the required client context to the stream, ensuring correct functionality. This change is acceptable.
packages/trigger-sdk/src/v3/shared.ts (1)
121-121: Ensure `params.description` is defined or provide a default value.
In multiple locations, `params.description` is assigned without checking whether it is defined. If `description` is an optional property in `TaskOptions`, there is a risk of passing `undefined` where a string is expected, potentially leading to runtime errors.
Consider providing a default value or updating the type definitions to make `description` a required field to prevent unintended behavior.
Also applies to: 194-194, 263-263, 338-338
export function createToolTask<
  TIdentifier extends string,
  TParameters extends ToolTaskParameters,
  TOutput = unknown,
  TInitOutput extends InitOutput = any,
>(
  params: TaskWithToolOptions<TIdentifier, TParameters, TOutput, TInitOutput>
): ToolTask<TIdentifier, TParameters, TOutput> {
  const task = createSchemaTask({
    ...params,
    schema: convertToolParametersToSchema(params.parameters),
  });

  return {
    ...task,
    tool: {
      parameters: params.parameters,
      description: params.description,
      execute: async (args: any) => {
        return task.triggerAndWait(args).unwrap();
      },
    },
  };
}
🛠️ Refactor suggestion
Improve type safety in the `execute` method by specifying parameter types.
Currently, the `execute` method accepts `args: any`, which bypasses TypeScript's type checking. To enhance type safety and catch potential errors at compile time, consider specifying the parameter type based on the inferred schema from the task.
Apply this diff to improve the parameter typing:
- execute: async (args: any) => {
+ execute: async (args: inferSchemaIn<typeof task.schema>) => {
This change ensures that the `args` passed to `execute` conform to the expected schema derived from `params.parameters`, leveraging TypeScript's type system for better reliability.
Committable suggestion skipped: line range outside the PR's diff.
@trigger.dev/build
@trigger.dev/core
@trigger.dev/react-hooks
trigger.dev
@trigger.dev/sdk
@trigger.dev/rsc
Actionable comments posted: 0
🧹 Outside diff range and nitpick comments (1)
packages/cli-v3/src/config.ts (1)
145-149
: LGTM! Consider adding a code comment explaining the working directory resolution precedence.
The working directory resolution logic is well-structured with a clear fallback mechanism. However, a brief comment explaining the precedence (config file dir → package.json dir → cwd) would help future maintainers understand the intended behavior more quickly.
+ // Resolve working directory with the following precedence:
+ // 1. Directory of the config file (if exists)
+ // 2. Directory of package.json (if exists)
+ // 3. Current working directory
  const workingDir = result.configFile
    ? dirname(result.configFile)
    : packageJsonPath
      ? dirname(packageJsonPath)
      : cwd;
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (2)
- packages/build/src/extensions/core.ts (1 hunks)
- packages/cli-v3/src/config.ts (1 hunks)
✅ Files skipped from review due to trivial changes (1)
- packages/build/src/extensions/core.ts
Actionable comments posted: 14
🧹 Outside diff range and nitpick comments (12)
references/nextjs-realtime/src/components/TriggerButton.tsx (3)
10-10
: Remove debugging console.log statement. This appears to be debugging code that should be removed before merging.
- console.log(handle);
16-16
: Simplify button styling. The button styling combines `bg-transparent` with `hover:bg-transparent`, which is redundant. Consider simplifying the className.
- className="p-0 bg-transparent hover:bg-transparent hover:text-gray-200 text-gray-400"
+ className="p-0 bg-transparent hover:text-gray-200 text-gray-400"
18-22: Consider making the prompt configurable.
The weather-related prompt is hardcoded. Consider making it a prop to make the component more reusable.
- onClick={() => {
-   submit({
-     model: "gpt-4o-mini",
-     prompt: "What's the weather like in San Francisco today?",
-   });
- }}
+ onClick={() => {
+   submit({
+     model: "gpt-4",
+     prompt: props.prompt ?? "What's the weather like in San Francisco today?",
+   });
+ }}
packages/react-hooks/src/hooks/useTaskTrigger.ts (1)
41-41: Consider caching idempotency key generation.
`makeIdempotencyKey` is called on every task trigger. Consider memoizing this operation if the options object is stable.
+ const memoizedIdempotencyKey = React.useMemo(
+   () => makeIdempotencyKey(options?.idempotencyKey),
+   [options?.idempotencyKey]
+ );
  // ...
- idempotencyKey: await makeIdempotencyKey(options?.idempotencyKey),
+ idempotencyKey: await memoizedIdempotencyKey,
apps/webapp/app/services/authorization.server.ts (1)
Line range hint 19-35: Update JSDoc examples to include write permission scenarios.
While the implementation correctly handles the new "write" action type, the documentation examples only demonstrate "read" scenarios. Consider adding write examples for completeness:
* @example
*
* ```typescript
* import { checkAuthorization } from "./authorization.server";
*
* const entity = {
*   type: "PUBLIC",
*   scope: ["read:runs:run_1234", "read:tasks"]
* };
*
* checkAuthorization(entity, "read", { runs: "run_1234" }); // Returns true
* checkAuthorization(entity, "read", { runs: "run_5678" }); // Returns false
* checkAuthorization(entity, "read", { tasks: "task_1234" }); // Returns true
* checkAuthorization(entity, "read", { tasks: ["task_5678"] }); // Returns true
+*
+* const entityWithWrite = {
+*   type: "PUBLIC",
+*   scope: ["write:tasks:task_1234", "read:tasks"]
+* };
+*
+* checkAuthorization(entityWithWrite, "write", { tasks: "task_1234" }); // Returns true
+* checkAuthorization(entityWithWrite, "write", { tasks: "task_5678" }); // Returns false
* ```
packages/core/src/v3/types/tasks.ts (2)
156-157: Consider adding JSDoc for the description field.
The addition of the optional description field enhances task documentation. Consider adding JSDoc comments to provide guidance on the expected format and content of descriptions.
type CommonTaskOptions<
  TIdentifier extends string,
  TPayload = void,
  TOutput = unknown,
  TInitOutput extends InitOutput = any,
> = {
  /** An id for your task. This must be unique inside your project and not change between versions. */
  id: TIdentifier;
+ /** A human-readable description of what the task does.
+  * This helps other developers understand the purpose and behavior of the task.
+  */
+ description?: string;
Also applies to: 429-430
345-352: Well-designed tool task integration types.
The new types `TaskWithToolOptions` and `ToolTask` provide a robust foundation for AI tool integration:
- Type-safe parameter handling using `ToolTaskParameters`
- A clear execution interface via the `execute` function
- An optional description for better documentation
Consider adding examples in JSDoc comments to demonstrate typical usage patterns of these types with AI tools.
Also applies to: 505-515
packages/core/src/v3/apiClient/index.ts (1)
Line range hint 636-645: Consider using a more specific type for JWT claims.
The current implementation uses `z.record(z.any())`, which is very permissive. Consider defining a specific schema for the expected JWT claims to improve type safety and maintainability.
async generateJWTClaims(requestOptions?: ZodFetchOptions): Promise<Record<string, any>> {
  return zodfetch(
-   z.record(z.any()),
+   z.object({
+     // Define expected claim fields here
+     sub: z.string().optional(),
+     scopes: z.array(z.string()).optional(),
+     // Add other expected claims
+   }),
    `${this.baseUrl}/api/v1/auth/jwt/claims`,
    {
      method: "POST",
      headers: this.#getHeaders(false),
    },
    mergeRequestOptions(this.defaultRequestOptions, requestOptions)
  );
}
packages/core/src/v3/idempotencyKeys.ts (1)
71-88: Remove unreachable `break` statements after `return`.
In the `injectScope` function, the `break` statements following the `return` statements are unreachable code and can be safely removed. Apply this diff to remove the redundant `break` statements:
function injectScope(scope: "run" | "attempt" | "global"): string[] {
  switch (scope) {
    case "run": {
      if (taskContext?.ctx) {
        return [taskContext.ctx.run.id];
      }
-     break;
    }
    case "attempt": {
      if (taskContext?.ctx) {
        return [taskContext.ctx.attempt.id];
      }
-     break;
    }
  }
  return [];
}
apps/webapp/app/routes/api.v1.tasks.$taskId.trigger.ts (3)
55-55: Use standard comment syntax `//`.
In line 55, the comment uses `///`, which is unconventional in JavaScript/TypeScript. Use `//` for single-line comments. Apply this diff to fix the comment style:
- traceparent && isFromWorker /// If the request is from a worker, we should pass the trace context
+ traceparent && isFromWorker // If the request is from a worker, we should pass the trace context
95-104: Evaluate appropriate status codes for error responses.
In the error handling block, generic `Error` instances result in a 400 Bad Request. Typically, a 500 Internal Server Error is more appropriate for unexpected server errors, as 400 suggests a client-side issue. Consider updating the status code for generic errors:
} else if (error instanceof Error) {
-   return json({ error: error.message }, { status: 400 });
+   return json({ error: error.message }, { status: 500 });
}
142-142: Remove unnecessary export of `loader`.
At line 142, both `action` and `loader` are exported, but only `action` is implemented in this file. Unless `loader` is defined elsewhere or needed for future implementation, consider removing it to keep the code clean. Apply this diff to update the export statement:
-export { action, loader };
+export { action };
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (15)
- apps/webapp/app/routes/api.v1.tasks.$taskId.trigger.ts (2 hunks)
- apps/webapp/app/services/authorization.server.ts (1 hunks)
- apps/webapp/app/services/routeBuiilders/apiBuilder.server.ts (8 hunks)
- packages/core/src/v3/apiClient/index.ts (5 hunks)
- packages/core/src/v3/idempotencyKeys.ts (1 hunks)
- packages/core/src/v3/index.ts (1 hunks)
- packages/core/src/v3/types/tasks.ts (5 hunks)
- packages/core/src/v3/utils/ioSerialization.ts (5 hunks)
- packages/react-hooks/src/hooks/useTaskTrigger.ts (1 hunks)
- packages/react-hooks/src/index.ts (1 hunks)
- packages/trigger-sdk/src/v3/auth.ts (1 hunks)
- packages/trigger-sdk/src/v3/idempotencyKeys.ts (1 hunks)
- packages/trigger-sdk/src/v3/shared.ts (11 hunks)
- references/nextjs-realtime/src/app/page.tsx (2 hunks)
- references/nextjs-realtime/src/components/TriggerButton.tsx (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
- packages/core/src/v3/index.ts
- packages/core/src/v3/utils/ioSerialization.ts
🔇 Additional comments (36)
packages/react-hooks/src/index.ts (2)
7-7
: LGTM! The new export maintains consistency and supports PR objectives.
The addition of the `useTaskTrigger` hook export follows the established pattern and aligns with the PR's goal of supporting AI tool integration.
7-7
: Verify the hook's integration with real-time features.
Since this PR focuses on real-time streams, let's verify how this new hook integrates with the real-time functionality.
✅ Verification successful
Verified: the `useTaskTrigger` hook integrates with real-time features through its usage with the "openai-streaming" task and related real-time utilities.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check the implementation of useTaskTrigger and its interaction with real-time features
# Check the hook's implementation
ast-grep --pattern 'export function useTaskTrigger($_) {
$$$
}'
# Look for real-time related imports/usage in the hook
rg -l "useRealtimeRun|useRealtimeBatch|stream" "packages/react-hooks/src/hooks/useTaskTrigger.ts"
# Check for WebSocket or real-time related code
rg "WebSocket|EventSource|stream" "packages/react-hooks/src/hooks/useTaskTrigger.ts"
Length of output: 250
Script:
#!/bin/bash
# Description: Identify all uses of useTaskTrigger and check for real-time integrations
# Find all files importing useTaskTrigger
import_files=$(rg "import.*useTaskTrigger" --files-with-matches "packages/react-hooks/src/**/*.{ts,tsx}")
# For each file that imports useTaskTrigger, search for real-time related keywords
for file in $import_files; do
echo "Checking real-time integrations in $file"
rg -H -A5 "useTaskTrigger" "$file" | rg "useRealtimeRun|useRealtimeBatch|stream|WebSocket|EventSource|socket"
done
# Additionally, search for any other hooks or utilities that might be related to real-time features
rg -l "useRealtime|useSocket|useStream" "packages/react-hooks/src/**/*.{ts,tsx}"
Length of output: 349
Script:
#!/bin/bash
# Description: Locate the useTaskTrigger hook and verify its integration with real-time features
# Step 1: Locate the useTaskTrigger hook definition
use_task_trigger_file=$(rg --files-with-matches 'export function useTaskTrigger' --glob '*.ts*' .)
# Check if the hook was found
if [ -z "$use_task_trigger_file" ]; then
echo "useTaskTrigger hook not found in the repository."
exit 1
fi
echo "useTaskTrigger hook found at: $use_task_trigger_file"
# Step 2: Verify real-time integrations within the hook
rg "useRealtimeRun|useRealtimeBatch|stream|WebSocket|EventSource|socket" "$use_task_trigger_file"
# Step 3: Find all usages of useTaskTrigger and check for real-time integrations
import_files=$(rg "import.*useTaskTrigger" --files-with-matches .)
if [ -z "$import_files" ]; then
echo "No files import useTaskTrigger."
exit 0
fi
for file in $import_files; do
echo "Checking real-time integrations in $file"
rg -H -A5 "useTaskTrigger" "$file" | rg "useRealtimeRun|useRealtimeBatch|stream|WebSocket|EventSource|socket"
done
Length of output: 1275
Script:
#!/bin/bash
# Description: Investigate the role of "openai-streaming" in useTaskTrigger and its connection to real-time features
# Step 1: Find where "openai-streaming" is defined or used in the codebase
echo "Searching for definitions and usages of 'openai-streaming'..."
rg 'openai-streaming' --glob '*.ts*' .
# Step 2: Identify higher-level components or contexts that use useTaskTrigger with "openai-streaming"
echo "Finding components that use useTaskTrigger with 'openai-streaming'..."
rg 'useTaskTrigger.*["'"'"']openai-streaming["'"'"']' --glob '*.ts*' .
# Step 3: Search for related hooks or utilities that handle real-time features
echo "Searching for real-time related hooks or utilities..."
rg 'useRealtime|useSocket|useStream|real-time|streaming' --glob '*.ts*' .
Length of output: 5408
references/nextjs-realtime/src/app/page.tsx (2)
3-3
: LGTM: Import statements are properly structured
The new imports for TriggerButton and auth are correctly placed and align with the PR's real-time streaming objectives.
Also applies to: 5-5
27-27
: Verify TriggerButton implementation and prop usage
The button integration looks good, but let's verify the TriggerButton component implementation.
✅ Verification successful
TriggerButton implementation and prop usage verified
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Check TriggerButton component implementation
echo "Checking TriggerButton component..."
fd TriggerButton.tsx --exec cat {}
# Verify prop types usage
ast-grep --pattern 'interface $_ {
publicAccessToken: $_
}'
Length of output: 1265
references/nextjs-realtime/src/components/TriggerButton.tsx (1)
8-8
: Verify openaiStreaming task type implementation
Let's verify that the imported task type matches the implementation.
✅ Verification successful
Verified: openaiStreaming task type is correctly implemented and used
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for the openaiStreaming task definition
ast-grep --pattern 'export const openaiStreaming = defineTask<{
$$$
}>($$$)'
# Search for any other usages of this task
rg "openaiStreaming" --type ts
Length of output: 587
Script:
#!/bin/bash
# Search for the openaiStreaming task definition using schemaTask
ast-grep --pattern 'export const openaiStreaming = schemaTask<{
$$$
}>($$$)'
# Search for any other usages of this task
rg "openaiStreaming" --type ts
Length of output: 587
packages/react-hooks/src/hooks/useTaskTrigger.ts (2)
16-20
: Well-designed type-safe interface!
The TriggerInstance
interface is well-structured with proper typing and includes all necessary properties for task management. The optional handle
property appropriately handles the async nature of task execution.
22-64
: Solid hook implementation with good TypeScript integration
The useTaskTrigger
hook is well-implemented with proper TypeScript types and integrates nicely with SWR for mutation management. It provides a clean and type-safe API for triggering tasks, which aligns well with the PR's objectives for real-time functionality.
apps/webapp/app/services/authorization.server.ts (2)
Line range hint 36-99
: Implementation securely handles the new write action type
The authorization system's implementation is robust and properly handles the new write permissions:
- Maintains strict type safety with the AuthorizationAction type
- Correctly validates permissions using the `${action}:${resourceType}` pattern
- Preserves all existing security checks and error handling
1-1
: Verify handling of new "write" action type across the codebase
The addition of the "write" action type is a significant security change. While the implementation looks correct, we should ensure all code paths properly handle this new permission level.
Let's verify the usage and implementation:
packages/core/src/v3/types/tasks.ts (1)
1-17
: LGTM! Well-organized imports
The new imports for AI Schema and zod support the tool task feature, and the import organization follows a logical grouping pattern.
packages/core/src/v3/apiClient/index.ts (3)
185-193
: LGTM: Well-implemented JWT handling enhancement
The code properly handles both server-provided JWT tokens and client-side generation, improving flexibility in token management.
665-668
: LGTM: Clean browser detection implementation
The browser detection is implemented using reliable checks and follows best practices for client identification.
610-610
: Verify the usage of client parameter in stream implementation
The addition of client: this
parameter to subscription methods suggests a dependency on the ApiClient instance. Let's verify how this parameter is used in the stream implementation.
Also applies to: 624-624, 633-633
packages/trigger-sdk/src/v3/shared.ts (3)
14-14
: LGTM! New imports and types are well-structured.
The new imports and type declarations are properly organized and follow TypeScript best practices.
Also applies to: 20-20, 64-68
536-536
: LGTM! Consistent use of makeIdempotencyKey.
The changes consistently use `makeIdempotencyKey` for handling idempotency keys across different trigger methods, which is a good practice for ensuring consistent key generation.
Also applies to: 599-599, 669-669, 766-766
224-227
: Verify schema conversion for tool parameters.
The function uses `convertToolParametersToSchema` to convert tool parameters to a schema. Let's verify that this conversion works as expected across different parameter types.
packages/trigger-sdk/src/v3/idempotencyKeys.ts (1)
1-5
: Simplification of idempotency key management looks good
The code now effectively consolidates the creation of idempotency keys by importing `createIdempotencyKey` from `@trigger.dev/core/v3` and exporting it within the `idempotencyKeys` object. The import and export statements are correctly structured, and the code enhances modularity by relying on the core package for key creation.
packages/core/src/v3/idempotencyKeys.ts (2)
11-23
: Potential inconsistency in default scope between `makeIdempotencyKey` and `createIdempotencyKey`.
In the `makeIdempotencyKey` function, the call to `createIdempotencyKey` uses a fixed scope of "global". However, the `createIdempotencyKey` function defaults its scope to "run" when no scope is provided. This discrepancy might lead to unexpected behavior if the global scope is not intended in all cases where `makeIdempotencyKey` is used. Please verify that "global" is the intended default scope here.
90-100
: Ensure `crypto.subtle` is available in the runtime environment.
The `generateIdempotencyKey` function uses `crypto.subtle.digest`, which may not be available in all Node.js environments, especially in versions prior to Node.js 15 or in certain serverless environments. Please verify that `crypto.subtle` is supported in your target runtime environments, or consider using a polyfill or an alternative such as `crypto.createHash`.
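One way such a fallback could look is sketched below. This is not the SDK's actual implementation: the "-" separator for joining key material and the function name are illustrative assumptions; only the `crypto.subtle.digest` vs `createHash` choice comes from the comment above.

```typescript
import { createHash } from "node:crypto";

// Hedged sketch: compute a SHA-256 hex digest of joined key material,
// preferring WebCrypto when present and falling back to node:crypto's
// createHash on runtimes without crypto.subtle.
async function hashKeyMaterial(parts: string[]): Promise<string> {
  const material = parts.join("-"); // separator is an illustrative assumption
  const subtle = (globalThis as any).crypto?.subtle;
  if (subtle) {
    const digest = await subtle.digest(
      "SHA-256",
      new TextEncoder().encode(material)
    );
    return Array.from(new Uint8Array(digest))
      .map((b) => b.toString(16).padStart(2, "0"))
      .join("");
  }
  // Fallback path for environments without WebCrypto.
  return createHash("sha256").update(material).digest("hex");
}
```

Both branches produce the same SHA-256 hex digest for the same input, so callers see identical keys regardless of which path runs.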
apps/webapp/app/routes/api.v1.tasks.$taskId.trigger.ts (1)
88-93
: Verify the response structure meets client expectations
The success response returns only the `id`. Ensure that the client does not require additional data from this endpoint.
Run the following script to check how the response is consumed by the client:
apps/webapp/app/services/routeBuiilders/apiBuilder.server.ts (16)
3-3
: Import necessary types from '@remix-run/server-runtime'
The addition of `ActionFunctionArgs`, `json`, and `LoaderFunctionArgs` is appropriate for handling request and response types in the route builders.
19-24
: Adding THeadersSchema to support header validation
Introducing `THeadersSchema` and adding the `headers` property to `ApiKeyRouteBuilderOptions` enhances the route builder's capability to validate request headers, ensuring type safety and improved request handling.
42-49
: Including THeadersSchema and headers in ApiKeyHandlerFunction
By adding `THeadersSchema` and the `headers` parameter to `ApiKeyHandlerFunction`, the handler now supports type-safe access to validated request headers, improving the robustness of API endpoint implementations.
56-60
: Extending createLoaderApiRoute to support headers
Adding `THeadersSchema` as a generic parameter to `createLoaderApiRoute`, and including it in both the `options` and `handler` parameters, effectively enables header validation and parsing within loader functions.
181-186
: Adding THeadersSchema to PATRouteBuilderOptions
The addition of `THeadersSchema` and the `headers` property to `PATRouteBuilderOptions` allows for header validation in routes authenticated with personal access tokens, enhancing consistency across API route builders.
192-199
: Including THeadersSchema and headers in PATHandlerFunction
By introducing `THeadersSchema` and adding the `headers` parameter to `PATHandlerFunction`, the function now supports validated access to request headers, improving type safety and request handling in PAT-authenticated routes.
206-210
: Extending createLoaderPATApiRoute to support headers
The addition of `THeadersSchema` to `createLoaderPATApiRoute` enables header validation and parsing in loader functions that authenticate using personal access tokens.
307-316
: Introducing ApiKeyActionRouteBuilderOptions with support for body validation
The new ApiKeyActionRouteBuilderOptions
type extends ApiKeyRouteBuilderOptions
by adding TBodySchema
and body
, allowing for request body validation in API action routes.
317-332
: Defining ApiKeyActionHandlerFunction to include body parameter
Adding TBodySchema
and the body
parameter to ApiKeyActionHandlerFunction
provides handlers with type-safe access to validated request bodies, enhancing the reliability of action route handlers.
333-351
: Creating createActionApiRoute for handling action routes with bodies
The createActionApiRoute
function is introduced to facilitate the creation of API action routes that support request body parsing and validation, alongside headers, query parameters, and authentication.
363-369
: Handling CORS preflight and unsupported methods in loader
The loader
function within createActionApiRoute
now correctly handles CORS preflight requests when corsStrategy
is enabled and responds with a 405 Method Not Allowed for other requests, ensuring appropriate request handling.
371-381
: Authenticating API requests in action function
The action function correctly authenticates API requests using authenticateApiRequest
, ensuring that only authorized requests are processed.
382-388
: Validating content length against maxContentLength
The implementation checks the Content-Length
header to ensure the request body does not exceed the specified maxContentLength
, returning a 413 Payload Too Large error when appropriate.
462-480
: Authorization check includes headers
In the authorization step, the resource
function now receives parsedHeaders
, ensuring that any header-based authorization logic is properly supported.
482-503
: Handling errors in the action handler
The try-catch block ensures that any errors thrown within the handler are appropriately logged and responded to, returning an Internal Server Error
response when necessary.
505-506
: Exporting loader and action functions
The createActionApiRoute
function returns an object containing both the loader
and action
, correctly structuring the route for use in the application.
Actionable comments posted: 9
🧹 Outside diff range and nitpick comments (27)
packages/react-hooks/src/utils/throttle.ts (1)
3-5: Consider enhancing robustness and documentation.
- Add JSDoc documentation to describe the function's purpose, parameters, and return value
- Add validation for negative waitMs values
- Consider more specific typing for the generic function parameters

Here's a suggested improvement:

```diff
+/**
+ * Throttles a function to limit its execution rate.
+ * @param fn The function to throttle
+ * @param waitMs The minimum time (in milliseconds) between function executions
+ * @returns A throttled version of the function, or the original function if waitMs is null/undefined
+ * @throws {Error} If waitMs is negative
+ */
-export function throttle<T extends (...args: any[]) => any>(fn: T, waitMs: number | undefined): T {
-  return waitMs != null ? throttleFunction(fn, waitMs) : fn;
+export function throttle<Args extends any[], Return>(
+  fn: (...args: Args) => Return,
+  waitMs: number | undefined
+): (...args: Args) => Return {
+  if (waitMs != null && waitMs < 0) {
+    throw new Error("waitMs must be a non-negative number");
+  }
+  return waitMs != null ? throttleFunction(fn, waitMs) : fn;
 }
```

references/nextjs-realtime/src/app/ai/[id]/page.tsx (1)
3-9: Consider making publicAccessToken optional
Search parameters are not guaranteed to be present in URLs. Consider making `publicAccessToken` optional to handle cases where it's not provided:

```diff
-  searchParams: { publicAccessToken: string };
+  searchParams: { publicAccessToken?: string };
```

apps/webapp/app/utils/apiCors.ts (1)
11-11: Add JSDoc comment for the exposedHeaders property
Consider adding documentation to explain the purpose and security implications of exposing headers.

```diff
+  /** Headers that browsers are allowed to access from the response. Use with caution and expose only necessary headers. */
   exposedHeaders?: string[];
```
references/nextjs-realtime/src/components/UploadImageDisplay.tsx (1)
Line range hint 1-89: LGTM! Here are some optional improvements to consider
The overall implementation is well-structured with proper TypeScript types, loading states, and consistent styling.
Consider these optional improvements for future iterations:
- Make the main image loading state consistent with grid images by using the same `LoaderCircleIcon`
- Externalize text strings for better i18n support
- Fine-tune the responsive grid layout with more breakpoints for better scaling

Let me know if you'd like to see code examples for any of these suggestions.
references/nextjs-realtime/src/app/ai/[id]/ClientAiDetails.tsx (2)
8-15: Consider making throttle value configurable.
The `throttleInMs` value is hardcoded to 500ms. Consider making this configurable through props or environment variables for better flexibility in different scenarios.

```diff
-  throttleInMs: 500,
+  throttleInMs: process.env.NEXT_PUBLIC_STREAM_THROTTLE_MS ?
+    parseInt(process.env.NEXT_PUBLIC_STREAM_THROTTLE_MS) : 500,
```

76-84: Add prop validation for required fields.
Consider adding runtime validation for the required props to catch potential issues early.

```diff
+const isValidRunId = (runId: string) => /^[a-zA-Z0-9_-]+$/.test(runId);
+const isValidToken = (token: string) => token.length > 0;
+
 export default function ClientAiDetails({
   runId,
   publicAccessToken,
 }: {
   runId: string;
   publicAccessToken: string;
 }) {
+  if (!isValidRunId(runId)) {
+    throw new Error("Invalid runId format");
+  }
+  if (!isValidToken(publicAccessToken)) {
+    throw new Error("Invalid access token");
+  }
   return <AiRunDetailsWrapper runId={runId} accessToken={publicAccessToken} />;
 }
```

references/nextjs-realtime/src/trigger/ai.ts (4)
20-53: Consider refactoring nested switch statements
The deeply nested switch statements make the code harder to maintain. Consider extracting the chunk handling logic into separate functions.

```diff
+function handleOpenAIChunk(part: OpenAIChunkType, openaiCompletion: string): string {
+  switch (part.chunk.type) {
+    case "text-delta":
+      return openaiCompletion + part.chunk.textDelta;
+    case "tool-call":
+      handleToolCall(part.chunk);
+      return openaiCompletion;
+    case "tool-result":
+      handleToolResult(part.chunk);
+      return openaiCompletion;
+    default:
+      return openaiCompletion;
+  }
+}
+
+function handleToolCall(chunk: ToolCallChunk) {
+  switch (chunk.toolName) {
+    case "getWeather":
+      logger.info("Calling getWeather tool with args", { args: chunk.args });
+      break;
+  }
+}
+
+function handleToolResult(chunk: ToolResultChunk) {
+  switch (chunk.toolName) {
+    case "getWeather":
+      logger.info("Received getWeather tool result", { result: chunk.result });
+      break;
+  }
+}
```

59-64: Document the purpose of the wait task
The purpose of this 30-second delay task is not clear. Consider adding documentation to explain its use case and why this specific duration was chosen.

78-78: Improve mock temperature calculation
The current temperature calculation (`72 + Math.floor(Math.random() * 21) - 10`) could be simplified and made more realistic.

```diff
-  temperature: 72 + Math.floor(Math.random() * 21) - 10,
+  temperature: Math.floor(65 + Math.random() * 15), // Random temperature between 65-80°F
```

99-99: Make maxSteps configurable
The `maxSteps` parameter is hardcoded to 10. Consider making this configurable through the schema to allow flexibility in different scenarios.

```diff
 schema: z.object({
   model: z.string().default("chatgpt-4o-latest"),
   prompt: z.string().default("Hello, how are you?"),
+  maxSteps: z.number().int().positive().default(10),
 }),
```
apps/webapp/app/routes/api.v1.tasks.$taskId.trigger.ts (2)
59-67: Enhance error handling and logging
While the implementation is solid, consider these improvements:
- Add error logging before returning error responses
- Redact sensitive information from debug logs (headers might contain auth tokens)

Consider applying this improvement:

```diff
 logger.debug("Triggering task", {
   taskId: params.taskId,
   idempotencyKey,
   triggerVersion,
-  headers,
+  headers: {
+    ...headers,
+    // Exclude potentially sensitive headers
+    "idempotency-key": headers["idempotency-key"],
+    "trigger-version": headers["trigger-version"],
+    "x-trigger-client": headers["x-trigger-client"],
+  },
   options: body.options,
   isFromWorker,
   traceContext,
 });

 // ... in catch block
 } catch (error) {
+  logger.error("Error triggering task", {
+    taskId: params.taskId,
+    error: error instanceof Error ? error.message : "Unknown error",
+    errorType: error.constructor.name,
+  });
   if (error instanceof ServiceValidationError) {
```

Also applies to: 94-104

108-139: Consider adding additional security headers for JWT responses
The JWT implementation is secure, but consider adding these security-related headers for browser clients:

```diff
 if (triggerClient === "browser") {
   // ... JWT generation ...
   return {
     "x-trigger-jwt-claims": claimsHeader,
     "x-trigger-jwt": jwt,
+    "Cache-Control": "no-store, private",
+    "Strict-Transport-Security": "max-age=31536000; includeSubDomains",
   };
 }
```

packages/core/src/v3/apiClient/runStream.ts (2)
84-86: Consider making the default stream URL configurable
The fallback URL "https://api.trigger.dev" is hardcoded. Consider making this configurable through environment variables or configuration options to support different environments and improve flexibility.

```diff
-  getEnvVar("TRIGGER_STREAM_URL", getEnvVar("TRIGGER_API_URL")) ?? "https://api.trigger.dev",
+  getEnvVar("TRIGGER_STREAM_URL", getEnvVar("TRIGGER_API_URL")) ?? getEnvVar("TRIGGER_DEFAULT_URL", "https://api.trigger.dev"),
```

376-382: Consider adding error logging in safeParseJSON
While returning the original data on parse error prevents crashes, silently swallowing parsing errors could make debugging difficult. Consider adding error logging or a way to handle parse failures explicitly.

```diff
 function safeParseJSON(data: string): unknown {
   try {
     return JSON.parse(data);
   } catch (error) {
+    console.warn(`Failed to parse JSON data: ${error.message}`);
     return data;
   }
 }
```
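Pulled out on its own, the parse-with-fallback pattern (including the suggested warning) looks like this; it is a sketch rather than the package's exact code:

```typescript
// Parse a string as JSON, falling back to the raw string when parsing fails.
// The console.warn surfaces parse failures instead of swallowing them silently.
function safeParseJSON(data: string): unknown {
  try {
    return JSON.parse(data);
  } catch (error) {
    console.warn(`Failed to parse JSON data: ${(error as Error).message}`);
    return data;
  }
}
```

The `unknown` return type forces callers to narrow the result before using it, which is the point of the signature in the comment above.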
apps/webapp/app/services/routeBuiilders/apiBuilder.server.ts (4)
363-369: Add explanatory message for Method Not Allowed response
The loader function returns a 405 status without explaining why. Consider adding a message to help API consumers.
Apply this diff:

```diff
 async function loader({ request, params }: LoaderFunctionArgs) {
   if (corsStrategy !== "none" && request.method.toUpperCase() === "OPTIONS") {
     return apiCors(request, json({}));
   }
-  return new Response(null, { status: 405 });
+  return json(
+    { error: "Method Not Allowed. Use POST/PUT for this endpoint." },
+    { status: 405 }
+  );
 }
```

509-511: Consider documenting the exposed headers
The wrapper now exposes JWT-related headers, but this isn't documented. Consider adding JSDoc to explain the purpose of these headers.
Apply this diff above the function:

```diff
+/**
+ * Wraps an API response with CORS headers when enabled.
+ * When CORS is enabled, exposes:
+ * - x-trigger-jwt: The JWT token for subsequent authenticated requests
+ * - x-trigger-jwt-claims: Claims contained in the JWT token
+ */
 function wrapResponse(request: Request, response: Response, useCors: boolean) {
```

482-502: Consider adding request method logging for debugging
When an error occurs, the current error logging could be enhanced with the request method for better debugging context.
Apply this diff:

```diff
 try {
   const result = await handler({
     params: parsedParams,
     searchParams: parsedSearchParams,
     headers: parsedHeaders,
     body: parsedBody,
     authentication: authenticationResult,
     request,
   });
   return wrapResponse(request, result, corsStrategy !== "none");
 } catch (error) {
-  console.error("Error in API route:", error);
+  console.error("Error in API route:", {
+    method: request.method,
+    error: error instanceof Error ? error.message : String(error)
+  });
   if (error instanceof Response) {
     return wrapResponse(request, error, corsStrategy !== "none");
   }
```

466-471: Consider adding request path to debug logging
The authorization debug logging could be enhanced with the request path for better context.
Apply this diff:

```diff
 logger.debug("Checking authorization", {
+  path: new URL(request.url).pathname,
   action,
   resource: $resource,
   superScopes,
   scopes: authenticationResult.scopes,
 });
```
packages/react-hooks/src/hooks/useApiClient.ts (2)
18-19: Update error message to reflect parameter precedence
If the precedence is changed as suggested, the error message should indicate that `accessToken` may be missing from `options` or `TriggerAuthContext`.
Apply this diff to update the error message:

```diff
-  throw new Error("Missing accessToken in TriggerAuthContext or useApiClient options");
+  throw new Error("Missing accessToken in useApiClient options or TriggerAuthContext");
```

22-25: Ensure consistent precedence when merging requestOptions
The merging of `requestOptions` currently allows `options?.requestOptions` to override `auth?.requestOptions`, which is inconsistent with the precedence of `baseUrl` and `accessToken`. For consistency, consider adjusting the merge order.
Apply this diff to make the precedence consistent:

```diff
-  const requestOptions: ApiRequestOptions = {
-    ...auth?.requestOptions,
-    ...options?.requestOptions,
-  };
+  const requestOptions: ApiRequestOptions = {
+    ...options?.requestOptions,
+    ...auth?.requestOptions,
+  };
```

references/nextjs-realtime/src/components/TriggerButton.tsx (3)
28-28: Correct typo in the prompt text
There's a typographical error in the word "San Fransico"; it should be "San Francisco".
Apply this diff to fix the typo:

```diff
-  "Based on the temperature, will I need to wear extra clothes today in San Fransico? Please be detailed.",
+  "Based on the temperature, will I need to wear extra clothes today in San Francisco? Please be detailed.",
```

21-21: Ensure the button type is appropriate
The `Button` component has `type="submit"`, which is typically used within forms to submit data. If the button is not inside a form, consider changing it to `type="button"` to prevent any unintended form submissions.
Apply this diff if necessary:

```diff
-  type="submit"
+  type="button"
```

26-28: Consider externalizing the model and prompt parameters
The `model` and `prompt` parameters are hardcoded within the `submit` function call. For better flexibility and maintainability, consider passing these parameters as props or retrieving them from state or context, especially if they might change in the future.

packages/react-hooks/src/hooks/useRealtime.ts (4)
70-92: Remove abortControllerRef from triggerRequest dependencies to prevent unnecessary re-creations.
The `abortControllerRef` is a mutable ref and does not need to be included in the dependency array of the `useCallback` hook for `triggerRequest`. Including it can cause `triggerRequest` to be recreated unnecessarily on every render, impacting performance.
Apply this diff to adjust the dependencies:

```diff
-  }, [runId, mutateRun, abortControllerRef, apiClient, setError]);
+  }, [runId, mutateRun, apiClient, setError]);
```
39-107: Refactor hooks to reduce code duplication and improve maintainability.
The hooks `useRealtimeRun`, `useRealtimeRunWithStreams`, `useRealtimeRunsWithTag`, and `useRealtimeBatch` share similar logic, including state management, effect cleanup, and error handling. Consider abstracting the shared logic into a common utility or custom hook to minimize duplication, making the codebase easier to maintain and extend.
Also applies to: 129-219, 232-303, 319-390
408-425: Consolidate insertRunShapeInOrder and insertRunShape functions for code reuse.
Both functions handle inserting or updating a run in an array while maintaining order based on different criteria (`run.number` and `run.createdAt`). To enhance reusability and reduce redundancy, consider creating a generic function that accepts a sorting key or comparator, allowing it to handle both insertion logics.
Also applies to: 443-462
55-59: Initialize error state explicitly to avoid potential undefined values.
The error state managed by SWR hooks is initialized as `undefined`. While this may work, it's safer to initialize error states to `null` to clearly represent the absence of an error and avoid potential issues with strict null checks in TypeScript.
Apply this diff to initialize error states:

```diff
-  const { data: error = undefined, mutate: setError } = useSWR<undefined | Error>(...);
+  const { data: error = null, mutate: setError } = useSWR<Error | null>(...);
```

Also applies to: 165-169, 250-254, 337-341
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
⛔ Files ignored due to path filters (1)
- `pnpm-lock.yaml` is excluded by `!**/pnpm-lock.yaml`
📒 Files selected for processing (22)
- apps/webapp/app/routes/api.v1.tasks.$taskId.trigger.ts (2 hunks)
- apps/webapp/app/services/apiAuth.server.ts (1 hunks)
- apps/webapp/app/services/routeBuiilders/apiBuilder.server.ts (8 hunks)
- apps/webapp/app/utils/apiCors.ts (1 hunks)
- packages/core/src/v3/apiClient/index.ts (6 hunks)
- packages/core/src/v3/apiClient/runStream.ts (6 hunks)
- packages/react-hooks/package.json (1 hunks)
- packages/react-hooks/src/contexts.tsx (1 hunks)
- packages/react-hooks/src/hooks/useApiClient.ts (1 hunks)
- packages/react-hooks/src/hooks/useRealtime.ts (1 hunks)
- packages/react-hooks/src/hooks/useRealtimeBatch.ts (0 hunks)
- packages/react-hooks/src/hooks/useRealtimeRun.ts (0 hunks)
- packages/react-hooks/src/hooks/useRealtimeRunsWithTag.ts (0 hunks)
- packages/react-hooks/src/index.ts (1 hunks)
- packages/react-hooks/src/utils/throttle.ts (1 hunks)
- packages/trigger-sdk/src/v3/index.ts (2 hunks)
- packages/trigger-sdk/src/v3/runs.ts (2 hunks)
- references/nextjs-realtime/src/app/ai/[id]/ClientAiDetails.tsx (1 hunks)
- references/nextjs-realtime/src/app/ai/[id]/page.tsx (1 hunks)
- references/nextjs-realtime/src/components/TriggerButton.tsx (1 hunks)
- references/nextjs-realtime/src/components/UploadImageDisplay.tsx (1 hunks)
- references/nextjs-realtime/src/trigger/ai.ts (1 hunks)
💤 Files with no reviewable changes (3)
- packages/react-hooks/src/hooks/useRealtimeBatch.ts
- packages/react-hooks/src/hooks/useRealtimeRun.ts
- packages/react-hooks/src/hooks/useRealtimeRunsWithTag.ts
🚧 Files skipped from review as they are similar to previous changes (2)
- packages/core/src/v3/apiClient/index.ts
- packages/react-hooks/src/index.ts
🔇 Additional comments (25)
packages/react-hooks/src/utils/throttle.ts (1)
1-5: LGTM! Clean and effective throttle implementation.
The implementation is concise and correctly handles the conditional throttling based on waitMs.
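The conditional-throttling behavior being praised here is easy to see in isolation. Below is a self-contained sketch with a hand-rolled stand-in for the `throttleit` dependency; the names and the internal timing strategy are illustrative assumptions, not the package's actual implementation:

```typescript
// Minimal stand-in for the throttleit dependency: invoke fn at most once per waitMs.
function throttleFunction<Args extends unknown[]>(
  fn: (...args: Args) => void,
  waitMs: number
): (...args: Args) => void {
  let last = 0;
  return (...args: Args) => {
    const now = Date.now();
    if (now - last >= waitMs) {
      last = now;
      fn(...args);
    }
  };
}

// The wrapper pattern under review: return the original function unchanged when
// waitMs is null/undefined, and reject negative values up front.
function throttle<Args extends unknown[]>(
  fn: (...args: Args) => void,
  waitMs: number | undefined
): (...args: Args) => void {
  if (waitMs != null && waitMs < 0) {
    throw new Error("waitMs must be a non-negative number");
  }
  return waitMs != null ? throttleFunction(fn, waitMs) : fn;
}
```

Returning the original function for the undefined case (rather than a zero-delay wrapper) keeps the unthrottled path allocation-free and referentially identical.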
packages/react-hooks/src/contexts.tsx (2)
10-10: LGTM! Export statement properly updated.
The export statement correctly includes all three elements: the context and both hook variants, maintaining proper encapsulation and providing a clean public API.
Let's verify the usage of these exports across the codebase to ensure proper adoption:

```shell
#!/bin/bash
# Description: Check usage patterns of the auth hooks
# Expected: Mix of both required and optional hook usage

# Check for usages of both hooks
echo "=== Usage of required auth hook ==="
rg "useTriggerAuthContext\(" -A 2

echo "=== Usage of optional auth hook ==="
rg "useTriggerAuthContextOptional\(" -A 2
```
7-8: LGTM! Good addition of an optional hook variant.
The addition of `useTriggerAuthContextOptional` follows React's best practices for providing a more flexible way to access context without throwing errors when the context is not available. This is particularly useful for components that can work with or without authentication.
Let's verify the implementation of the `createContextAndHook` utility to ensure it properly supports the optional hook:
✅ Verification successful
Implementation correctly supports optional hook usage
The `createContextAndHook` utility properly implements both required and optional hooks:
- The required hook `useCtx` throws an error when context is not found
- The optional hook `useCtxWithoutGuarantee` safely returns an empty object when context is missing
- The usage in `useApiClient.ts` demonstrates correct handling of optional values with fallbacks

This verifies that the addition of `useTriggerAuthContextOptional` is well-implemented and follows React's context best practices.
🏁 Scripts executed
The following scripts were executed for the analysis:

```shell
#!/bin/bash
# Description: Verify the createContextAndHook implementation
# Expected: The utility should return a tuple with context and both required/optional hooks
ast-grep --pattern 'export function createContextAndHook$_($_) {
  $$$
}'
```

Length of output: 513

```shell
#!/bin/bash
# Check the implementation of the createContextAndHook utility
rg "createContextAndHook" -A 10 -B 2

# Also check for any test files that might verify this functionality
fd "createContextAndHook" --exec grep -l "createContextAndHook" {} \;
```

Length of output: 2174

```shell
#!/bin/bash
# See the complete implementation of createContextAndHook
cat packages/react-hooks/src/utils/createContextAndHook.ts
```

Length of output: 1515
references/nextjs-realtime/src/app/ai/[id]/page.tsx (2)
1-1: LGTM: Clean import statement
The import follows Next.js conventions and imports the required component.

10-14: Verify error handling in ClientAiDetails
The component passes the `publicAccessToken` directly without checking if it exists. Ensure that `ClientAiDetails` handles cases where the token is missing or invalid.
Let's verify the error handling in ClientAiDetails:
apps/webapp/app/utils/apiCors.ts (1)
11-11: Verify the headers being exposed in implementations
The addition of the `exposedHeaders` option is appropriate for the streaming functionality. However, ensure that only necessary headers are exposed in the implementations to maintain security.
Let's check how this option is being used across the codebase:
✅ Verification successful
The `exposedHeaders` implementation is secure and properly scoped
The implementation exposes only two specific JWT-related headers: "x-trigger-jwt" and "x-trigger-jwt-claims". This is a secure implementation as:
- The exposed headers are strictly limited to authentication-related headers
- The implementation is consistent across the codebase with a single usage in `apiBuilder.server.ts`
- The CORS strategy is properly controlled via the `corsStrategy` option
- No sensitive or unnecessary headers are being exposed

🏁 Scripts executed
The following scripts were executed for the analysis:

```shell
#!/bin/bash
# Search for usage of exposedHeaders in CORS configurations
rg -l 'exposedHeaders.*\['

# Look for streaming-related API routes that might use CORS
rg -l 'text/event-stream.*Content-Type'
```

Length of output: 164

```shell
#!/bin/bash
# Examine the actual implementation of exposedHeaders in the found files
rg -A 5 -B 5 'exposedHeaders' apps/webapp/app/utils/apiCors.ts apps/webapp/app/services/routeBuiilders/apiBuilder.server.ts

# Also search for any CORS configuration setup
rg -A 5 -B 5 'cors|CORS' apps/webapp/app/services/routeBuiilders/apiBuilder.server.ts

# Look for streaming response headers being set
rg -A 3 'headers\.set.*Content-Type'
```

Length of output: 9631
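For context, exposing response headers to browser JavaScript comes down to emitting the `Access-Control-Expose-Headers` header; without it, `fetch()` callers cannot read non-safelisted headers like the JWT headers above. A minimal, illustrative builder (not the webapp's actual implementation):

```typescript
// Build CORS response headers, exposing only the whitelisted headers to browser JS.
function buildCorsHeaders(origin: string, exposedHeaders: string[]): Record<string, string> {
  const headers: Record<string, string> = {
    "Access-Control-Allow-Origin": origin,
  };
  if (exposedHeaders.length > 0) {
    // Browsers hide non-safelisted response headers from scripts unless listed here.
    headers["Access-Control-Expose-Headers"] = exposedHeaders.join(", ");
  }
  return headers;
}

const corsHeaders = buildCorsHeaders("*", ["x-trigger-jwt", "x-trigger-jwt-claims"]);
```

Keeping the expose list as an explicit whitelist (rather than `*`) preserves the security property the verification above confirms: only the two JWT headers are reachable from browser code.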
packages/trigger-sdk/src/v3/index.ts (2)
7-7: Duplicate export statement detected
The duplicate export issue was previously identified. Please refer to the existing review comment for the fix.
Also applies to: 13-13

42-42: Verify the RealtimeRun type implementation and usage
The new `RealtimeRun` type has been added to the exports. Let's verify its implementation and usage across the codebase.
✅ Verification successful
Based on the shell script results, I can now generate the final response:
RealtimeRun type is properly implemented and used across the codebase
The type is well-defined in `packages/core/src/v3/apiClient/runStream.ts` as `RealtimeRun<TTask extends AnyTask> = TaskRunShape<TTask>` and is appropriately:
- Exported from core package and trigger-sdk
- Used extensively in react-hooks package for real-time run tracking
- Implemented in various React components and hooks with proper type safety

🏁 Scripts executed
The following scripts were executed for the analysis:

```shell
#!/bin/bash
# Description: Verify the RealtimeRun type implementation and its usage

# Check the type definition
echo "Checking RealtimeRun type definition:"
ast-grep --pattern 'type RealtimeRun = $_'

# Check for usages of RealtimeRun
echo -e "\nChecking RealtimeRun usages:"
rg "RealtimeRun" -t ts
```

Length of output: 10085
packages/react-hooks/package.json (2)
42-42: Verify security of the new throttleit dependency
The addition of `throttleit` aligns well with the real-time functionality requirements. Let's verify its security and maintenance status.
✅ Verification successful
Throttleit dependency is properly integrated and no security concerns found
The verification shows:
- The package is properly imported and used in `packages/react-hooks/src/utils/throttle.ts` as a utility wrapper
- It's integrated into the real-time functionality via the `useRealtime.ts` hook
- No known security vulnerabilities found in the registry
- The implementation follows best practices with a proper abstraction layer

🏁 Scripts executed
The following scripts were executed for the analysis:

```shell
#!/bin/bash
# Description: Check throttleit package for security issues and maintenance status

# Check npm for latest versions and publish dates
npm view throttleit time --json | jq '.'

# Check for known vulnerabilities
npm audit --package-lock-only --audit-level=low --json <<EOF
{
  "dependencies": {
    "throttleit": "^2.1.0"
  }
}
EOF
```

Length of output: 267

```shell
#!/bin/bash
# Verify the usage of throttleit and its integration
rg -l "throttleit|throttle" --type ts --type js

# Check if there are any direct imports of throttleit
rg "import.*throttle" --type ts --type js -A 2

# Check package-lock.json for the resolved version and dependencies
fd "package-lock.json" --exec cat {} \; | jq '.packages | .[] | select(.name == "throttleit")'

# Check if there are any security advisories for throttleit
curl -s "https://registry.npmjs.org/-/npm/v1/security/advisories/search?package=throttleit" | jq '.'
```

Length of output: 1530

41-41: Verify impact of SWR version update
Let's verify if the SWR version update has any breaking changes that could affect the real-time functionality.
references/nextjs-realtime/src/app/ai/[id]/ClientAiDetails.tsx (1)
1-6: LGTM! Imports and directives are properly structured.
The "use client" directive and imports are correctly organized, with clear separation between UI components, types, and hooks.
references/nextjs-realtime/src/trigger/ai.ts (1)
1-7: LGTM: Imports and type definitions are well-structured
The imports are appropriate for the functionality, and the STREAMS type is well-defined with proper typing.
apps/webapp/app/routes/api.v1.tasks.$taskId.trigger.ts (3)
8-8: Import path typo 'routeBuiilders' still needs to be fixed

21-21: LGTM: New header schema addition
The addition of the optional "x-trigger-client" header follows the existing schema pattern correctly.

26-39: LGTM: Well-structured route configuration with security measures
The route configuration is comprehensive, including proper validation, authorization, and security measures.
Let's verify the content length limit configuration:
✅ Verification successful
Content length limit is properly configured and consistent
The `maxContentLength` configuration is correctly set to `env.TASK_PAYLOAD_MAXIMUM_SIZE`, which is defined in `env.server.ts` as 3MB (3,145,728 bytes). The implementation in `apiBuilder.server.ts` shows proper handling of content length validation, returning a 413 status code for oversized requests.
🏁 Scripts executed
The following scripts were executed for the analysis:

```shell
#!/bin/bash
# Description: Verify the task payload size limit configuration

# Check for any other routes using different payload size limits
rg -l "TASK_PAYLOAD_MAXIMUM_SIZE" --type ts

# Check the environment variable definition
rg "TASK_PAYLOAD_MAXIMUM_SIZE.*=.*" --type ts
```

Length of output: 229

```shell
#!/bin/bash
# Check the content of env.server.ts to see the definition
cat apps/webapp/app/env.server.ts

# Also check the batch route to compare the configuration
cat apps/webapp/app/routes/api.v1.tasks.$taskId.batch.ts
```

Length of output: 11684

```shell
#!/bin/bash
# Check the batch route with the correct path
cat "apps/webapp/app/routes/api.v1.tasks.$taskId.batch.ts"

# Also check if there are any other routes with maxContentLength configuration
rg "maxContentLength" --type ts -A 2 -B 2
```

Length of output: 2249
packages/trigger-sdk/src/v3/runs.ts (2)
11-11: LGTM: Import of RealtimeRun type
The addition of the `RealtimeRun` type aligns with the PR's real-time streaming functionality objectives.

32-39: Verify integration with subscription functions
While the `RealtimeRun` type is now exported, it's not clear how it integrates with the existing subscription functions (`subscribeToRun` and `subscribeToRunsWithTag`). Consider documenting the relationship or updating these functions to utilize the new type.
Let's check for any related changes in the codebase:
packages/core/src/v3/apiClient/runStream.ts (3)
59-72: Well-structured type definitions for stream handling!
The `StreamPartResult` and `RunWithStreamsResult` types provide excellent type safety for stream handling. The mapped type implementation ensures type safety across all stream keys.

98-104: Well-designed interfaces for stream handling!
The `StreamSubscription` and `StreamSubscriptionFactory` interfaces provide a clean abstraction for stream handling, making it easy to implement different streaming strategies.

155-162: Clean and extensible provider interface design!
The `RunShapeProvider` interface and `RunSubscriptionOptions` type provide a flexible foundation for different shape handling implementations.
apps/webapp/app/services/routeBuiilders/apiBuilder.server.ts (2)
Line range hint 19-50: LGTM: Type definitions are well-structured and type-safe
The type definitions for route builder options and handler functions are well-designed with:
- Proper generic constraints
- Optional type parameters with sensible defaults
- Consistent pattern across API key and PAT variants

307-315: LGTM: Well-designed action route builder options
The `ApiKeyActionRouteBuilderOptions` type is well-designed:
- Extends the base options type to maintain consistency
- Adds body schema and max content length options specific to actions
- Maintains type safety with proper generic constraints
packages/react-hooks/src/hooks/useApiClient.ts (2)
6-10: Type definition for UseApiClientOptions is well-structured
The optional parameters are correctly defined, enhancing the flexibility and configurability of the `useApiClient` function.

12-13: Function signature update improves usability
Updating `useApiClient` to accept an optional `options` parameter allows for greater control over API client configuration.
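The precedence concern raised earlier for this file is just object-spread semantics: later spreads win. A standalone sketch (the option fields here are illustrative, not the hook's actual `ApiRequestOptions` shape):

```typescript
type RequestOptions = { retries?: number; timeoutMs?: number };

// Later spread wins: per-call `options` override context-level `auth` defaults.
// Reversing the spread order reverses the precedence, which is exactly the
// inconsistency the review comment flags.
function mergeLastWins(auth?: RequestOptions, options?: RequestOptions): RequestOptions {
  return { ...auth, ...options };
}

const fromContext: RequestOptions = { retries: 3, timeoutMs: 5_000 };
const perCall: RequestOptions = { timeoutMs: 1_000 };

const merged = mergeLastWins(fromContext, perCall);
```

Fields absent from the later object are untouched, so `retries` survives from the context while `timeoutMs` is overridden per call.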
apps/webapp/app/services/apiAuth.server.ts (1)
44-45: Good practice: Early return when API key is missing
The addition of `if (!apiKey) { return; }` enhances the function by exiting early when no API key is present. This prevents unnecessary processing and potential errors in the authentication flow.
Actionable comments posted: 8
🧹 Outside diff range and nitpick comments (9)
references/nextjs-realtime/src/components/TriggerButtonWithStreaming.tsx (3)
4-4: Consider using more specific type imports
Instead of using a `type` import with a wildcard, consider importing the specific types needed from the AI module for better type safety and code maintainability.

```diff
-import type { STREAMS, openaiStreaming } from "@/trigger/ai";
+import type { STREAMS } from "@/trigger/ai";
+import type { openaiStreaming } from "@/trigger/ai";
```
64-64: Consider dynamic height instead of fixed height
Using a fixed height of `60vh` might not work well on all screen sizes. Consider using dynamic height calculation or min/max height constraints.

```diff
- <div className="h-[60vh] overflow-y-auto">
+ <div className="min-h-[300px] max-h-[60vh] overflow-y-auto">
```
65-69
: Enhance loading state UX
Consider adding a loading spinner or skeleton UI for a better user experience while fetching data.
{weather ? (
  <p className="text-lg leading-relaxed">{text || "Preparing weather report..."}</p>
) : (
- <p className="text-lg">Fetching weather data...</p>
+ <div className="flex items-center space-x-2">
+   <LoadingSpinner />
+   <p className="text-lg">Fetching weather data...</p>
+ </div>
)}
references/nextjs-realtime/src/components/ui/dialog.tsx (4)
16-30
: Consider adding ARIA attributes for better accessibility.
The DialogOverlay implementation is well-structured with proper TypeScript types and animations. However, consider enhancing accessibility by adding ARIA attributes.
<DialogPrimitive.Overlay
  ref={ref}
  className={cn(
    "fixed inset-0 z-50 bg-black/80 data-[state=open]:animate-in data-[state=closed]:animate-out data-[state=closed]:fade-out-0 data-[state=open]:fade-in-0",
    className
  )}
+ aria-hidden="true"
  {...props}
/>
31-53
: Enhance keyboard navigation for the close button.
While the DialogContent implementation is solid, the close button's keyboard navigation could be improved by adding explicit keyboard handling.
<DialogPrimitive.Close
  className="absolute right-4 top-4 rounded-sm opacity-70 ring-offset-background transition-opacity hover:opacity-100 focus:outline-none focus:ring-2 focus:ring-ring focus:ring-offset-2 disabled:pointer-events-none data-[state=open]:bg-accent data-[state=open]:text-muted-foreground"
+ onKeyDown={(e) => {
+   if (e.key === 'Enter' || e.key === ' ') {
+     e.preventDefault();
+     e.currentTarget.click();
+   }
+ }}
+ tabIndex={0}
>
55-82
: Consider using semantic HTML elements for better accessibility.
While the implementation is clean, using semantic HTML elements would improve accessibility and SEO.
-const DialogHeader = ({
+const DialogHeader = React.forwardRef<
+  HTMLElement,
+  React.HTMLAttributes<HTMLElement>
+>(({
  className,
  ...props
-}: React.HTMLAttributes<HTMLDivElement>) => (
-  <div
+}, ref) => (
+  <header
+    ref={ref}
    className={cn(
      "flex flex-col space-y-1.5 text-center sm:text-left",
      className
    )}
    {...props}
  />
-)
+))

-const DialogFooter = ({
+const DialogFooter = React.forwardRef<
+  HTMLElement,
+  React.HTMLAttributes<HTMLElement>
+>(({
  className,
  ...props
-}: React.HTMLAttributes<HTMLDivElement>) => (
-  <div
+}, ref) => (
+  <footer
+    ref={ref}
    className={cn(
      "flex flex-col-reverse sm:flex-row sm:justify-end sm:space-x-2",
      className
    )}
    {...props}
  />
-)
+))
83-108
: LGTM! Consider adding default ARIA roles.
The DialogTitle and DialogDescription implementations are well-structured. Consider adding default ARIA roles for better screen reader support.
<DialogPrimitive.Title
  ref={ref}
  className={cn(
    "text-lg font-semibold leading-none tracking-tight",
    className
  )}
+ role="heading"
+ aria-level={2}
  {...props}
/>
packages/react-hooks/src/hooks/useRealtime.ts (2)
9-13
: Consider documenting the experimental throttle option.
The experimental_throttleInMs option could benefit from JSDoc documentation explaining its purpose, default value, and potential impact on performance. Add documentation like this:
export type UseRealtimeRunOptions = UseApiClientOptions & {
  id?: string;
  enabled?: boolean;
+ /**
+  * @experimental
+  * Throttle the updates in milliseconds to reduce re-renders.
+  * @default 0
+  */
  experimental_throttleInMs?: number;
};
425-441
: Consider refactoring duplicate sorting logic.
The insertRunShape and insertRunShapeInOrder functions share similar logic for inserting and ordering runs. Consider creating a generic sorting utility to reduce code duplication. Create a shared utility function:
function insertAndOrderRun<TTask extends AnyTask, TKey extends keyof RealtimeRun<TTask>>(
  previousRuns: RealtimeRun<TTask>[],
  run: RealtimeRun<TTask>,
  orderBy: TKey
): RealtimeRun<TTask>[] {
  const existingRun = previousRuns.find((r) => r.id === run.id);
  if (existingRun) {
    return previousRuns.map((r) => (r.id === run.id ? run : r));
  }
  const orderValue = run[orderBy];
  const index = previousRuns.findIndex((r) => r[orderBy] > orderValue);
  return index === -1
    ? [...previousRuns, run]
    : [...previousRuns.slice(0, index), run, ...previousRuns.slice(index)];
}
Then use it in both functions:
function insertRunShapeInOrder<TTask extends AnyTask>(...) {
  return insertAndOrderRun(previousRuns, run, 'number');
}

function insertRunShape<TTask extends AnyTask>(...) {
  return insertAndOrderRun(previousRuns, run, 'createdAt');
}
Also applies to: 460-478
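For illustration, the shared utility suggested above can be exercised on its own; here is a self-contained sketch using a plain record type as a stand-in for RealtimeRun (types simplified for the example, not the package's actual definitions):

```typescript
// Simplified stand-in for RealtimeRun: order runs by a numeric key,
// replacing any existing entry that has the same id.
type Run = { id: string; number: number; createdAt: number };

function insertAndOrderRun(
  previousRuns: Run[],
  run: Run,
  orderBy: "number" | "createdAt"
): Run[] {
  if (previousRuns.some((r) => r.id === run.id)) {
    // Replace in place, keeping the existing order.
    return previousRuns.map((r) => (r.id === run.id ? run : r));
  }
  // Insert before the first run whose key is greater.
  const index = previousRuns.findIndex((r) => r[orderBy] > run[orderBy]);
  return index === -1
    ? [...previousRuns, run]
    : [...previousRuns.slice(0, index), run, ...previousRuns.slice(index)];
}
```

Ordering by "number" matches insertRunShapeInOrder, while ordering by "createdAt" matches insertRunShape, which is what makes the shared helper viable.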
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
⛔ Files ignored due to path filters (1)
pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (8)
packages/react-hooks/src/hooks/useRealtime.ts (1 hunks)
packages/react-hooks/src/hooks/useTaskTrigger.ts (1 hunks)
references/nextjs-realtime/package.json (2 hunks)
references/nextjs-realtime/src/app/ai/[id]/ClientAiDetails.tsx (1 hunks)
references/nextjs-realtime/src/app/page.tsx (2 hunks)
references/nextjs-realtime/src/components/TriggerButton.tsx (1 hunks)
references/nextjs-realtime/src/components/TriggerButtonWithStreaming.tsx (1 hunks)
references/nextjs-realtime/src/components/ui/dialog.tsx (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
- references/nextjs-realtime/src/app/ai/[id]/ClientAiDetails.tsx
- references/nextjs-realtime/src/components/TriggerButton.tsx
🔇 Additional comments (11)
references/nextjs-realtime/package.json (3)
15-15
: LGTM: Consistent with existing UI framework
The addition of @radix-ui/react-dialog is consistent with the existing Radix UI components in use and maintains version compatibility.
36-36
: Skipping comment on @next/bundle-analyzer version mismatch
A previous review already addressed this issue.
13-13
: Verify the necessity of multiple OpenAI-related packages
The codebase includes three AI-related packages: @ai-sdk/openai, ai, and openai. While this aligns with the PR's real-time streaming objectives, consider:
- @ai-sdk/openai is at version 0.0.72, indicating it's in early development
- Both @ai-sdk/openai and openai might have overlapping functionality
Also applies to: 22-22, 27-27
✅ Verification successful
Let me check the actual usage patterns to determine if there's real duplication or if these packages serve different purposes.
Multiple OpenAI packages serve distinct purposes - no redundancy found
After reviewing the implementation, the packages serve complementary purposes:
- @ai-sdk/openai provides the OpenAI client configuration
- The ai package provides streaming utilities (streamText, TextStreamPart)
- Both are actively used in the real-time streaming implementation
The combination is intentional and each package serves a specific role in enabling the real-time AI features.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Check if these packages are actively used in the codebase
echo "Checking @ai-sdk/openai usage:"
rg -l "@ai-sdk/openai"
echo "\nChecking direct openai usage:"
rg -l "from ['\"](openai|ai)['\"]"
Length of output: 935
Script:
#!/bin/bash
# Check the actual implementation in the realtime example
echo "Content of references/nextjs-realtime/src/trigger/ai.ts:"
cat references/nextjs-realtime/src/trigger/ai.ts
echo "\nChecking imports in the file:"
rg -A 3 "import.*from ['\"](@ai-sdk/openai|openai|ai)['\"]" references/nextjs-realtime/src/trigger/ai.ts
Length of output: 3820
references/nextjs-realtime/src/app/page.tsx (3)
3-4
: LGTM: New imports align with real-time streaming feature
The added imports for TriggerButton components and auth SDK are appropriate for implementing the real-time streaming functionality described in the PR objectives.
Also applies to: 6-6
8-15
: Consider security implications of token creation in page component
While the token scope is appropriately limited, creating the token directly in the page component means it could be exposed in the initial HTML. Consider moving token creation to a protected API route.
#!/bin/bash
# Check if token creation exists in API routes
echo "Checking for token creation in API routes..."
fd api --type d --exec rg -l "createPublicToken"
echo "Checking for other page components creating tokens..."
ast-grep --pattern 'auth.createPublicToken'
Add error handling for token creation
The token creation should be wrapped in a try-catch block to handle potential failures gracefully.
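A minimal sketch of that guard, with the token-producing call injected so a failure degrades to an undefined token instead of crashing the render (the helper name is illustrative, not part of the SDK):

```typescript
// Hypothetical helper: run an async token factory and swallow failures,
// returning undefined so the page can render a degraded state instead.
async function tryCreateToken(
  create: () => Promise<string>
): Promise<string | undefined> {
  try {
    return await create();
  } catch (error) {
    console.error("Failed to create public access token", error);
    return undefined;
  }
}
```

The page component would then branch on the undefined token, for example rendering the trigger buttons in a disabled state.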
Consider SSR implications of async component
Making the page component async affects Next.js rendering behavior. Consider using a client component for token creation if immediate data isn't required for SEO.
28-29
: LGTM: UI components properly implement real-time streaming feature
The new button components are well-structured and properly receive the access token prop. The implementation aligns with the PR objectives for adding real-time streaming support.
references/nextjs-realtime/src/components/TriggerButtonWithStreaming.tsx (1)
20-26
: Verify environment variable handling
The baseURL is using an environment variable directly without validation. Consider adding a fallback or validation to prevent runtime errors if the environment variable is not set.
references/nextjs-realtime/src/components/ui/dialog.tsx (2)
1-15
: LGTM! Clean and well-structured imports and basic component setup.
The imports and basic component mappings are well-organized, following React best practices. The "use client" directive is correctly placed for client-side rendering.
110-121
: LGTM! Clean and organized exports.
The export structure is clear and follows best practices with named exports.
packages/react-hooks/src/hooks/useRealtime.ts (1)
495-507
: Consider handling potential exceptions in the stream processing loop.
The for await...of loop in processRealtimeRunWithStreams should handle potential exceptions to prevent unhandled promise rejections.
This issue was already identified in a previous review comment. The suggestion to wrap the loop body in a try-catch block is still valid.
packages/react-hooks/src/hooks/useTaskTrigger.ts (1)
22-26
: The TriggerInstance interface is well-defined
The interface correctly specifies the types and optional properties.
Actionable comments posted: 5
🧹 Outside diff range and nitpick comments (10)
packages/core/src/v3/apiClient/stream.ts (1)
33-35
: Consider using async iteration for better memory efficiency
For large datasets, consider using async iteration to process rows one at a time instead of loading all rows into memory.
- for (const shapeRow of newShape.rows) {
-   await callback(schema.parse(shapeRow));
- }
+ for await (const shapeRow of newShape.rows[Symbol.asyncIterator]?.() ?? newShape.rows) {
+   await callback(schema.parse(shapeRow));
+ }
packages/react-hooks/src/utils/throttle.ts (2)
8-14
: Add validation for throttleInMs parameter
While the implementation handles the case when throttleInMs is undefined, it should also validate that the value is positive when provided.
export function createThrottledQueue<T>(
  onFlush: (items: T[]) => Promise<void>,
  throttleInMs?: number
): ThrottledQueue<T> {
+ if (throttleInMs !== undefined && throttleInMs <= 0) {
+   throw new Error("throttleInMs must be a positive number");
+ }
  let queue: T[] = [];
  let lastFlushTime = 0;
  let flushPromise: Promise<void> | null = null;
1-1
: Add JSDoc documentation for the utility
Since this is a reusable utility in a React hooks package, it would be valuable to add JSDoc documentation with:
- Description of the utility's purpose
- Example usage with real-time streams
- Parameter descriptions
- Common use cases and patterns
+/**
+ * Creates a throttled queue that batches items and processes them at a controlled rate.
+ * Useful for managing real-time data streams and preventing excessive updates.
+ *
+ * @example
+ * const queue = createThrottledQueue<Event>(
+ *   async (events) => {
+ *     // Process batch of events
+ *     await processEvents(events);
+ *   },
+ *   1000 // Process at most once per second
+ * );
+ *
+ * // Add events to queue
+ * queue.add(event);
+ *
+ * @param onFlush - Callback to process batched items
+ * @param throttleInMs - Optional throttle duration in milliseconds
+ */
// Reusable throttle utility
packages/react-hooks/src/hooks/useTaskTrigger.ts (3)
75-78
: Add documentation for experimental throttling feature
The experimental_throttleInMs option needs documentation explaining its purpose, usage, and potential impact on performance. Consider adding a JSDoc comment like:
export type UseRealtimeTaskTriggerOptions = UseTaskTriggerOptions & {
  enabled?: boolean;
+ /** @experimental Throttles real-time updates to reduce network traffic.
+  * @param experimental_throttleInMs - Minimum time (in ms) between updates
+  */
  experimental_throttleInMs?: number;
};
80-87
: Enhance type safety for stream records
The default Record<string, any> type for TStreams is too permissive and could lead to runtime errors. Consider adding constraints or validation:
export type RealtimeTriggerInstanceWithStreams<
  TTask extends AnyTask,
- TStreams extends Record<string, any> = Record<string, any>,
+ TStreams extends Record<string, unknown> = Record<string, unknown>,
> = UseRealtimeRunWithStreamsInstance<TTask, TStreams> & {
  submit: (payload: TaskPayload<TTask>) => void;
  isLoading: boolean;
  handle?: RunHandleFromTypes<InferRunTypes<TTask>>;
};
97-101
: Improve access token fallback logic
The current access token fallback using the nullish coalescing operator could be more explicit about the token source. Consider adding a helper function:
Consider adding a helper function:
function getAccessToken(handle?: { publicAccessToken?: string }, options?: { accessToken?: string }) { const token = handle?.publicAccessToken ?? options?.accessToken; if (!token) { console.warn('No access token provided for real-time updates'); } return token; }Then use it in the hook:
const realtimeInstance = useRealtimeRunWithStreams<TTask, TStreams>(triggerInstance.handle?.id, { ...options, id: triggerInstance.handle?.id, - accessToken: triggerInstance.handle?.publicAccessToken ?? options?.accessToken, + accessToken: getAccessToken(triggerInstance.handle, options), });apps/webapp/test/realtimeClient.test.ts (1)
89-90
: Remove debug logging statement
Debug logging statements should not be committed to test files, as they can clutter test output.
- console.log(headers);
packages/react-hooks/src/hooks/useRealtime.ts (3)
93-107
: Extract common useEffect pattern into a custom hook.
The same useEffect pattern for handling real-time subscriptions is duplicated across multiple hooks. Consider extracting this into a custom hook to improve maintainability and reduce duplication. Create a new custom hook:
Create a new custom hook:
function useRealtimeSubscription( enabled: boolean | undefined, identifier: string | undefined, triggerRequest: () => Promise<void>, stop: () => void ) { useEffect(() => { if (typeof enabled === "boolean" && !enabled) { return; } if (!identifier) { return; } triggerRequest().finally(() => {}); return () => { stop(); }; }, [identifier, stop, enabled, triggerRequest]); }Then use it in each hook:
useRealtimeSubscription(options?.enabled, runId, triggerRequest, stop);Also applies to: 214-228, 296-306, 377-387
485-507
: Document the throttling mechanism.The throttled queue implementation lacks documentation about its purpose and behavior. Add JSDoc comments to explain the throttling mechanism and its configuration.
Add documentation:
/** * Creates a throttled queue for batching stream updates. * Updates are grouped by type and processed in batches to reduce UI updates. * * @param updates - Array of stream updates to process * @param throttleInMs - Optional throttle duration in milliseconds */ const streamQueue = createThrottledQueue<StreamUpdate>(...);
409-425
: Add comprehensive documentation for utility functions.The utility functions lack documentation about their purpose, parameters, and return values. Add JSDoc comments to improve maintainability.
Add documentation:
/** * Inserts a run into an ordered array of runs, maintaining order by run number. * If a run with the same ID exists, it will be replaced. * * @param previousRuns - Array of existing runs * @param run - New run to insert * @returns New array with the run inserted in the correct position */ function insertRunShapeInOrder<TTask extends AnyTask>(...); /** * Inserts a run into an ordered array of runs, maintaining order by creation timestamp. * If a run with the same ID exists, it will be replaced. * * @param previousRuns - Array of existing runs * @param run - New run to insert * @returns New array with the run inserted in the correct position */ function insertRunShape<TTask extends AnyTask>(...);Also applies to: 444-462
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
⛔ Files ignored due to path filters (1)
pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (10)
apps/webapp/app/services/realtimeClient.server.ts (3 hunks)
apps/webapp/test/realtimeClient.test.ts (3 hunks)
docker/docker-compose.yml (1 hunks)
internal-packages/testcontainers/src/utils.ts (1 hunks)
packages/core/package.json (2 hunks)
packages/core/src/v3/apiClient/stream.ts (1 hunks)
packages/react-hooks/src/hooks/useRealtime.ts (1 hunks)
packages/react-hooks/src/hooks/useTaskTrigger.ts (1 hunks)
packages/react-hooks/src/utils/throttle.ts (1 hunks)
references/nextjs-realtime/src/components/TriggerButtonWithStreaming.tsx (1 hunks)
✅ Files skipped from review due to trivial changes (1)
- docker/docker-compose.yml
🚧 Files skipped from review as they are similar to previous changes (2)
- packages/core/package.json
- references/nextjs-realtime/src/components/TriggerButtonWithStreaming.tsx
🔇 Additional comments (10)
packages/core/src/v3/apiClient/stream.ts (1)
26-30
: Verify Shape class implementation from @electric-sql/client
The change from using .values() to directly accessing .rows looks good, but we should verify the Shape class implementation.
✅ Verification successful
Let me try a different approach to verify the Shape implementation.
Let me try one final verification to check the Shape import and its usage.
Changes to Shape usage are correct and well-implemented
The verification shows that:
- The Shape class is dynamically imported from "@electric-sql/client" in the stream.ts file
- The implementation correctly uses the Shape constructor and its rows property
- The pattern is consistently used for both initial data and subscription updates
- The changes align with the @electric-sql/client v0.7.1 API
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check Shape class implementation and version compatibility
# Check for Shape class implementation
ast-grep --pattern 'class Shape {
$$$
rows$_: $_
$$$
}'
# Check package.json for @electric-sql/client version
fd package.json --exec jq -r '.dependencies["@electric-sql/client"] // .devDependencies["@electric-sql/client"]'
Length of output: 334
Script:
#!/bin/bash
# Search for Shape type definitions or interfaces
ast-grep --pattern 'interface Shape {
$$$
}'
# Search for Shape type definitions in .d.ts files
fd '\.d\.ts$' --exec rg -l 'Shape'
# Search for Shape usage patterns
rg 'Shape.*rows' -A 2 -B 2
Length of output: 461
Script:
#!/bin/bash
# Check imports and type definitions in the stream.ts file
cat packages/core/src/v3/apiClient/stream.ts
# Search for any Shape-related types or imports across the codebase
rg "import.*Shape" -A 2
Length of output: 4226
packages/react-hooks/src/utils/throttle.ts (1)
1-6
: Well-designed TypeScript interface!
The ThrottledQueue<T> type provides a clean and type-safe interface with clearly defined methods for queue operations.
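As context for the comments above, a minimal sketch of what a queue satisfying this interface might look like (the real implementation in packages/react-hooks/src/utils/throttle.ts may differ; this is an assumption-laden illustration, not the package's code):

```typescript
// Illustrative ThrottledQueue: batch items and hand them to onFlush,
// flushing immediately when unthrottled or the throttle window has elapsed.
type ThrottledQueue<T> = {
  add: (item: T) => void;
  flush: () => Promise<void>;
};

function createThrottledQueue<T>(
  onFlush: (items: T[]) => Promise<void>,
  throttleInMs?: number
): ThrottledQueue<T> {
  let queue: T[] = [];
  let lastFlushTime = 0;

  const flush = async () => {
    if (queue.length === 0) return;
    const items = queue;
    queue = [];
    lastFlushTime = Date.now();
    await onFlush(items);
  };

  return {
    add(item: T) {
      queue.push(item);
      // Flush right away when no throttle is configured or the window passed.
      if (!throttleInMs || Date.now() - lastFlushTime >= throttleInMs) {
        void flush();
      }
    },
    flush,
  };
}
```

The design keeps add synchronous and cheap, so React render paths can enqueue stream updates without awaiting the flush.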
internal-packages/testcontainers/src/utils.ts (2)
Line range hint 58-65
: Consider additional configuration for real-time streaming
Since this PR introduces real-time streaming functionality, consider if the container needs additional configuration:
- Verify if port 3000 is sufficient for both HTTP and WebSocket connections
- Consider if any streaming-specific environment variables are required
Let's check the configuration requirements:
58-58
: Verify compatibility with the new ElectricSQL version
The upgrade from 0.7.5 to 0.8.1 could introduce breaking changes. Since this PR adds real-time streaming functionality, we should ensure this version supports the new features.
Let's verify the version compatibility and changelog:
Consider:
- Documenting the minimum required ElectricSQL version in the project's README
- Adding version constraints to prevent accidental downgrades
✅ Verification successful
Based on the search results, I can now provide a final response regarding the version upgrade from 0.7.5 to 0.8.1 of ElectricSQL:
Version upgrade is compatible with real-time streaming functionality
The version upgrade from 0.7.5 to 0.8.1 is aligned with the codebase's real-time streaming implementation. The evidence shows:
- The version is consistently used across both docker-compose and test container configurations
- The codebase has extensive real-time streaming functionality built on ElectricSQL
- The version is used in production code for handling real-time updates and SSE streams
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check ElectricSQL version compatibility and changes
# Check if the image exists and can be pulled
docker pull electricsql/electric:0.8.1
# Get image details and labels that might contain version info
docker inspect electricsql/electric:0.8.1
# Search for any version-specific configuration or dependencies in the codebase
rg -g '!node_modules' -g '!dist' "electric.*0\.7\.5|electric.*0\.8\.1"
Length of output: 492
Script:
#!/bin/bash
# Let's check for any configuration files, documentation, or test files that might provide insights into version compatibility
# Search for Electric-related configuration and documentation
rg -g '!node_modules' -g '!dist' -i "electric" "README|CHANGELOG|\.md$|\.json$" -A 3 -B 3
# Look for test files that might be using Electric features
fd -e test.ts -e spec.ts -e test.js -e spec.js | xargs rg "electric" -l
# Check for any streaming-related configurations or tests
rg -g '!node_modules' -g '!dist' "stream|realtime|real-time" -A 2 -B 2
Length of output: 237398
apps/webapp/test/realtimeClient.test.ts (4)
205-206
: Header name changes consistent with first test case
These changes mirror the header name updates from the first test case.
Line range hint 1-213
: Test changes look good overall
The test file has been successfully updated to reflect the new header names and URL parameters while maintaining good test coverage. The test structure and assertions properly validate the updated behavior of the RealtimeClient.
91-92
: Verify header name changes across the codebase
The header names have been updated from 'electric-shape-id' to 'electric-handle' and 'electric-chunk-last-offset' to 'electric-offset'. Let's verify these changes are consistent across the codebase.
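The rename is mechanical; a hypothetical sketch of mapping the old Electric header names onto the new ones (mapping taken from the test expectations above; the helper name is illustrative, not the webapp's actual code):

```typescript
// Old-to-new Electric header names exercised by the tests.
const HEADER_RENAMES: Record<string, string> = {
  "electric-shape-id": "electric-handle",
  "electric-chunk-last-offset": "electric-offset",
};

// Rewrite any legacy Electric header names onto a fresh Headers object,
// passing every other header through unchanged.
function rewriteElectricHeaders(headers: Headers): Headers {
  const rewritten = new Headers();
  headers.forEach((value, key) => {
    rewritten.set(HEADER_RENAMES[key.toLowerCase()] ?? key, value);
  });
  return rewritten;
}
```

Because Headers keys are case-insensitive, normalizing with toLowerCase() before the lookup keeps the rewrite robust.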
101-101
: Verify URL parameter changes across the codebase
The URL parameter has been changed from 'shape_id' to 'handle'. Let's verify this change is consistent across all URL constructions in the codebase.
Also applies to: 109-109
✅ Verification successful
Let me gather more specific information about the URL parameter usage in the test file and related files.
Let me check the implementation of the RealtimeClient to understand if this is just a test-specific change.
Based on the evidence gathered, I can now provide the final response:
The URL parameter change is test-specific and correctly implemented
The change from 'shape_id' to 'handle' is consistent with the implementation in realtimeClient.server.ts, which uses the 'handle' parameter. The test file is the only place where this URL parameter is constructed, and it matches the server's expectations, as seen in the extractShapeId function which looks for the 'handle' parameter.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for any remaining old parameter names in URL constructions
rg -l "shape_id=" --type ts
# Search for new parameter name to ensure consistent usage
rg -l "handle=" --type ts
# Search for URL construction patterns in TypeScript files
ast-grep --pattern 'new URL($_, $_)' --lang ts
Length of output: 1248
Script:
#!/bin/bash
# Check the specific test file content for context
cat apps/webapp/test/realtimeClient.test.ts
# Search for URL-related code in the codebase
rg "handle=" -A 2 -B 2 --type ts
# Look for any route handlers or API endpoints that might use these parameters
ast-grep --pattern 'app.get($_, async ($_, $_) => { $$$ })'
ast-grep --pattern 'router.get($_, async ($_, $_) => { $$$ })'
Length of output: 7115
Script:
#!/bin/bash
# Check the RealtimeClient implementation
fd -e ts RealtimeClient.server.ts -x cat {}
# Look for any URL parameter handling in the server implementation
rg -B 3 -A 3 "\.searchParams\.get\(" --type ts
Length of output: 11134
apps/webapp/app/services/realtimeClient.server.ts (2)
Line range hint 73-89
: Clean up and improve URL construction.
Several improvements could enhance maintainability and flexibility:
- Remove the commented-out code if it's no longer needed.
- Consider making the table name configurable rather than hardcoding 'public."TaskRun"'.
- Consider filtering URL parameters to prevent potential conflicts with Electric's parameters.
Here's a suggested refactoring:
const electricUrl = new URL(`${this.options.electricOrigin}/v1/shape`);
-// Copy over all the url search params to the electric url
-$url.searchParams.forEach((value, key) => {
- electricUrl.searchParams.set(key, value);
-});
+// Define allowed parameters to copy
+const allowedParams = new Set(['limit', 'offset', 'cursor']); // adjust as needed
+
+// Copy only allowed parameters
+$url.searchParams.forEach((value, key) => {
+ if (allowedParams.has(key)) {
+ electricUrl.searchParams.set(key, value);
+ }
+});
-// const electricParams = ["shape_id", "live", "offset", "columns", "cursor"];
-
-// electricParams.forEach((param) => {
-// if ($url.searchParams.has(param) && $url.searchParams.get(param)!)) {
-// electricUrl.searchParams.set(param, $url.searchParams.get(param)!);
-// }
-// });
electricUrl.searchParams.set("where", whereClause);
-electricUrl.searchParams.set("table", 'public."TaskRun"');
+electricUrl.searchParams.set("table", this.options.tableName ?? 'public."TaskRun"');
Let's verify the impact of these URL parameter changes:
#!/bin/bash
# Search for any other URL parameter handling in the codebase
rg -A 5 'searchParams\.set|searchParams\.get' --type ts
235-235
: Consider renaming function to match new behavior.
The function name extractShapeId is misleading, as it now extracts a 'handle' parameter instead of a 'shape_id'.
Consider renaming for clarity:
-function extractShapeId(url: URL) {
+function extractHandle(url: URL) {
return url.searchParams.get("handle");
}
Let's verify all callers are updated:
Actionable comments posted: 2
🧹 Outside diff range and nitpick comments (4)
apps/webapp/app/routes/realtime.v1.runs.ts (1)
30-30
: Consider documenting the version header's purpose.
The x-trigger-electric-version header's purpose and expected values should be documented for API consumers. Add a comment above the header usage explaining its purpose:
+ // The electric version header is used to handle client-specific behavior in real-time streaming
  request.headers.get("x-trigger-electric-version") ?? undefined
apps/webapp/app/routes/realtime.v1.runs.$runId.ts (1)
34-39
: Consider adding version header validation and documentation.
The addition of client version support is good, but could benefit from some improvements:
- Consider validating the version header format to ensure it meets expected patterns
- Add JSDoc comments to document the expected version format and its impact on streaming behavior
Here's a suggested improvement:
+ // Expected format: x.y.z (semver) + const version = request.headers.get("x-trigger-electric-version"); + const validatedVersion = version ? + /^\d+\.\d+\.\d+$/.test(version) ? version : undefined + : undefined; return realtimeClient.streamRun( request.url, authentication.environment, run.id, - request.headers.get("x-trigger-electric-version") ?? undefined + validatedVersion );apps/webapp/app/routes/realtime.v1.batches.$batchId.ts (1)
34-39
: Consider adding JSDoc comments for the streamBatch parameters.To improve maintainability and help API consumers, consider documenting the parameters of
streamBatch
, especially the new version parameter.Example documentation:
+ /** + * Streams batch execution updates in real-time + * @param url - The request URL + * @param environment - The runtime environment + * @param batchId - The ID of the batch to stream + * @param clientVersion - Optional client version for compatibility (x-trigger-electric-version header) + */ return realtimeClient.streamBatch( request.url, authentication.environment, batchRun.id, request.headers.get("x-trigger-electric-version") ?? undefined );apps/webapp/app/utils/longPollingFetch.ts (1)
22-27
: Consider using structuredClone for response initialization
While the current implementation works correctly, consider using structuredClone for a more efficient way to clone the response properties:
- response = new Response(response.body, {
-   headers,
-   status: response.status,
-   statusText: response.statusText,
- });
+ const init = structuredClone({
+   headers,
+   status: response.status,
+   statusText: response.statusText,
+ });
+ response = new Response(response.body, init);
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (7)
apps/webapp/app/routes/realtime.v1.batches.$batchId.ts (1 hunks)
apps/webapp/app/routes/realtime.v1.runs.$runId.ts (1 hunks)
apps/webapp/app/routes/realtime.v1.runs.ts (1 hunks)
apps/webapp/app/services/realtimeClient.server.ts (4 hunks)
apps/webapp/app/utils/longPollingFetch.ts (1 hunks)
apps/webapp/test/realtimeClient.test.ts (4 hunks)
packages/core/src/v3/apiClient/stream.ts (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (3)
- apps/webapp/app/services/realtimeClient.server.ts
- apps/webapp/test/realtimeClient.test.ts
- packages/core/src/v3/apiClient/stream.ts
🔇 Additional comments (4)
apps/webapp/app/routes/realtime.v1.runs.ts (1)
26-31
: LGTM! Verify version header consistency across routes.
The implementation looks good and follows best practices. The addition of the version header is handled safely using nullish coalescing.
Let's verify the consistent usage of this header across other real-time routes:
✅ Verification successful
Version header is consistently implemented across realtime routes
The verification confirms consistent implementation of the x-trigger-electric-version header:
- All realtime routes (runs, runs/$runId, batches/$batchId) handle the header identically using request.headers.get("x-trigger-electric-version") ?? undefined
- Found a default version "0.8.1" defined in packages/core/src/v3/apiClient/stream.ts, indicating proper version management
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for consistent usage of x-trigger-electric-version header across routes
# Expected: All real-time routes should handle the version header similarly
# Search for realtime routes and their version header handling
rg -l "realtimeClient" apps/webapp/app/routes/ | xargs rg "x-trigger-electric-version"
# Search for any other electric version header usage
rg "electric-version" --type ts
Length of output: 912
apps/webapp/app/routes/realtime.v1.runs.$runId.ts (1)
34-39: Verify error handling in streamRun implementation.
The code assumes streamRun handles errors appropriately. Let's verify the implementation to ensure proper error handling.
✅ Verification successful
Error handling is properly implemented in streamRun
The implementation shows comprehensive error handling:
- Handles concurrency limits with proper error responses (429)
- Uses try-catch blocks to ensure cleanup of resources
- Properly manages Redis connections and error states
- Returns appropriate error responses (500) for limit provider failures
- Utilizes longPollingFetch with header rewrites for backward compatibility
- Has thorough test coverage for error scenarios
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check streamRun implementation for error handling
# Search for streamRun implementation
ast-grep --pattern 'streamRun($$$) {
$$$
}'
# Look for error handling patterns
rg -A 5 'streamRun.*catch|throw.*streamRun|try.*streamRun'
Length of output: 108
Script:
#!/bin/bash
# Let's try a broader search for streamRun implementation and error handling
# Search for streamRun method definition with more flexible pattern
ast-grep --pattern 'streamRun'
# Search for files containing streamRun
rg -l "streamRun"
# Search for realtimeClient class or interface definition
ast-grep --pattern 'class $_ {
$$$
streamRun
$$$
}'
ast-grep --pattern 'interface $_ {
$$$
streamRun
$$$
}'
# Look for any error handling related to realtime runs
rg -A 5 "realtime.*error|error.*realtime"
Length of output: 399
Script:
#!/bin/bash
# Let's examine the implementation in the identified files
# Check realtimeClient implementation
cat apps/webapp/app/services/realtimeClient.server.ts
# Check test file for error scenarios
cat apps/webapp/test/realtimeClient.test.ts
Length of output: 18026
apps/webapp/app/utils/longPollingFetch.ts (2)
9-13: LGTM: Function signature update is well-designed
The new optional parameter maintains backward compatibility while providing a type-safe way to handle header rewrites.
9-46: Verify consistent header handling across streaming endpoints
Since this utility is part of real-time streaming functionality, let's ensure consistent header handling across all streaming endpoints.
✅ Verification successful
Header handling is consistent with streaming functionality
The `longPollingFetch` utility's header handling is consistent with the codebase's streaming patterns. The code correctly:
- Handles content-encoding and content-length headers for streaming responses
- Provides flexibility for header rewrites which is used by various streaming endpoints
- Maintains proper error handling with appropriate streaming-compatible responses
The header manipulation patterns found in the codebase confirm this is a well-designed utility that follows established patterns for streaming responses.
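As a rough sketch of the pattern described above — the function name and the `rewriteHeaders` record shape are illustrative here, not the actual `longPollingFetch` API:

```typescript
// Illustrative sketch, not the real longPollingFetch implementation.
// A streamed body no longer matches the upstream encoding or length,
// so those headers are dropped; optional rewrites map old names to new.
function prepareStreamingHeaders(
  upstream: Headers,
  rewriteHeaders?: Record<string, string>
): Headers {
  const headers = new Headers(upstream);

  // These must not be forwarded as-is for a streaming response.
  headers.delete("content-encoding");
  headers.delete("content-length");

  for (const [from, to] of Object.entries(rewriteHeaders ?? {})) {
    const value = headers.get(from);
    if (value !== null) {
      headers.delete(from);
      headers.set(to, value);
    }
  }

  return headers;
}
```

Copying into a fresh `Headers` instance first keeps the upstream response object untouched, which matters when the original headers are reused elsewhere.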
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for similar header handling patterns in streaming-related files
rg -l "new Headers|content-encoding|content-length" | grep -i "stream\|realtime"
# Look for potential header manipulation in route handlers
ast-grep --pattern 'export async function $_(request: Request) {
$$$
headers.$_($$$)
$$$
}'
Length of output: 182
Script:
#!/bin/bash
# Let's try a broader search for streaming-related files and header handling
rg -l "stream|realtime" --type ts --type tsx
# Search for header manipulation patterns more broadly
rg "new Headers|headers\.(set|delete|get)" --type ts --type tsx -C 2
# Look for Response construction patterns
rg "new Response" --type ts --type tsx -C 2
Length of output: 247
Script:
#!/bin/bash
# Search for streaming-related files
rg -l "stream|realtime" -t ts
# Search for header manipulation patterns
rg "new Headers|headers\.(set|delete|get)" -t ts -C 2
# Look for Response construction patterns
rg "new Response" -t ts -C 2
Length of output: 99575
Actionable comments posted: 11
🧹 Outside diff range and nitpick comments (18)
references/nextjs-realtime/trigger.config.ts (1)
7-9: Consider adding documentation for the RSC extension configuration
Since this is in a reference implementation directory, it would be helpful to add a comment explaining what `rscExtension()` does and why it's needed for real-time functionality. This would make it more valuable as a reference for other developers. Consider adding a comment like this:

```diff
  project: "proj_bzhdaqhlymtuhlrcgbqy",
  dirs: ["./src/trigger"],
+ // Enable React Server Components (RSC) support for real-time streaming functionality
  build: {
    extensions: [rscExtension()],
  },
```
packages/core/src/v3/build/extensions.ts (1)
68-68: LGTM! Consider adding JSDoc comments.
The addition of the optional `conditions` property to `BuildLayer` is a clean extension of the interface. However, documentation would help clarify its purpose and expected values. Consider adding JSDoc comments to document the purpose and usage:

```diff
+ /** Array of condition strings that can be used to customize the build behavior */
  conditions?: string[];
```
packages/core/src/v3/runMetadata/index.ts (2)
68-74: Enhance stream method with constraints and documentation
The stream method implementation could be improved with:
- Type constraints on T to ensure serializable data
- Validation of the value parameter
- JSDoc comments explaining the streaming behavior

Apply this diff to enhance the implementation:

```diff
+ /**
+  * Streams values for a given key with support for cancellation.
+  * @param key The key to stream values for
+  * @param value The source stream or async iterable
+  * @param signal Optional AbortSignal for cancellation
+  * @returns An AsyncIterable that yields streamed values
+  */
  public stream<T extends DeserializedJson>(
    key: string,
    value: AsyncIterable<T> | ReadableStream<T>,
    signal?: AbortSignal
  ): Promise<AsyncIterable<T>> {
+   if (!value) {
+     throw new Error("Stream value must not be null or undefined");
+   }
    return this.#getManager().stream(key, value, signal);
  }
```
Line range hint 1-74: Well-structured Singleton implementation with manager delegation
The RunMetadataAPI class effectively implements the Singleton pattern and uses dependency injection through the global manager. This design:
- Ensures consistent metadata management across the application
- Allows for easy testing by swapping the manager implementation
- Supports the new real-time streaming capabilities while maintaining backward compatibility
Consider adding a comment explaining the Singleton pattern usage and manager delegation strategy for future maintainers.
🧰 Tools
🪛 Biome
[error] 53-53: The function should not return a value because its return type is void.
The function is here:
'void' signals the absence of value. The returned value is likely to be ignored by the caller.
(lint/correctness/noVoidTypeReturn)
[error] 57-57: The function should not return a value because its return type is void.
The function is here:
'void' signals the absence of value. The returned value is likely to be ignored by the caller.
(lint/correctness/noVoidTypeReturn)
[error] 61-61: The function should not return a value because its return type is void.
The function is here:
'void' signals the absence of value. The returned value is likely to be ignored by the caller.
(lint/correctness/noVoidTypeReturn)
[error] 65-65: The function should not return a value because its return type is void.
The function is here:
'void' signals the absence of value. The returned value is likely to be ignored by the caller.
(lint/correctness/noVoidTypeReturn)
packages/core/src/v3/runMetadata/metadataStream.ts (1)
11-86: Consider implementing backpressure handling
The current implementation might benefit from proper backpressure handling to prevent memory issues when dealing with fast producers and slow consumers. This is especially important for real-time streaming scenarios.
Consider:
- Adding a configurable buffer size limit
- Implementing backpressure signals when the buffer is full
- Adding monitoring/metrics for queue sizes and processing rates
Would you like assistance in implementing these improvements?
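One minimal way to sketch the suggested backpressure is a bounded buffer whose `push()` suspends the producer until a consumer pops; the class and method names below are hypothetical, not part of the `MetadataStream` API discussed above:

```typescript
// Illustrative sketch of bounded buffering with backpressure.
class BoundedQueue<T> {
  private items: T[] = [];
  private waitingProducers: Array<() => void> = [];
  private waitingConsumers: Array<(value: T) => void> = [];

  constructor(private capacity: number) {}

  // Suspends the producer while the buffer is full (backpressure).
  async push(item: T): Promise<void> {
    const consumer = this.waitingConsumers.shift();
    if (consumer) {
      consumer(item); // hand off directly to a waiting consumer
      return;
    }
    if (this.items.length >= this.capacity) {
      await new Promise<void>((resolve) => this.waitingProducers.push(resolve));
    }
    this.items.push(item);
  }

  // Delivers buffered items FIFO; waits when the buffer is empty.
  async pop(): Promise<T> {
    if (this.items.length > 0) {
      const item = this.items.shift()!;
      this.waitingProducers.shift()?.(); // free one blocked producer
      return item;
    }
    return new Promise<T>((resolve) => this.waitingConsumers.push(resolve));
  }
}
```

Because the producer `await`s `push()`, a slow consumer automatically slows the producer down instead of letting the in-memory queue grow without bound.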
🧰 Tools
🪛 Biome
[error] 17-17: void is confusing inside a union type.
Unsafe fix: Use undefined instead.
(lint/suspicious/noConfusingVoidType)
[error] 45-45: void is confusing inside a union type.
Unsafe fix: Use undefined instead.
(lint/suspicious/noConfusingVoidType)
packages/react-hooks/src/hooks/useTaskTrigger.ts (3)
75-78: Document experimental throttle option
The `experimental_throttleInMs` option needs documentation about its purpose, usage, and experimental status. Consider adding JSDoc comments:

```diff
  export type UseRealtimeTaskTriggerOptions = UseTaskTriggerOptions & {
    enabled?: boolean;
+   /**
+    * @experimental This feature is experimental and may change or be removed in future versions
+    * @description Controls the throttling interval for real-time updates in milliseconds
+    */
    experimental_throttleInMs?: number;
  };
```
97-101: Improve enabled state handling
The `enabled` option from UseRealtimeTaskTriggerOptions should be derived from both the option and the handle's existence:

```diff
  const realtimeInstance = useRealtimeRunWithStreams<TTask, TStreams>(triggerInstance.handle?.id, {
    ...options,
    id: triggerInstance.handle?.id,
+   enabled: options?.enabled && !!triggerInstance.handle?.id,
    accessToken: triggerInstance.handle?.publicAccessToken ?? options?.accessToken,
  });
```
80-87: Improve type safety for stream records
The `TStreams` type parameter could be more strictly typed to ensure type safety of stream data:

```diff
- TStreams extends Record<string, any> = Record<string, any>,
+ TStreams extends Record<string, unknown> = Record<string, unknown>,
```

Consider adding a type helper to enforce stream data structure:

```ts
type StreamData<T> = {
  data: T;
  timestamp: number;
  metadata?: Record<string, unknown>;
};

type ValidatedStreams = Record<string, StreamData<unknown>>;
```

packages/trigger-sdk/src/v3/metadata.ts (1)
28-31: Update JSDoc to include new methods.
The metadata object's JSDoc comment should be updated to include documentation for the new methods: `stream`, `append`, `increment`, and `decrement`.

```diff
  /**
   * Provides access to run metadata operations.
   * @namespace
   * @property {Function} current - Get the current run's metadata.
   * @property {Function} get - Get a specific key from the current run's metadata.
   * @property {Function} set - Set a key in the current run's metadata.
   * @property {Function} del - Delete a key from the current run's metadata.
   * @property {Function} save - Update the entire metadata object for the current run.
+  * @property {Function} stream - Stream values to a metadata key in real-time.
+  * @property {Function} append - Append a value to a metadata key.
+  * @property {Function} increment - Increment a numeric metadata key.
+  * @property {Function} decrement - Decrement a numeric metadata key.
   */
```

packages/cli-v3/src/build/extensions.ts (1)
187-191: LGTM! Consider adding documentation.
The implementation correctly handles custom conditions, following the established patterns in the codebase. Consider adding a comment explaining the purpose and impact of these custom conditions on the build process:

```diff
+ // Add custom conditions to the manifest. These conditions can be used
+ // to control module resolution behavior during the build process.
  if (layer.conditions) {
    $manifest.customConditions ??= [];
    $manifest.customConditions = $manifest.customConditions.concat(layer.conditions);
    $manifest.customConditions = Array.from(new Set($manifest.customConditions));
  }
```

packages/core/src/v3/runMetadata/manager.ts (3)
13-14: Consider using a more specific type for the Map
The Map declaration could benefit from explicit type parameters for better type safety:

```diff
- private activeStreams = new Map<string, MetadataStream<any>>();
+ private activeStreams = new Map<string, MetadataStream<unknown>>();
```
80-151: Add input validation for numeric operations
The numeric operations in `incrementKey` could benefit from additional validation to ensure type safety and prevent NaN results:

```diff
  public incrementKey(key: string, increment: number = 1) {
    if (!this.runId) {
      return;
    }

+   if (!Number.isFinite(increment)) {
+     throw new Error("Increment value must be a finite number");
+   }

    let nextStore = this.store ? structuredClone(this.store) : {};
    let currentValue = key.startsWith("$.")
      ? new JSONHeroPath(key).first(nextStore)
      : nextStore[key];
```
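The guard can be exercised in isolation; the helper below is hypothetical (not the actual `incrementKey` signature) and assumes a non-numeric current value falls back to 0:

```typescript
// Hypothetical helper mirroring the suggested validation: reject
// non-finite increments so a stored counter can never become NaN/Infinity.
function applyIncrement(current: unknown, increment: number = 1): number {
  if (!Number.isFinite(increment)) {
    throw new Error("Increment value must be a finite number");
  }
  // Assumption for this sketch: non-numeric current values fall back to 0.
  const base = typeof current === "number" && Number.isFinite(current) ? current : 0;
  return base + increment;
}
```

Note that `Number.isFinite` rejects `NaN`, `Infinity`, and non-number values without coercion, unlike the global `isFinite`.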
Line range hint 226-245: Consider adding request cancellation handling
The flush method could benefit from proper request cancellation handling using the provided `requestOptions`:

```diff
  public async flush(requestOptions?: ApiRequestOptions): Promise<void> {
    if (!this.runId) {
      return;
    }

    if (!this.store) {
      return;
    }

    if (!this.hasChanges) {
      return;
    }

+   const controller = new AbortController();
+   const signal = requestOptions?.signal ?? controller.signal;
+
    try {
      this.hasChanges = false;
-     await this.apiClient.updateRunMetadata(this.runId, { metadata: this.store }, requestOptions);
+     await this.apiClient.updateRunMetadata(
+       this.runId,
+       { metadata: this.store },
+       { ...requestOptions, signal }
+     );
    } catch (error) {
+     if (error instanceof Error && error.name === 'AbortError') {
+       // Handle cancellation specifically
+       return;
+     }
      this.hasChanges = true;
      throw error;
    }
  }
```

references/nextjs-realtime/src/trigger/rsc.tsx (3)
3-3: Combine imports from "ai/rsc" into a single statement
To improve code organization and avoid redundant imports, you can combine the imports from `"ai/rsc"` into a single statement. Apply this diff to consolidate the imports:

```diff
- import { streamUI } from "ai/rsc";
- import { createStreamableUI } from "ai/rsc";
+ import { streamUI, createStreamableUI } from "ai/rsc";
```
94-97: Use async/await instead of setTimeout for better control
Using `setTimeout` within an async function can lead to less predictable code execution. Consider using a promise-based delay to enhance readability and error handling. Apply this change:

```diff
- setTimeout(() => {
-   weatherUI.done(<div>{message}</div>);
- }, 1000);
+ await new Promise((resolve) => setTimeout(resolve, 1000));
+ weatherUI.done(<div>{message}</div>);
```
57-63: Handle unexpected chunk types in the stream
Currently, only chunks of type `"text-delta"` are processed. If other chunk types may be received, consider handling them or logging a warning for unhandled types. Modify the loop as follows:

```diff
  for await (const chunk of stream) {
    logger.log("Received chunk", { chunk });

    if (chunk.type === "text-delta") {
      text += chunk.textDelta;
+   } else {
+     logger.warn("Unhandled chunk type", { type: chunk.type });
    }
  }
```

packages/rsc/src/build.ts (2)
21-103: Refactor duplicate `build.onResolve` handlers into a helper function
The `build.onResolve` handlers for module resolution have similar structures. Refactoring them into a helper function or iterating over an array can reduce code duplication and enhance maintainability. Apply this diff to refactor the code:

```diff
+ const modulesToResolve = [
+   { filter: /^react\/jsx-dev-runtime$/, conditions: ["react-server"] },
+   { filter: /^react\/jsx-runtime$/, conditions: ["react-server"] },
+   { filter: /^(react|react-dom)$/, conditions: ["react-server"] },
+   { filter: /^react-dom\/server$/, conditions: ["worker"] },
+ ];
+
+ modulesToResolve.forEach(({ filter, conditions }) => {
+   build.onResolve({ filter }, (args) => {
+     context.logger.debug(`Resolving ${args.path}`, { args });
+
+     try {
+       const resolvedPath = esmResolveSync(args.path, {
+         url: sourceDir,
+         conditions,
+       });
+
+       context.logger.debug(`Resolved ${args.path}`, { resolvedPath });
+
+       return {
+         path: resolvedPath,
+       };
+     } catch (error) {
+       context.logger.debug(`Failed to resolve ${args.path}`, { error });
+     }
+
+     return undefined;
+   });
+ });
+
+ // Remove the existing build.onResolve handlers below
- build.onResolve({ filter: /^react\/jsx-dev-runtime$/ }, (args) => { /* ... */ });
- build.onResolve({ filter: /^react\/jsx-runtime$/ }, (args) => { /* ... */ });
- build.onResolve({ filter: /^(react|react-dom)$/ }, (args) => { /* ... */ });
- build.onResolve({ filter: /^react-dom\/server$/ }, (args) => { /* ... */ });
```
36-36: Enhance error logging for better debugging
When logging errors in the catch blocks, consider including `error.message` or the full error stack to provide more detailed information. This can aid in diagnosing issues during module resolution. Apply this adjustment:

```diff
- context.logger.debug("Failed to resolve jsx-dev-runtime", { error });
+ context.logger.debug("Failed to resolve jsx-dev-runtime", { error: error.message });
```

Apply similar changes to the other error logs:

```diff
- context.logger.debug("Failed to resolve jsx-runtime", { error });
+ context.logger.debug("Failed to resolve jsx-runtime", { error: error.message });

- context.logger.debug("Failed to resolve react", { error });
+ context.logger.debug("Failed to resolve react", { error: error.message });

- context.logger.debug("Failed to resolve react-dom/server", { error });
+ context.logger.debug("Failed to resolve react-dom/server", { error: error.message });
```

Also applies to: 57-57, 78-78, 99-99
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
⛔ Files ignored due to path filters (1)
- pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (22)
- packages/cli-v3/src/build/extensions.ts (1 hunks)
- packages/core/package.json (3 hunks)
- packages/core/src/v3/build/extensions.ts (1 hunks)
- packages/core/src/v3/runMetadata/index.ts (1 hunks)
- packages/core/src/v3/runMetadata/manager.ts (6 hunks)
- packages/core/src/v3/runMetadata/metadataStream.ts (1 hunks)
- packages/core/src/v3/runMetadata/types.ts (1 hunks)
- packages/react-hooks/src/hooks/useTaskTrigger.ts (1 hunks)
- packages/rsc/CHANGELOG.md (1 hunks)
- packages/rsc/README.md (1 hunks)
- packages/rsc/package.json (1 hunks)
- packages/rsc/src/build.ts (1 hunks)
- packages/rsc/src/index.ts (1 hunks)
- packages/rsc/src/sourceDir-cjs.cts (1 hunks)
- packages/rsc/src/sourceDir.ts (1 hunks)
- packages/rsc/tsconfig.json (1 hunks)
- packages/trigger-sdk/src/v3/auth.ts (2 hunks)
- packages/trigger-sdk/src/v3/metadata.ts (3 hunks)
- references/nextjs-realtime/package.json (2 hunks)
- references/nextjs-realtime/src/app/page.tsx (2 hunks)
- references/nextjs-realtime/src/trigger/rsc.tsx (1 hunks)
- references/nextjs-realtime/trigger.config.ts (1 hunks)
✅ Files skipped from review due to trivial changes (5)
- packages/rsc/CHANGELOG.md
- packages/rsc/README.md
- packages/rsc/src/index.ts
- packages/rsc/src/sourceDir.ts
- packages/rsc/tsconfig.json
🚧 Files skipped from review as they are similar to previous changes (5)
- packages/core/package.json
- packages/core/src/v3/runMetadata/types.ts
- packages/trigger-sdk/src/v3/auth.ts
- references/nextjs-realtime/package.json
- references/nextjs-realtime/src/app/page.tsx
🧰 Additional context used
🪛 Biome
packages/core/src/v3/runMetadata/index.ts
[error] 53-53: The function should not return a value because its return type is void.
The function is here:
'void' signals the absence of value. The returned value is likely to be ignored by the caller.
(lint/correctness/noVoidTypeReturn)
[error] 57-57: The function should not return a value because its return type is void.
The function is here:
'void' signals the absence of value. The returned value is likely to be ignored by the caller.
(lint/correctness/noVoidTypeReturn)
[error] 61-61: The function should not return a value because its return type is void.
The function is here:
'void' signals the absence of value. The returned value is likely to be ignored by the caller.
(lint/correctness/noVoidTypeReturn)
[error] 65-65: The function should not return a value because its return type is void.
The function is here:
'void' signals the absence of value. The returned value is likely to be ignored by the caller.
(lint/correctness/noVoidTypeReturn)
packages/core/src/v3/runMetadata/metadataStream.ts
[error] 17-17: void is confusing inside a union type.
Unsafe fix: Use undefined instead.
(lint/suspicious/noConfusingVoidType)
[error] 45-45: void is confusing inside a union type.
Unsafe fix: Use undefined instead.
(lint/suspicious/noConfusingVoidType)
🔇 Additional comments (17)
packages/rsc/src/sourceDir-cjs.cts (2)
1-1: Add documentation for the sourceDir export
The `sourceDir` export appears to be used for path resolution in React Server Components. Consider adding JSDoc comments to document its purpose and usage.
Here's a suggested improvement:
```diff
  import { pathToFileURL } from "node:url";

  //@ts-ignore - Have to ignore because TSC thinks this is ESM
+ /**
+  * The absolute URL of the source directory for React Server Components.
+  * Used for resolving paths during server-side rendering.
+  * @example 'file:///path/to/packages/rsc/src'
+  */
  export const sourceDir = pathToFileURL(__dirname).toString();
```
Let's check how sourceDir is used:
Also applies to: 3-3
✅ Verification successful
Documentation suggestion for sourceDir is appropriate and verified
The `sourceDir` export is indeed used across multiple packages for path resolution:
- In `packages/rsc/src/build.ts` for React Server Components URL resolution
- In `packages/cli-v3/src/build/packageModules.ts` for resolving various entry points and shims

The suggested JSDoc documentation accurately reflects its purpose and usage. The example path format is also correct since the value is converted to a file URL using `pathToFileURL`.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for sourceDir usage in the codebase
rg -l "sourceDir" --type ts --type tsx
Length of output: 66
Script:
#!/bin/bash
# Search for sourceDir usage with ripgrep, without file type filtering
rg "sourceDir" -A 3
# Also check the specific file content
cat "packages/rsc/src/sourceDir-cjs.cts"
Length of output: 3321
2-2: 🛠️ Refactor suggestion
Consider a more robust alternative to @ts-ignore
The TypeScript ignore directive is masking a module system detection issue. Consider using a more explicit solution:
- Add "type": "commonjs" to package.json, or
- Use the `.cts` extension with proper TypeScript configuration
Let's verify the module configuration:
✅ Verification successful
The @ts-ignore directive is actually necessary here
The TypeScript ignore directive is justified in this case because:
- The package is properly configured as a dual module (both ESM and CommonJS) in package.json
- The .cts file is specifically for CommonJS usage, as evidenced by the same pattern in multiple packages
- The directive is needed because TypeScript's module detection can get confused with dual-module setups, especially when the package.json has "type": "module"
The current solution is the standard approach for handling this edge case in dual-module packages.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Check if package.json exists and contains module type
if [ -f package.json ]; then
echo "Package.json module type configuration:"
jq '.type' package.json
fi
# Check TypeScript configuration
if [ -f tsconfig.json ]; then
echo "TypeScript module configuration:"
jq '.compilerOptions.module' tsconfig.json
fi
Length of output: 170
Script:
#!/bin/bash
# Let's check the package.json in the specific package directory
if [ -f packages/rsc/package.json ]; then
echo "RSC Package.json configuration:"
jq '{type, module, main, exports}' packages/rsc/package.json
fi
# Check if there's a local tsconfig
if [ -f packages/rsc/tsconfig.json ]; then
echo "RSC TypeScript configuration:"
jq '.compilerOptions.module' packages/rsc/tsconfig.json
fi
# Look for other .cts files to understand the pattern
echo "Other .cts files in the codebase:"
fd -e cts
# Check imports/exports in the file
echo "File content:"
cat packages/rsc/src/sourceDir-cjs.cts
Length of output: 1063
Script:
#!/bin/bash
# Let's check the package.json in the specific package directory with correct jq syntax
echo "RSC Package.json configuration:"
cat packages/rsc/package.json | jq '{type: .type, module: .module, main: .main, exports: .exports}'
# Let's also check one of the other .cts files to see the pattern
echo "\nExample of another .cts file:"
cat packages/cli-v3/src/sourceDir-cjs.cts
# Check if there's a root tsconfig.json that might affect all packages
echo "\nRoot tsconfig.json module configuration:"
if [ -f tsconfig.json ]; then
cat tsconfig.json | jq '.compilerOptions.module'
fi
Length of output: 1032
references/nextjs-realtime/trigger.config.ts (1)
2-2: LGTM: Clean import statement
The import of `rscExtension` from `@trigger.dev/rsc` is well-structured and follows standard practices.
packages/rsc/package.json (3)
36-36: Verify version update script implementation
The `update-version` script uses a shared script from the root directory. Let's verify its implementation to ensure consistent version management across the monorepo.
#!/bin/bash
# Description: Check the version update script implementation
# Check the update version script
cat scripts/updateVersion.ts || echo "Script not found"
# Look for similar version update configurations in other packages
fd -t f "package.json" -x grep -l "update-version"
56-58: Verify Node.js version requirement
The package requires Node.js >= 18.20.0. Let's verify this requirement against other packages in the monorepo to ensure consistency.
#!/bin/bash
# Description: Check Node.js version requirements across the monorepo
# Find all package.json files and extract their Node.js version requirements
fd -t f "package.json" -x grep -A 1 '"engines":'
27-29: Verify the source dialect configuration
The `sourceDialects` array contains `@triggerdotdev/source`. Let's verify if this is a valid source dialect and its purpose.
✅ Verification successful
`@triggerdotdev/source` is a valid source dialect used across the project
The source dialect configuration is valid and consistently used across multiple packages in the project. It serves as a custom condition for TypeScript module resolution and is used in:
- Package exports mapping source files in multiple packages (core, rsc, react-hooks, build, trigger-sdk)
- TypeScript configuration files (tsconfig.json) for module resolution
- Consistent pattern of mapping source files to their distribution counterparts
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check if @triggerdotdev/source is a valid package and its usage in the codebase
# Check if the package exists in the repository
fd -t f "package.json" -x grep -l "@triggerdotdev/source"
# Look for any documentation or usage of this dialect
rg "@triggerdotdev/source" -A 5
Length of output: 1051
Script:
#!/bin/bash
# Let's try a more focused search for the source dialect
# Search for any references to sourceDialects in package.json files
fd -t f "package.json" -x grep -l "sourceDialects"
# Search for any imports or references to @triggerdotdev/source
rg -g "!{node_modules,dist,build}/*" "@triggerdotdev/source"
# Check dependencies in package.json files for @triggerdotdev
fd -t f "package.json" -x grep -A 5 "@triggerdotdev"
# Look for any configuration files that might define source dialects
fd -e ".config.js" -e ".config.ts" -x grep -l "sourceDialects"
Length of output: 17401
packages/core/src/v3/runMetadata/index.ts (1)
52-58: Previous review comment is still applicable
The issues with increment/decrement methods were already identified in a previous review comment, including void return types, access modifiers, and input validation concerns.
🧰 Tools
🪛 Biome
[error] 53-53: The function should not return a value because its return type is void.
The function is here:
'void' signals the absence of value. The returned value is likely to be ignored by the caller.
(lint/correctness/noVoidTypeReturn)
[error] 57-57: The function should not return a value because its return type is void.
The function is here:
'void' signals the absence of value. The returned value is likely to be ignored by the caller.
(lint/correctness/noVoidTypeReturn)
packages/core/src/v3/runMetadata/metadataStream.ts (2)
79-86: LGTM! Clean implementation of public methods
The `wait()` method and async iterator implementation are clean and follow best practices.
72-74: 🛠️ Refactor suggestion
Investigate and document the ts-expect-error usage
The `@ts-expect-error` comment suggests a type definition issue with the `duplex` option. This should be properly typed or documented.
Let's check if there's a type definition available for this:
Consider:
- Adding a comment explaining why this type assertion is necessary
- Creating a proper type definition if one doesn't exist
- Using a more specific type assertion if possible
✅ Verification successful
Let me search for more context about the `duplex` option usage.
Let me try one more search to check for any documentation or comments about this specific usage.
The `@ts-expect-error` for duplex option is valid and necessary
The `duplex: "half"` option is a required configuration for fetch requests with a body stream, as evidenced by the changelog entry. This is a known TypeScript limitation where the type definitions haven't caught up with the Fetch API specification. The error suppression is appropriate here.
- The usage is documented in the changelog: "Set duplex 'half' when creating fetch based Request objects when they have a body"
- This is a common pattern when working with streaming requests in modern Fetch API
- The type error suppression is specific and minimal
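A minimal illustration of the requirement is sketched below; the URL and payload are placeholders, and a cast stands in for `@ts-expect-error` since `duplex` is absent from many `RequestInit` typings:

```typescript
// A fetch Request with a ReadableStream body must declare half-duplex
// mode; spec-compliant runtimes (e.g. Node 18+ / undici) reject the
// constructor if `duplex` is omitted for a stream body.
const body = new ReadableStream<Uint8Array>({
  start(controller) {
    controller.enqueue(new TextEncoder().encode("chunk-1"));
    controller.close();
  },
});

const init = {
  method: "POST",
  body,
  duplex: "half",
} as RequestInit; // cast: `duplex` is missing from lib.dom's RequestInit type

const request = new Request("https://example.com/streams/demo", init);
```

"Half" duplex means the full request body is sent before the response is read, which is the only mode currently defined by the Fetch spec.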
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for fetch options type definitions that might include duplex
rg -t ts "interface.*RequestInit" -A 10
Length of output: 738
Script:
#!/bin/bash
# Search for duplex usage in TypeScript files
rg -t ts "duplex.*:" -B 2 -A 2
# Search for fetch related type definitions in node_modules
fd -e d.ts -x cat {} \; | rg -A 5 "interface.*RequestInit"
Length of output: 479
Script:
#!/bin/bash
# Search for the file content to understand the context better
cat packages/core/src/v3/runMetadata/metadataStream.ts
# Also search for any related documentation
rg -g "*.md" -i "duplex.*fetch"
Length of output: 2763
packages/react-hooks/src/hooks/useTaskTrigger.ts (2)
36-61: Improve error handling in triggerTask function
The function should handle potential errors from both `stringifyIO` and `apiClient.triggerTask`. Consider adding proper error boundaries and logging.
```diff
  async function triggerTask(
    id: string,
    {
      arg: { payload, options },
    }: { arg: { payload: TaskPayload<TTask>; options?: TaskRunOptions } }
  ) {
+   try {
      const payloadPacket = await stringifyIO(payload);
      const handle = await apiClient.triggerTask(id, {
        payload: payloadPacket.data,
        options: {
          queue: options?.queue,
          concurrencyKey: options?.concurrencyKey,
          payloadType: payloadPacket.dataType,
          idempotencyKey: await makeIdempotencyKey(options?.idempotencyKey),
          delay: options?.delay,
          ttl: options?.ttl,
          tags: options?.tags,
          maxAttempts: options?.maxAttempts,
          metadata: options?.metadata,
          maxDuration: options?.maxDuration,
        },
      });
      return { ...handle, taskIdentifier: id };
+   } catch (error) {
+     console.error(`Failed to trigger task ${id}:`, error);
+     throw new Error(`Failed to trigger task: ${error instanceof Error ? error.message : 'Unknown error'}`);
+   }
  }
```
Likely invalid or redundant comment.
3-20: Verify version compatibility with @trigger.dev/core/v3
The imports are using the v3 path. Ensure this is intentional and that there's proper documentation about the v3 API compatibility.
✅ Verification successful
Based on the search results, I can see extensive and consistent usage of `@trigger.dev/core/v3` across the codebase, including in the React hooks package. The import path is part of the official v3 API and is being used correctly throughout the project. Let me verify the documentation to ensure this is properly documented.
Based on the comprehensive search results, I can now provide a final response:
The v3 import path is correct and well-documented
The use of @trigger.dev/core/v3
is intentional and properly documented. This is part of version 3 of the Trigger.dev platform, which is the current major version. The import path is used consistently across the codebase and is referenced in multiple README files, including the main documentation that points to v3-specific guides and self-hosting instructions.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Check for v3 usage consistency and documentation
rg -g '!*.{log,lock}' '@trigger.dev/core/v3'
fd -g 'README*' -X grep -l 'v3'
Length of output: 30635
Script:
#!/bin/bash
# Check v3 documentation in README files
rg -g 'README*' -A 5 -B 5 'v3|version 3'
Length of output: 17838
packages/trigger-sdk/src/v3/metadata.ts (1)
143-149
: 🛠️ Refactor suggestion
Previous review comments still apply: Add JSDoc and error handling.
The stream function still needs documentation and error handling as suggested in the previous review. Additionally, since the function now accepts ReadableStream, the documentation should be updated accordingly.
+/**
+ * Streams values to a metadata key in real-time.
+ * @param {string} key - The key to stream values to
+ * @param {AsyncIterable<T> | ReadableStream<T>} value - The source of streamed values
+ * @param {AbortSignal} [signal] - Optional signal for cancellation
+ * @returns {Promise<AsyncIterable<T>>} A promise that resolves to an AsyncIterable of the streamed values
+ * @throws {Error} If streaming is not supported or fails
+ */
async function stream<T>(
key: string,
value: AsyncIterable<T> | ReadableStream<T>,
signal?: AbortSignal
): Promise<AsyncIterable<T>> {
+ if (!runMetadata.isStreamingSupported?.()) {
+ throw new Error("Streaming is not supported in the current context");
+ }
+ if (!key || typeof key !== "string") {
+ throw new Error("A valid string key is required for streaming");
+ }
return runMetadata.stream(key, value, signal);
}
packages/cli-v3/src/build/extensions.ts (1)
187-191
: Verify the usage of customConditions in the build process.
Let's verify how these custom conditions are being used in the build process and ensure they're properly handled during module resolution.
✅ Verification successful
Let me gather more specific information about how these custom conditions are being used in the build process.
Based on the gathered information, I can now provide a final response as the usage of customConditions is well-documented in the codebase.
Custom conditions are properly integrated in the build process
The custom conditions are correctly handled throughout the build pipeline:
- They're properly passed to the Node.js module resolution system via the --conditions flag
- They're merged with default conditions ("trigger.dev", "module", "node") during bundling
- They're properly typed and validated in the build schema
- They're consistently propagated from config to runtime
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Search for usage of customConditions in the codebase to understand its impact
# on module resolution and build process.
echo "Searching for customConditions usage in build/bundling code..."
rg -l "customConditions" --type ts
echo "\nSearching for condition-related module resolution code..."
ast-grep --pattern 'resolve($$$, { conditions: $_})'
echo "\nChecking for any documentation about custom conditions..."
rg -i "custom.*conditions?" -g "*.md"
Length of output: 1002
Script:
#!/bin/bash
# Let's examine the actual usage of customConditions in the build process
echo "Checking customConditions usage in build/bundle.ts..."
rg "customConditions" packages/cli-v3/src/build/bundle.ts -B 2 -A 2
echo "\nChecking customConditions usage in buildWorker.ts..."
rg "customConditions" packages/cli-v3/src/build/buildWorker.ts -B 2 -A 2
echo "\nChecking runtime.ts implementation..."
rg "customConditions" packages/core/src/v3/build/runtime.ts -B 2 -A 2
echo "\nChecking build schema definition..."
rg "customConditions" packages/core/src/v3/schemas/build.ts -B 2 -A 2
Length of output: 1750
packages/core/src/v3/runMetadata/manager.ts (1)
202-224
: Fix timeout handling in waitForAllStreams method
The existing review comment about timeout handling is still valid. The current implementation using Promise.allSettled with Promise.race won't properly handle timeouts.
references/nextjs-realtime/src/trigger/rsc.tsx (1)
70-81
: Verify the usage of the 'App' component
The App component is defined and used in rendering the readable stream. Ensure that it includes all necessary content and correctly represents the application's HTML structure. To confirm, you can review the App component and its usage in renderToReadableStream.
packages/rsc/src/build.ts (2)
13-13
: Ensure compatibility of the nullish coalescing assignment operator ??=
The ??= operator may not be supported in all target environments or might require additional transpilation. Consider using a more widely supported syntax to ensure compatibility.
Apply this alternative:
-context.config.build.conditions ??= [];
+if (context.config.build.conditions == null) {
+ context.config.build.conditions = [];
+}
88-91
: Verify the use of the 'worker' condition for react-dom/server resolution
In the build.onResolve handler for react-dom/server, the conditions are set to ["worker"], while other handlers use ["react-server"]. Please confirm that using ["worker"] is intentional and aligns with the desired build behavior.
68937e7 to 8e97108 Compare
Actionable comments posted: 20
🧹 Outside diff range and nitpick comments (41)
packages/react-hooks/src/hooks/useApiClient.ts (1)
18-20
: Consider handling missing accessToken more gracefully. Throwing errors synchronously in React hooks can lead to poor user experience and potential issues with error boundaries. Consider returning an error state instead.
- if (!accessToken) {
-   throw new Error("Missing accessToken in TriggerAuthContext or useApiClient options");
- }
+ if (!accessToken) {
+   return {
+     error: new Error("Missing accessToken in TriggerAuthContext or useApiClient options"),
+     client: null,
+   };
+ }
+
+ return {
+   error: null,
+   client: new ApiClient(baseUrl, accessToken, requestOptions),
+ };
packages/core/src/v3/waitUntil/index.ts (2)
6-20
: Add documentation to explain the NoopManager's purpose. Consider adding JSDoc comments to explain that this is a Null Object implementation used as a fallback when no manager is set. Also, consider marking the class as /** @internal */ since it's not exported.
+/** @internal
+ * A no-op implementation of WaitUntilManager used as a fallback
+ * when no global manager is set.
+ */
class NoopManager implements WaitUntilManager {
22-34
: Document thread-safety considerations for the singleton implementation. Since this is a singleton that could be accessed from multiple parts of the application, consider adding documentation about thread-safety guarantees, especially regarding the lazy initialization in getInstance().
+/**
+ * Singleton class implementing WaitUntilManager interface.
+ * Thread-safety note: The lazy initialization in getInstance()
+ * is not guaranteed to be thread-safe in concurrent environments.
+ */
export class WaitUntilAPI implements WaitUntilManager {
🧰 Tools
🪛 Biome
[error] 28-28: Using this in a static context can be confusing.
this refers to the class.
Unsafe fix: Use the class name instead.(lint/complexity/noThisInStatic)
[error] 29-29: Using this in a static context can be confusing.
this refers to the class.
Unsafe fix: Use the class name instead.(lint/complexity/noThisInStatic)
[error] 32-32: Using this in a static context can be confusing.
this refers to the class.
Unsafe fix: Use the class name instead.(lint/complexity/noThisInStatic)
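A minimal sketch of the lint fix flagged above (referencing static members via the class name rather than `this`; the class body here is a stand-in, not the package's actual implementation):

```typescript
// Stand-in singleton showing the noThisInStatic fix: static members are
// referenced via the class name instead of `this` inside static methods.
class WaitUntilAPI {
  private static _instance?: WaitUntilAPI;

  static getInstance(): WaitUntilAPI {
    if (!WaitUntilAPI._instance) {
      WaitUntilAPI._instance = new WaitUntilAPI();
    }
    return WaitUntilAPI._instance;
  }
}
```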
apps/webapp/app/routes/api.v1.packets.$.ts (1)
Line range hint
12-41
: Consider refactoring action function to use createActionApiRoute. The action function uses a different authentication pattern compared to the new loader implementation. For consistency and maintainability, consider refactoring it to use a similar pattern with createActionApiRoute.
Example refactor:
-export async function action({ request, params }: ActionFunctionArgs) {
-  // Ensure this is a POST request
-  if (request.method.toUpperCase() !== "PUT") {
-    return { status: 405, body: "Method Not Allowed" };
-  }
-
-  // Next authenticate the request
-  const authenticationResult = await authenticateApiRequest(request);
-
-  if (!authenticationResult) {
-    return json({ error: "Invalid or Missing API key" }, { status: 401 });
-  }
-
-  const parsedParams = ParamsSchema.parse(params);
-  const filename = parsedParams["*"];
-
-  const presignedUrl = await generatePresignedUrl(
-    authenticationResult.environment.project.externalRef,
-    authenticationResult.environment.slug,
-    filename,
-    "PUT"
-  );
-
-  if (!presignedUrl) {
-    return json({ error: "Failed to generate presigned URL" }, { status: 500 });
-  }
-
-  // Caller can now use this URL to upload to that object.
-  return json({ presignedUrl });
-}
+export const action = createActionApiRoute(
+  {
+    params: ParamsSchema,
+    allowJWT: true,
+    corsStrategy: "all",
+    allowedMethods: ["PUT"],
+  },
+  async ({ params, authentication }) => {
+    const filename = params["*"];
+
+    const presignedUrl = await generatePresignedUrl(
+      authentication.environment.project.externalRef,
+      authentication.environment.slug,
+      filename,
+      "PUT"
+    );
+
+    if (!presignedUrl) {
+      return json({ error: "Failed to generate presigned URL" }, { status: 500 });
+    }
+
+    return json({ presignedUrl });
+  }
+);
packages/core/src/v3/runMetadata/metadataStream.ts (4)
1-7
: Consider adding runtime validation for MetadataOptions. While the type definition is well-structured, consider adding runtime validation to ensure the required fields are non-empty strings and valid URLs.
function validateMetadataOptions<T>(options: MetadataOptions<T>): void {
  if (!options.baseUrl || !options.baseUrl.startsWith('http')) {
    throw new Error('Invalid baseUrl');
  }
  if (!options.runId || !options.key) {
    throw new Error('runId and key must be non-empty strings');
  }
  if (!options.iterator) {
    throw new Error('iterator is required');
  }
}
17-23
: Consider separating stream initialization from constructor. Moving the stream initialization to a separate method would improve separation of concerns and make the class more testable.
  constructor(private options: MetadataOptions<T>) {
    const { serverIterator, consumerIterator } = this.createTeeIterators();
    this.serverIterator = serverIterator;
    this.consumerIterator = consumerIterator;
-   this.streamPromise = this.initializeServerStream();
+   this.initializeStream();
  }
+
+ private initializeStream(): void {
+   this.streamPromise = this.initializeServerStream();
+ }
77-79
: Simplify wait() method implementation. The current implementation using then(() => void 0) can be simplified for better readability.
- public async wait(): Promise<void> {
-   return this.streamPromise.then(() => void 0);
- }
+ public async wait(): Promise<void> {
+   await this.streamPromise;
+ }
9-84
: Consider implementing backpressure handling. The current implementation might face memory issues with large streams as it doesn't implement proper backpressure handling. Consider:
- Adding a maximum queue size
- Implementing pause/resume mechanisms
- Adding memory usage monitoring
This would help prevent potential out-of-memory issues in production.
Would you like me to provide an example implementation of these improvements?
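One possible shape for the first two bullets — a bounded queue that drops the oldest chunk on overflow and lets a consumer await the next item. Names here are illustrative, not part of MetadataStream:

```typescript
// Illustrative bounded queue: push() never grows past maxSize (oldest items
// are dropped), and pull() resolves immediately or waits for the next push.
class BoundedQueue<T> {
  private items: T[] = [];
  private waiters: Array<(value: T) => void> = [];

  constructor(private maxSize: number) {}

  push(item: T): void {
    const waiter = this.waiters.shift();
    if (waiter) {
      waiter(item); // hand off directly to a waiting consumer
      return;
    }
    if (this.items.length >= this.maxSize) {
      this.items.shift(); // drop the oldest item to bound memory
    }
    this.items.push(item);
  }

  pull(): Promise<T> {
    const item = this.items.shift();
    if (item !== undefined) return Promise.resolve(item);
    return new Promise((resolve) => this.waiters.push(resolve));
  }

  get size(): number {
    return this.items.length;
  }
}
```

A drop-oldest policy is one option; alternatively a full queue could make push() await, which propagates backpressure to the producer instead of losing chunks.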
🧰 Tools
🪛 Biome
[error] 15-15: void is confusing inside a union type.
Unsafe fix: Use undefined instead.
(lint/suspicious/noConfusingVoidType)
[error] 43-43: void is confusing inside a union type.
Unsafe fix: Use undefined instead.
(lint/suspicious/noConfusingVoidType)
packages/core/src/v3/idempotencyKeys.ts (4)
11-23
: Consider adding input validation and error handling. While the function handles undefined and valid keys well, it might benefit from explicit error handling for invalid input types before passing them to createIdempotencyKey.
export async function makeIdempotencyKey(
  idempotencyKey?: IdempotencyKey | string | string[]
): Promise<IdempotencyKey | undefined> {
  if (!idempotencyKey) {
    return;
  }
  if (isIdempotencyKey(idempotencyKey)) {
    return idempotencyKey;
  }
+ if (typeof idempotencyKey !== 'string' && !Array.isArray(idempotencyKey)) {
+   throw new Error('Idempotency key must be a string or array of strings');
+ }
+
  return await createIdempotencyKey(idempotencyKey, { scope: "global" });
}
60-69
: Consider removing unnecessary type assertion. The type assertion to IdempotencyKey might be unnecessary since the generateIdempotencyKey function already returns a string matching the IdempotencyKey requirements.
export async function createIdempotencyKey(
  key: string | string[],
  options?: { scope?: "run" | "attempt" | "global" }
): Promise<IdempotencyKey> {
  const idempotencyKey = await generateIdempotencyKey(
    [...(Array.isArray(key) ? key : [key])].concat(injectScope(options?.scope ?? "run"))
  );
- return idempotencyKey as IdempotencyKey;
+ return idempotencyKey;
}
: Remove unnecessary break statements.The break statements after the return statements are unreachable and can be removed to improve code clarity.
function injectScope(scope: "run" | "attempt" | "global"): string[] { switch (scope) { case "run": { if (taskContext?.ctx) { return [taskContext.ctx.run.id]; } - break; } case "attempt": { if (taskContext?.ctx) { return [taskContext.ctx.attempt.id]; } - break; } } return []; }
90-100
: Add error handling for crypto operations. The crypto operations could fail in certain environments or with invalid inputs. Consider adding a try-catch block to provide meaningful error messages.
async function generateIdempotencyKey(keyMaterial: string[]) {
+ try {
  const hash = await crypto.subtle.digest(
    "SHA-256",
    new TextEncoder().encode(keyMaterial.join("-"))
  );

  // Return a hex string, using cross-runtime compatible methods
  return Array.from(new Uint8Array(hash))
    .map((byte) => byte.toString(16).padStart(2, "0"))
    .join("");
+ } catch (error) {
+   throw new Error(`Failed to generate idempotency key: ${error.message}`);
+ }
}
packages/rsc/src/build.ts (1)
13-14
: Consider adding duplicate condition check. The code pushes the "react-server" condition without checking if it already exists. This could lead to duplicate conditions.
Consider updating the code to prevent duplicates:
- context.config.build.conditions ??= []; - context.config.build.conditions.push("react-server"); + context.config.build.conditions ??= []; + if (!context.config.build.conditions.includes("react-server")) { + context.config.build.conditions.push("react-server"); + }apps/webapp/app/routes/api.v1.tasks.$taskId.trigger.ts (3)
59-67
: Consider sanitizing sensitive data in logs. The debug log includes the entire headers object which might contain sensitive information. Consider filtering out sensitive headers before logging.
- headers,
+ headers: {
+   "idempotency-key": idempotencyKey,
+   "trigger-version": triggerVersion,
+   "x-trigger-worker": isFromWorker,
+   "x-trigger-client": triggerClient,
+ },
113-115
: Reduce claims duplication. The claims object is defined twice with similar structure. Consider extracting the common claims to reduce duplication.
+ const baseClaims = {
+   sub: environment.id,
+   pub: true,
+ };

- const claimsHeader = JSON.stringify({
-   sub: environment.id,
-   pub: true,
- });
+ const claimsHeader = JSON.stringify(baseClaims);

  if (triggerClient === "browser") {
    const claims = {
-     sub: environment.id,
-     pub: true,
+     ...baseClaims,
      scopes: [`read:runs:${run.friendlyId}`],
    };
Also applies to: 119-123
125-128
: Consider making JWT expiration configurable. The JWT expiration time is hardcoded to "1h". Consider making this configurable through environment variables for different deployment scenarios.
packages/react-hooks/src/hooks/useTaskTrigger.ts (2)
64-74
: Consider adding retry and error handling configuration to SWR mutation. The SWR mutation setup could benefit from additional configuration for better error handling and retry logic, especially important for network requests.
Consider adding these configurations:
-const mutation = useSWRMutation(id as string, triggerTask);
+const mutation = useSWRMutation(id as string, triggerTask, {
+  // Retry failed requests 3 times with exponential backoff
+  errorRetryCount: 3,
+  // Handle errors gracefully
+  onError: (error) => {
+    console.error(`Failed to trigger task ${id}:`, error);
+  }
+});
77-80
: Document experimental feature and add deprecation warning. The experimental_throttleInMs option needs documentation explaining its purpose and potential impact. Consider adding a warning if this feature is planned for deprecation or is unstable.
Add JSDoc comments to explain the experimental feature:
export type UseRealtimeTaskTriggerOptions = UseTaskTriggerOptions & { enabled?: boolean; + /** + * @experimental This feature is experimental and may change or be removed in future versions. + * Controls the throttling interval in milliseconds for real-time updates. + */ experimental_throttleInMs?: number; };apps/webapp/server.ts (1)
76-121
: Consider architectural improvements for the streaming endpoint. While the previous review covered implementation details, there are architectural considerations to address:
- The endpoint appears to be tightly coupled with the Express framework. Consider abstracting the streaming logic into a service layer for better maintainability and testing.
- The endpoint lacks authentication/authorization checks which are crucial for production use.
- The streaming logic could benefit from using RxJS or similar libraries for better stream management.
Consider refactoring to a service-based architecture:
// services/streamingService.ts export class StreamingService { async processStream(stream: NodeJS.ReadableStream): Promise<void> { // Streaming logic here } } // routes/streamingRoutes.ts export function setupStreamingRoutes(app: Express, streamingService: StreamingService) { app.post("/realtime/v1/streams/express/test", authMiddleware, validateContentType, async (req, res) => { await streamingService.processStream(req); } ); }packages/core/src/v3/runMetadata/manager.ts (3)
13-14
: Consider using a more specific type for activeStreams Map. The Map is currently typed with any which could lead to type safety issues. Consider using a more specific type based on the expected stream value types.
- private activeStreams = new Map<string, MetadataStream<any>>();
+ private activeStreams = new Map<string, MetadataStream<unknown>>();
18-21
: Add JSDoc comments for constructor parameters. The purpose of the streamsBaseUrl parameter isn't immediately clear. Consider adding documentation to explain its usage.
+ /**
+  * @param apiClient - Client for making API requests
+  * @param streamsBaseUrl - Base URL for stream endpoints
+  */
  constructor(
    private apiClient: ApiClient,
    private streamsBaseUrl: string
  ) {}
80-123
: Consider refactoring appendKey to reduce code duplication. The method has similar logic repeated for path-based and direct key access. Consider extracting the common array manipulation logic into a helper function.
+ private appendValueToExisting(current: unknown, value: DeserializedJson): DeserializedJson { + if (current === undefined) { + return [value]; + } + if (Array.isArray(current)) { + return [...current, value]; + } + return [current, value]; + } public appendKey(key: string, value: DeserializedJson) { if (!this.runId) { return; } let nextStore = this.store ? structuredClone(this.store) : {}; if (key.startsWith("$.")) { const path = new JSONHeroPath(key); - const currentValue = path.first(nextStore); - if (currentValue === undefined) { - path.set(nextStore, [value]); - } else if (Array.isArray(currentValue)) { - path.set(nextStore, [...currentValue, value]); - } else { - path.set(nextStore, [currentValue, value]); - } + const current = path.first(nextStore); + path.set(nextStore, this.appendValueToExisting(current, value)); } else { - const currentValue = nextStore[key]; - if (currentValue === undefined) { - nextStore[key] = [value]; - } else if (Array.isArray(currentValue)) { - nextStore[key] = [...currentValue, value]; - } else { - nextStore[key] = [currentValue, value]; - } + nextStore[key] = this.appendValueToExisting(nextStore[key], value); } if (!dequal(this.store, nextStore)) { this.hasChanges = true; } this.store = nextStore; }apps/webapp/app/services/realtimeClient.server.ts (2)
89-89
: Consider making the table name configurable. The table name is currently hardcoded. Consider making it configurable through the RealtimeClientOptions interface to improve flexibility and reusability.
export type RealtimeClientOptions = {
  electricOrigin: string;
  redis: RedisOptions;
  cachedLimitProvider: CachedLimitProvider;
  keyPrefix: string;
  expiryTimeInSeconds?: number;
+ tableName?: string;
};
Also applies to: 97-97
99-103
: Enhance parameter handling robustness. The shape_id to handle conversion could be more robust by:
- Validating the shape_id value before setting it as handle
- Logging a warning when falling back to legacy behavior
if (!clientVersion) { + const shapeId = electricUrl.searchParams.get("shape_id"); + if (!shapeId) { + logger.warn("[realtimeClient] Missing shape_id parameter in legacy client request"); + } - electricUrl.searchParams.set("handle", electricUrl.searchParams.get("shape_id") ?? ""); + electricUrl.searchParams.set("handle", shapeId ?? ""); }apps/webapp/test/realtimeClient.test.ts (2)
67-68
: LGTM! Consider extracting the version constant. The updates to accommodate the new API version and header names look good. However, the hardcoded version "0.8.1" appears multiple times.
Consider extracting it as a constant at the top of the file:
  import { containerWithElectricTest } from "@internal/testcontainers";
  import { expect, describe } from "vitest";
  import { RealtimeClient } from "../app/services/realtimeClient.server.js";
+
+const CURRENT_VERSION = "0.8.1";
Also applies to: 76-77, 91-92, 101-104, 110-113
201-219
: Consider enhancing test coverage for run tag filtering. While the test validates the response status and headers, it would be valuable to also verify:
- The actual run data received matches the expected filtered results
- Runs with non-matching tags are excluded
Consider adding these assertions:
expect(response.status).toBe(200); expect(shapeId).toBeDefined(); expect(chunkOffset).toBe("0_0"); + + const data = await response.json(); + expect(data.runs).toHaveLength(1); + expect(data.runs[0].id).toBe(run.id); + + // Create another run without the matching tag + const runWithoutTag = await prisma.taskRun.create({ + data: { + ...run, + friendlyId: "run_5678", + runTags: ["test:tag:5678"], + }, + }); + + // Verify it's not included in the response + expect(data.runs.find(r => r.id === runWithoutTag.id)).toBeUndefined();packages/cli-v3/src/entryPoints/dev-run-worker.ts (1)
88-91
: Consider making the URL fallback more explicit. While the URL fallback chain works, it could be more explicit for better maintainability.
Consider this more explicit approach:
const runMetadataManager = new StandardMetadataManager( apiClientManager.clientOrThrow(), - getEnvVar("TRIGGER_STREAM_URL", getEnvVar("TRIGGER_API_URL")) ?? "https://api.trigger.dev" + getEnvVar("TRIGGER_STREAM_URL") ?? getEnvVar("TRIGGER_API_URL") ?? "https://api.trigger.dev" );packages/core/src/v3/apiClient/runStream.ts (2)
59-65
: Consider simplifying the StreamPartResult type for better readability. The current mapped type could be simplified to improve readability while maintaining the same functionality.
-export type StreamPartResult<TRun, TStreams extends Record<string, any>> = {
-  [K in keyof TStreams]: {
-    type: K;
-    chunk: TStreams[K];
-    run: TRun;
-  };
-}[keyof TStreams];
+export type StreamPartResult<TRun, TStreams extends Record<string, any>> = {
+  type: keyof TStreams;
+  chunk: TStreams[keyof TStreams];
+  run: TRun;
+};
376-382
: Enhance error handling in safeParseJSON. The current implementation silently returns the raw data on parse errors. Consider adding more specific error handling or logging.
function safeParseJSON(data: string): unknown { try { return JSON.parse(data); } catch (error) { + if (error instanceof SyntaxError) { + console.warn(`Failed to parse JSON: ${error.message}`); + } return data; } }packages/cli-v3/src/entryPoints/deploy-run-worker.ts (1)
106-117
: Consider extracting stream management to a dedicated service. The current implementation tightly couples stream management with task execution. As the real-time functionality grows, consider extracting stream-related logic into a dedicated service class. This would:
- Improve separation of concerns
- Make the code more maintainable
- Allow for easier testing and mocking
- Enable reuse in other parts of the system
Example structure:
class StreamManager { constructor( private apiClient: ApiClient, private config: StreamConfig ) {} async initialize(runId: string) { // Setup streams for run } async waitForCompletion(timeoutMs: number) { // Wait for streams with timeout } startPeriodicFlush(intervalMs: number) { // Handle periodic flushing } }Also applies to: 324-328
packages/core/src/v3/workers/taskExecutor.ts (1)
498-514
: Consider architectural implications of waitUntil mechanism. The waitUntil mechanism introduces a blocking operation in the task execution lifecycle. While this is necessary for real-time stream handling, consider:
- Impact on task worker pools and concurrent execution
- Monitoring and alerting for tasks that frequently hit the timeout
- Circuit breaker pattern for degraded conditions
- Documentation for developers about proper usage of waitUntil
Consider implementing:
- Metrics collection for waitUntil timeouts and durations
- Circuit breaker to prevent system overload
- Clear documentation about when to use waitUntil in tasks
packages/react-hooks/src/hooks/useRealtime.ts (2)
408-422
: Consider adding retry logic for network interruptions. The processRealtimeRun and processRealtimeBatch functions don't handle temporary network interruptions. Consider adding retry logic for resilience.
Consider implementing exponential backoff retry logic:
async function withRetry<T>(
  operation: () => Promise<T>,
  maxRetries = 3,
  baseDelay = 1000
): Promise<T> {
  let lastError: Error;
  for (let i = 0; i < maxRetries; i++) {
    try {
      return await operation();
    } catch (error) {
      lastError = error instanceof Error ? error : new Error(String(error));
      if (error instanceof Error && error.name === "AbortError") {
        throw error; // Don't retry if explicitly aborted
      }
      await new Promise(resolve => setTimeout(resolve, baseDelay * Math.pow(2, i)));
    }
  }
  throw lastError!;
}
Also applies to: 538-551
501-523
: Consider memory management for throttled queue. The throttled queue might accumulate updates in memory if the processing is slower than the incoming rate.
Consider implementing:
- A maximum queue size limit
- A strategy for handling queue overflow (e.g., dropping oldest updates)
- Monitoring of queue size for debugging purposes
packages/core/test/runStream.test.ts (4)
75-81
: Consider improving interval cleanup in DelayedTestShapeProvider. The interval setup could potentially lead to memory leaks if the test fails before cleanup or if unsubscribed before the first shape is emitted.
Consider this safer implementation:
- const interval = setInterval(async () => {
+ let interval: NodeJS.Timeout;
+ interval = setInterval(async () => {
    if (this.unsubscribed || this.currentShapeIndex >= this.shapes.length) {
      clearInterval(interval);
      return;
    }
    await callback(this.shapes[this.currentShapeIndex++]!);
  }, 100);
90-223
: Consider adding error and edge case tests. The basic RunSubscription tests cover the happy path well, but would benefit from additional test cases:
- Error scenarios (invalid JSON in payload/output)
- Edge cases (empty payload/output)
- Invalid status transitions
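For the first bullet, a tolerant-parse pattern (mirroring the safeParseJSON behavior discussed elsewhere in this review) is easy to pin down with a test; this standalone version is a sketch, not the package export:

```typescript
// Fall back to the raw string when a payload/output chunk is not valid JSON,
// instead of throwing mid-subscription.
function safeParseJSON(data: string): unknown {
  try {
    return JSON.parse(data);
  } catch {
    return data;
  }
}
```

A test asserting both the fallback and the happy path would cover the "invalid JSON in payload/output" scenario directly.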
260-263
: Consider using constants for magic numbers. The expected count of results (3) is a magic number that could be calculated from the test data.
+ const EXPECTED_RESULTS = shapes.length + streamFactory.getStreamChunks("run_123", "openai").length;
  const results = await collectNResults(
    subscription.withStreams<{ openai: { id: string; content: string } }>(),
-   3 // 1 run + 2 stream chunks
+   EXPECTED_RESULTS
  );
583-610
: Consider enhancing error handling in collectNResults. The timeout error could include more context about the results collected so far.
setTimeout( - () => reject(new Error(`Timeout waiting for ${count} results after ${timeoutMs}ms`)), + () => reject(new Error(`Timeout waiting for ${count} results after ${timeoutMs}ms. Collected ${results.length} results so far.`)), timeoutMs )packages/core/src/v3/apiClient/index.ts (1)
682-685
: Document the browser client header behavior. The browser detection and header setting is implemented correctly, but it would be helpful to document this behavior for API consumers.
Add a JSDoc comment explaining the header's purpose:
#getHeaders(spanParentAsLink: boolean) { const headers: Record<string, string> = { "Content-Type": "application/json", Authorization: `Bearer ${this.accessToken}`, "trigger-version": VERSION, }; + /** + * Sets the x-trigger-client header to "browser" when running in a browser environment. + * This helps the server optimize its responses for browser-specific requirements. + */ if (typeof window !== "undefined" && typeof window.document !== "undefined") { headers["x-trigger-client"] = "browser"; }apps/webapp/app/services/realtimeStreams.server.ts (1)
14-14
: Consider reusing Redis clients to improve performance. Currently, a new Redis client is created for each method invocation, which can be inefficient and may lead to resource exhaustion under high load. Consider reusing a single Redis client instance or implementing a connection pool to improve performance and resource management.
Also applies to: 102-102
apps/webapp/app/services/routeBuilders/apiBuilder.server.ts (2)
386-386
: Specify radix parameter inparseInt
functionWhen using
parseInt
, it's recommended to specify the radix to avoid unexpected results. Adding the radix parameter improves code reliability.Apply this diff to specify the radix:
- if (!contentLength || parseInt(contentLength) > maxContentLength) { + if (!contentLength || parseInt(contentLength, 10) > maxContentLength) {
87-135
: Refactor duplicated parameter parsing logic into utility functions. The code for parsing
params
,searchParams
, andheaders
is repeated across multiple functions (createLoaderApiRoute
,createLoaderPATApiRoute
, andcreateActionApiRoute
). Refactoring this logic into reusable utility functions can reduce code duplication and improve maintainability.Consider extracting the parsing logic into helper functions. For example:
- Create a utility function for parsing
params
with Zod schemas.- Create a utility function for parsing
searchParams
.- Create a utility function for parsing
headers
.This approach promotes DRY (Don't Repeat Yourself) principles and simplifies future updates.
Also applies to: 235-283, 391-439
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
⛔ Files ignored due to path filters (1)
pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
- `.vscode/launch.json` (1 hunks)
- `apps/webapp/app/env.server.ts` (1 hunks)
- `apps/webapp/app/routes/api.v1.packets.$.ts` (2 hunks)
- `apps/webapp/app/routes/api.v1.projects.$projectRef.runs.ts` (1 hunks)
- `apps/webapp/app/routes/api.v1.runs.ts` (1 hunks)
- `apps/webapp/app/routes/api.v1.tasks.$taskId.trigger.ts` (2 hunks)
- `apps/webapp/app/routes/api.v3.runs.$runId.ts` (1 hunks)
- `apps/webapp/app/routes/realtime.v1.batches.$batchId.ts` (2 hunks)
- `apps/webapp/app/routes/realtime.v1.runs.$runId.ts` (2 hunks)
- `apps/webapp/app/routes/realtime.v1.runs.ts` (2 hunks)
- `apps/webapp/app/routes/realtime.v1.streams.$runId.$streamId.ts` (1 hunks)
- `apps/webapp/app/routes/realtime.v1.streams.test.ts` (1 hunks)
- `apps/webapp/app/services/apiAuth.server.ts` (1 hunks)
- `apps/webapp/app/services/authorization.server.ts` (1 hunks)
- `apps/webapp/app/services/httpAsyncStorage.server.ts` (1 hunks)
- `apps/webapp/app/services/realtimeClient.server.ts` (4 hunks)
- `apps/webapp/app/services/realtimeStreams.server.ts` (1 hunks)
- `apps/webapp/app/services/realtimeStreamsGlobal.server.ts` (1 hunks)
- `apps/webapp/app/services/routeBuiilders/apiBuilder.server.ts` (0 hunks)
- `apps/webapp/app/services/routeBuilders/apiBuilder.server.ts` (1 hunks)
- `apps/webapp/app/utils/apiCors.ts` (1 hunks)
- `apps/webapp/app/utils/longPollingFetch.ts` (1 hunks)
- `apps/webapp/app/v3/environmentVariables/environmentVariablesRepository.server.ts` (1 hunks)
- `apps/webapp/app/v3/services/createBackgroundWorker.server.ts` (1 hunks)
- `apps/webapp/server.ts` (2 hunks)
- `apps/webapp/test/realtimeClient.test.ts` (4 hunks)
- `apps/webapp/test/realtimeStreams.test.ts` (1 hunks)
- `apps/webapp/test/utils/streams.ts` (1 hunks)
- `docker/docker-compose.yml` (1 hunks)
- `internal-packages/database/prisma/migrations/20241112111156_add_description_to_background_worker_task/migration.sql` (1 hunks)
- `internal-packages/database/prisma/schema.prisma` (1 hunks)
- `internal-packages/testcontainers/src/utils.ts` (1 hunks)
- `packages/cli-v3/src/build/extensions.ts` (1 hunks)
- `packages/cli-v3/src/config.ts` (1 hunks)
- `packages/cli-v3/src/entryPoints/deploy-run-worker.ts` (4 hunks)
- `packages/cli-v3/src/entryPoints/dev-run-worker.ts` (4 hunks)
- `packages/core/package.json` (3 hunks)
- `packages/core/src/v3/apiClient/index.ts` (6 hunks)
- `packages/core/src/v3/apiClient/runStream.ts` (6 hunks)
- `packages/core/src/v3/apiClient/stream.ts` (1 hunks)
- `packages/core/src/v3/build/extensions.ts` (1 hunks)
- `packages/core/src/v3/idempotencyKeys.ts` (1 hunks)
- `packages/core/src/v3/index.ts` (1 hunks)
- `packages/core/src/v3/runMetadata/index.ts` (1 hunks)
- `packages/core/src/v3/runMetadata/manager.ts` (6 hunks)
- `packages/core/src/v3/runMetadata/metadataStream.ts` (1 hunks)
- `packages/core/src/v3/runMetadata/noopManager.ts` (1 hunks)
- `packages/core/src/v3/runMetadata/types.ts` (1 hunks)
- `packages/core/src/v3/runtime/devRuntimeManager.ts` (1 hunks)
- `packages/core/src/v3/schemas/resources.ts` (1 hunks)
- `packages/core/src/v3/schemas/schemas.ts` (1 hunks)
- `packages/core/src/v3/types/index.ts` (1 hunks)
- `packages/core/src/v3/types/tasks.ts` (5 hunks)
- `packages/core/src/v3/types/tools.ts` (1 hunks)
- `packages/core/src/v3/utils/getEnv.ts` (1 hunks)
- `packages/core/src/v3/utils/globals.ts` (2 hunks)
- `packages/core/src/v3/utils/ioSerialization.ts` (5 hunks)
- `packages/core/src/v3/wait-until-api.ts` (1 hunks)
- `packages/core/src/v3/waitUntil/index.ts` (1 hunks)
- `packages/core/src/v3/waitUntil/manager.ts` (1 hunks)
- `packages/core/src/v3/waitUntil/types.ts` (1 hunks)
- `packages/core/src/v3/workers/index.ts` (1 hunks)
- `packages/core/src/v3/workers/taskExecutor.ts` (3 hunks)
- `packages/core/test/runStream.test.ts` (1 hunks)
- `packages/core/test/standardMetadataManager.test.ts` (1 hunks)
- `packages/react-hooks/package.json` (1 hunks)
- `packages/react-hooks/src/contexts.tsx` (1 hunks)
- `packages/react-hooks/src/hooks/useApiClient.ts` (1 hunks)
- `packages/react-hooks/src/hooks/useRealtime.ts` (1 hunks)
- `packages/react-hooks/src/hooks/useRealtimeBatch.ts` (0 hunks)
- `packages/react-hooks/src/hooks/useRealtimeRun.ts` (0 hunks)
- `packages/react-hooks/src/hooks/useRealtimeRunsWithTag.ts` (0 hunks)
- `packages/react-hooks/src/hooks/useTaskTrigger.ts` (1 hunks)
- `packages/react-hooks/src/index.ts` (1 hunks)
- `packages/react-hooks/src/utils/throttle.ts` (1 hunks)
- `packages/rsc/CHANGELOG.md` (1 hunks)
- `packages/rsc/README.md` (1 hunks)
- `packages/rsc/package.json` (1 hunks)
- `packages/rsc/src/build.ts` (1 hunks)
- `packages/rsc/src/index.ts` (1 hunks)
- `packages/rsc/src/sourceDir-cjs.cts` (1 hunks)
- `packages/rsc/src/sourceDir.ts` (1 hunks)
⛔ Files not processed due to max files limit (25)
- packages/rsc/tsconfig.json
- packages/trigger-sdk/package.json
- packages/trigger-sdk/src/v3/auth.ts
- packages/trigger-sdk/src/v3/idempotencyKeys.ts
- packages/trigger-sdk/src/v3/index.ts
- packages/trigger-sdk/src/v3/metadata.ts
- packages/trigger-sdk/src/v3/runs.ts
- packages/trigger-sdk/src/v3/shared.ts
- packages/trigger-sdk/src/v3/tasks.ts
- packages/trigger-sdk/src/v3/waitUntil.ts
- references/nextjs-realtime/next.config.mjs
- references/nextjs-realtime/package.json
- references/nextjs-realtime/src/app/ai/[id]/ClientAiDetails.tsx
- references/nextjs-realtime/src/app/ai/[id]/page.tsx
- references/nextjs-realtime/src/app/page.tsx
- references/nextjs-realtime/src/app/runs/[id]/ClientRunDetails.tsx
- references/nextjs-realtime/src/components/RunDetails.tsx
- references/nextjs-realtime/src/components/TriggerButton.tsx
- references/nextjs-realtime/src/components/TriggerButtonWithStreaming.tsx
- references/nextjs-realtime/src/components/UploadImageDisplay.tsx
- references/nextjs-realtime/src/components/ui/dialog.tsx
- references/nextjs-realtime/src/trigger/ai.ts
- references/nextjs-realtime/src/trigger/example.ts
- references/nextjs-realtime/src/trigger/rsc.tsx
- references/nextjs-realtime/trigger.config.ts
💤 Files with no reviewable changes (4)
- apps/webapp/app/services/routeBuiilders/apiBuilder.server.ts
- packages/react-hooks/src/hooks/useRealtimeBatch.ts
- packages/react-hooks/src/hooks/useRealtimeRun.ts
- packages/react-hooks/src/hooks/useRealtimeRunsWithTag.ts
✅ Files skipped from review due to trivial changes (4)
- apps/webapp/app/routes/api.v1.projects.$projectRef.runs.ts
- apps/webapp/app/routes/api.v1.runs.ts
- apps/webapp/app/routes/api.v3.runs.$runId.ts
- apps/webapp/app/routes/realtime.v1.streams.$runId.$streamId.ts
🚧 Files skipped from review as they are similar to previous changes (51)
- .vscode/launch.json
- apps/webapp/app/env.server.ts
- apps/webapp/app/routes/realtime.v1.batches.$batchId.ts
- apps/webapp/app/routes/realtime.v1.runs.$runId.ts
- apps/webapp/app/routes/realtime.v1.runs.ts
- apps/webapp/app/routes/realtime.v1.streams.test.ts
- apps/webapp/app/services/apiAuth.server.ts
- apps/webapp/app/services/authorization.server.ts
- apps/webapp/app/services/httpAsyncStorage.server.ts
- apps/webapp/app/services/realtimeStreamsGlobal.server.ts
- apps/webapp/app/utils/apiCors.ts
- apps/webapp/app/utils/longPollingFetch.ts
- apps/webapp/app/v3/environmentVariables/environmentVariablesRepository.server.ts
- apps/webapp/app/v3/services/createBackgroundWorker.server.ts
- apps/webapp/test/realtimeStreams.test.ts
- apps/webapp/test/utils/streams.ts
- docker/docker-compose.yml
- internal-packages/database/prisma/migrations/20241112111156_add_description_to_background_worker_task/migration.sql
- internal-packages/database/prisma/schema.prisma
- internal-packages/testcontainers/src/utils.ts
- packages/cli-v3/src/build/extensions.ts
- packages/cli-v3/src/config.ts
- packages/core/src/v3/apiClient/stream.ts
- packages/core/src/v3/build/extensions.ts
- packages/core/src/v3/index.ts
- packages/core/src/v3/runMetadata/noopManager.ts
- packages/core/src/v3/runMetadata/types.ts
- packages/core/src/v3/runtime/devRuntimeManager.ts
- packages/core/src/v3/schemas/resources.ts
- packages/core/src/v3/schemas/schemas.ts
- packages/core/src/v3/types/index.ts
- packages/core/src/v3/types/tasks.ts
- packages/core/src/v3/types/tools.ts
- packages/core/src/v3/utils/getEnv.ts
- packages/core/src/v3/utils/globals.ts
- packages/core/src/v3/utils/ioSerialization.ts
- packages/core/src/v3/wait-until-api.ts
- packages/core/src/v3/waitUntil/manager.ts
- packages/core/src/v3/waitUntil/types.ts
- packages/core/src/v3/workers/index.ts
- packages/core/test/standardMetadataManager.test.ts
- packages/react-hooks/package.json
- packages/react-hooks/src/contexts.tsx
- packages/react-hooks/src/index.ts
- packages/react-hooks/src/utils/throttle.ts
- packages/rsc/CHANGELOG.md
- packages/rsc/README.md
- packages/rsc/package.json
- packages/rsc/src/index.ts
- packages/rsc/src/sourceDir-cjs.cts
- packages/rsc/src/sourceDir.ts
🧰 Additional context used
🪛 Biome
packages/core/src/v3/runMetadata/index.ts
[error] 53-53: The function should not return a value because its return type is void.
The function is here:
'void' signals the absence of value. The returned value is likely to be ignored by the caller.
(lint/correctness/noVoidTypeReturn)
[error] 57-57: The function should not return a value because its return type is void.
The function is here:
'void' signals the absence of value. The returned value is likely to be ignored by the caller.
(lint/correctness/noVoidTypeReturn)
[error] 61-61: The function should not return a value because its return type is void.
The function is here:
'void' signals the absence of value. The returned value is likely to be ignored by the caller.
(lint/correctness/noVoidTypeReturn)
[error] 65-65: The function should not return a value because its return type is void.
The function is here:
'void' signals the absence of value. The returned value is likely to be ignored by the caller.
(lint/correctness/noVoidTypeReturn)
packages/core/src/v3/runMetadata/metadataStream.ts
[error] 15-15: void is confusing inside a union type.
Unsafe fix: Use undefined instead.
(lint/suspicious/noConfusingVoidType)
[error] 43-43: void is confusing inside a union type.
Unsafe fix: Use undefined instead.
(lint/suspicious/noConfusingVoidType)
packages/core/src/v3/waitUntil/index.ts
[error] 28-28: Using this in a static context can be confusing.
this refers to the class.
Unsafe fix: Use the class name instead.
(lint/complexity/noThisInStatic)
[error] 29-29: Using this in a static context can be confusing.
this refers to the class.
Unsafe fix: Use the class name instead.
(lint/complexity/noThisInStatic)
[error] 32-32: Using this in a static context can be confusing.
this refers to the class.
Unsafe fix: Use the class name instead.
(lint/complexity/noThisInStatic)
[error] 44-44: The function should not return a value because its return type is void.
The function is here:
'void' signals the absence of value. The returned value is likely to be ignored by the caller.
(lint/correctness/noVoidTypeReturn)
packages/core/test/runStream.test.ts
[error] 574-581: Do not export from a test file.
(lint/suspicious/noExportsInTest)
🔇 Additional comments (43)
packages/react-hooks/src/hooks/useApiClient.ts (2)
3-10
: LGTM! Well-structured type definition.
The imports are appropriate, and the `UseApiClientOptions` type is well-defined with clear optional fields.
12-27
: LGTM! Clean implementation with proper precedence.
The hook is well-implemented with:
- Correct precedence (options override context)
- Proper type safety
- Clear default values
- Appropriate merging of request options
packages/core/src/v3/waitUntil/index.ts (2)
1-5
: LGTM! Clean imports and constant definition.
The imports are well-organized, and the API_NAME constant is appropriately defined.
35-54
: LGTM! Clean implementation with proper delegation.
The implementation follows good practices:
- Clear separation of concerns with the global manager
- Proper fallback to NOOP_MANAGER
- Clean method delegation
🧰 Tools
🪛 Biome
[error] 44-44: The function should not return a value because its return type is void.
The function is here:
'void' signals the absence of value. The returned value is likely to be ignored by the caller.
(lint/correctness/noVoidTypeReturn)
apps/webapp/app/routes/api.v1.packets.$.ts (1)
43-66
: Review CORS configuration for security implications
The `corsStrategy: "all"` setting allows unrestricted cross-origin access to this endpoint. While this might be intentional for public API access, it's important to verify if this level of access is required and secure for your use case.
Let's check if other API routes have similar CORS settings:
✅ Verification successful
CORS configuration is appropriately secured
The `corsStrategy: "all"` setting in this endpoint is justified and properly implemented:
- The endpoint generates presigned URLs for GET operations, which is typically a public API operation
- The implementation includes multiple security layers:
  - JWT authentication is required (`allowJWT: true`)
  - Environment-specific authentication checks
- The CORS implementation uses secure defaults:
  - 5-minute max age
  - Proper header validation
  - Uses the battle-tested `remix-utils/cors` package
- This configuration aligns with other API endpoints in the codebase that handle similar public-facing operations (runs, streams, tasks)
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check CORS configurations across API routes
# Look for corsStrategy settings in API route files
rg -A 2 'corsStrategy:' 'app/routes/api'
Length of output: 93
Script:
#!/bin/bash
# Let's try searching in the entire codebase for corsStrategy patterns
rg -A 2 'corsStrategy:' .
Length of output: 2109
Script:
#!/bin/bash
# Let's also look for createLoaderApiRoute usage patterns
rg -A 2 'createLoaderApiRoute' .
Length of output: 3850
Script:
#!/bin/bash
# Let's check the actual file structure to ensure we're searching in the right place
fd api.v1
Length of output: 4793
Script:
#!/bin/bash
# Let's check the implementation of createLoaderApiRoute to understand the CORS strategy
cat ./apps/webapp/app/services/routeBuilders/apiBuilder.server.ts
Length of output: 16749
Script:
#!/bin/bash
# Let's also check if there's any CORS configuration type definition
rg -A 2 'type.*CorsStrategy' .
Length of output: 32
Script:
#!/bin/bash
# Let's check the implementation of apiCors utility
cat ./apps/webapp/app/utils/apiCors.ts
Length of output: 905
packages/core/src/v3/runMetadata/index.ts (4)
52-54
: Refer to previous review comment
A comprehensive review comment already exists for this code segment, addressing void return type, access modifiers, and input validation concerns.
🧰 Tools
🪛 Biome
[error] 53-53: The function should not return a value because its return type is void.
The function is here:
'void' signals the absence of value. The returned value is likely to be ignored by the caller.
(lint/correctness/noVoidTypeReturn)
56-58
: Refer to previous review comment
A comprehensive review comment already exists for this code segment, addressing void return type, access modifiers, and input validation concerns.
🧰 Tools
🪛 Biome
[error] 57-57: The function should not return a value because its return type is void.
The function is here:
'void' signals the absence of value. The returned value is likely to be ignored by the caller.
(lint/correctness/noVoidTypeReturn)
60-62
: Refer to previous review comment
A comprehensive review comment already exists for this code segment, addressing void return type, access modifiers, and input validation concerns.
🧰 Tools
🪛 Biome
[error] 61-61: The function should not return a value because its return type is void.
The function is here:
'void' signals the absence of value. The returned value is likely to be ignored by the caller.
(lint/correctness/noVoidTypeReturn)
68-74
: 🛠️ Refactor suggestion
Enhance stream method robustness
The stream method implementation could benefit from additional safeguards:
- Add input validation for the key parameter
- Consider type constraints for T to prevent potential runtime issues
- Ensure proper resource cleanup when the stream is aborted
Consider applying these improvements:
public stream<T>(
key: string,
value: AsyncIterable<T> | ReadableStream<T>,
signal?: AbortSignal
): Promise<AsyncIterable<T>> {
+ if (!key?.trim()) {
+ throw new Error("Key must be a non-empty string");
+ }
+ if (signal?.aborted) {
+ throw new Error("Stream aborted");
+ }
return this.#getManager().stream(key, value, signal);
}
Let's verify the stream implementation in the manager:
packages/core/src/v3/idempotencyKeys.ts (2)
4-9
: Well-documented type guard implementation!
The function correctly implements a runtime check for idempotency keys, with a helpful comment explaining why brand checking isn't possible.
25-59
: Excellent documentation with practical examples!
The JSDoc comments are comprehensive, including both basic usage and advanced scenarios with different scopes. This greatly improves the developer experience.
packages/rsc/src/build.ts (2)
1-4
: LGTM: Clean imports and exports
The imports are well-organized and the function export is clear.
88-91
: Verify the use of "worker" condition
The react-dom/server resolution uses a "worker" condition while other React modules use "react-server". This inconsistency might lead to resolution issues.
Let's verify if this is intentional:
✅ Verification successful
Let me analyze the specific file to understand the context better.
The "worker" condition for react-dom/server resolution is correct
The use of the "worker" condition specifically for `react-dom/server` resolution is intentional and correct. This module is meant to be used in a worker-like environment for server-side rendering, while other React-related modules correctly use the "react-server" condition. This is consistent with React's module resolution strategy, where server components use "react-server" and the server renderer uses "worker".
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for other uses of conditions in React-related module resolution
rg -A 2 "conditions.*worker"
rg -A 2 "conditions.*react-server"
Length of output: 2923
Script:
#!/bin/bash
# Get the context around the worker condition usage
rg -B 10 -A 10 "conditions.*worker" packages/rsc/src/build.ts
Length of output: 718
apps/webapp/app/routes/api.v1.tasks.$taskId.trigger.ts (2)
2-2
: LGTM: Clean schema update with backward compatibility
The addition of the optional "x-trigger-client" header maintains backward compatibility while enabling new client-specific functionality.
Also applies to: 21-21
38-38
: Review CORS configuration for security implications
The `corsStrategy: "all"` setting might be too permissive. Consider restricting CORS to specific origins if this endpoint doesn't need to be accessible from any domain.
packages/react-hooks/src/hooks/useTaskTrigger.ts (2)
99-103
: Verify handle availability before using real-time features
The hook uses `triggerInstance.handle?.id` without ensuring it exists, which could lead to unnecessary real-time connection attempts when the handle is not yet available.
Consider adding a condition to prevent real-time setup until handle is available:
-const realtimeInstance = useRealtimeRunWithStreams<TTask, TStreams>(triggerInstance.handle?.id, {
+const realtimeInstance = useRealtimeRunWithStreams<TTask, TStreams>(
+ triggerInstance.handle?.id,
+ triggerInstance.handle
+ ? {
+ ...options,
+ id: triggerInstance.handle.id,
+ accessToken: triggerInstance.handle.publicAccessToken ?? options?.accessToken,
+ }
+ : { enabled: false }
+);
134-134
: Verify stop functionality implementation
The `stop` function is exposed from the real-time instance, but there's no verification that the real-time connection is active before stopping.
Run this script to verify the stop implementation in useRealtimeRun:
packages/core/src/v3/runMetadata/manager.ts (2)
207-230
: Fix timeout handling in waitForAllStreams
The existing review comment about timeout handling in `waitForAllStreams` is still valid. The current implementation using `Promise.allSettled` with `Promise.race` won't properly handle timeouts.
Please refer to the previous review comment for the detailed fix suggestion.
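A minimal sketch of the kind of fix that comment points at: race the settled streams against a single timer and clear the timer either way. The function name mirrors the review's `waitForAllStreams`; the signature and 30s default are assumptions, not the repository's exact code.

```typescript
// Resolves true if every stream settles before the deadline, false on timeout.
async function waitForAllStreams(
  streams: Promise<unknown>[],
  timeoutMs = 30_000
): Promise<boolean> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<"timeout">((resolve) => {
    timer = setTimeout(() => resolve("timeout"), timeoutMs);
  });
  try {
    const winner = await Promise.race([
      Promise.allSettled(streams).then(() => "done" as const),
      timeout,
    ]);
    return winner === "done";
  } finally {
    clearTimeout(timer); // avoid a dangling timer keeping the process alive
  }
}
```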
Line range hint 232-251
: LGTM! Good error handling
The flush implementation properly handles errors and maintains state consistency by restoring hasChanges flag on failure.
apps/webapp/app/services/realtimeClient.server.ts (4)
Line range hint 40-72
: LGTM: Clean implementation of backward-compatible client versioning
The addition of the optional `clientVersion` parameter to public methods is well-structured and maintains backward compatibility while supporting new client versions.
119-122
: LGTM: Clean implementation of header compatibility
The conditional header rewriting for backward compatibility is well-implemented and properly integrated with the existing error handling and concurrency management.
Also applies to: 173-173
257-257
: LGTM: Consistent parameter naming
The update to use "handle" is consistent with the new URL parameter naming convention.
Line range hint 40-257
: Verify backward compatibility with existing clients
While the implementation looks solid, we should verify that existing clients continue to work as expected.
✅ Verification successful
Backward compatibility is properly implemented and tested
The test suite confirms that the changes maintain backward compatibility:
- For older clients (without version), it correctly:
  - Uses the `shape_id` parameter instead of `handle`
  - Rewrites the `electric-handle` response header to `electric-shape-id`
  - Rewrites the `electric-offset` response header to `electric-chunk-last-offset`
- For newer clients (with version), it uses the new parameter and header names
- Both old and new client implementations pass the same test scenarios
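The compatibility shim the tests describe can be sketched as a pure function over response headers. The header names follow the bullets above; the function itself is an illustration, not the actual `RealtimeClient` code.

```typescript
// Rewrite new Electric header names back to the legacy ones for clients
// that did not send a client version.
function rewriteHeadersForLegacyClients(headers: Headers, clientVersion?: string): Headers {
  if (clientVersion) return headers; // versioned clients keep the new names

  const rewritten = new Headers(headers);
  const renames: Array<[string, string]> = [
    ["electric-handle", "electric-shape-id"],
    ["electric-offset", "electric-chunk-last-offset"],
  ];
  for (const [from, to] of renames) {
    const value = rewritten.get(from);
    if (value !== null) {
      rewritten.set(to, value);
      rewritten.delete(from);
    }
  }
  return rewritten;
}
```

Keeping the rewrite in one pure function makes the old-client/new-client behavior easy to cover with the same test scenarios.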
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for existing client implementations to ensure they're not affected
rg -A 3 "new RealtimeClient\(" --type typescript
Length of output: 85
Script:
#!/bin/bash
# Search for existing client implementations using ripgrep without file type restriction
rg -A 3 "new RealtimeClient\("
Length of output: 1267
Script:
#!/bin/bash
# Search for imports of RealtimeClient to find usage patterns
rg "import.*RealtimeClient"
Length of output: 258
Script:
#!/bin/bash
# Search for class definition to understand the constructor parameters
ast-grep --pattern 'class RealtimeClient {
constructor($_) {
$$$
}
}'
Length of output: 79
Script:
#!/bin/bash
# Let's examine the test file to understand how the client is used and tested
cat apps/webapp/test/realtimeClient.test.ts
Length of output: 9908
Script:
#!/bin/bash
# Look for any references to shape_id to understand the backward compatibility handling
rg "shape_id" -A 3
Length of output: 1047
packages/cli-v3/src/entryPoints/dev-run-worker.ts (3)
18-19
: LGTM: New imports for stream handling
The addition of the `waitUntil` and `apiClientManager` imports is appropriate for implementing the real-time streams functionality.
93-99
: LGTM: Proper stream completion handling setup
The WaitUntil setup correctly ensures all streams complete before finishing the run. The implementation follows good practices by:
- Using a standard manager
- Setting it globally
- Properly registering stream completion checks
294-295
: LGTM: Proper run ID assignment
The assignment of the run ID to the metadata manager is correctly placed at the start of task execution.
packages/core/src/v3/apiClient/runStream.ts (2)
113-140
: Error handling still needed in subscribe method
The implementation still lacks proper error handling for the stream processing loop.
This issue was previously identified in a past review. Please refer to the existing comment about adding error handling in the `subscribe` method.
223-264
: Active stream subscriptions management still needed
The implementation still lacks proper cleanup of stream subscriptions.
This issue was previously identified in a past review. Please refer to the existing comment about managing active stream subscriptions to prevent memory leaks.
packages/cli-v3/src/entryPoints/deploy-run-worker.ts (3)
18-19
: LGTM: Import additions align with real-time streams feature
The new imports for `waitUntil`, `apiClientManager`, and `StandardWaitUntilManager` are appropriate for implementing the real-time streams functionality.
Also applies to: 39-39
324-328
: LGTM: Enhanced metadata management with configurable flush
The metadata management improvements are well-implemented with:
- Proper runId tracking for streams
- Configurable periodic flush with a sensible default (1000ms)
Consider monitoring the flush interval in production to ensure it's optimal for your use case.
✅ Verification successful
Let me check the implementation of the flush interval in the dynamic scheduler to ensure consistency.
Let me try a different approach to check the dynamic scheduler implementation.
Consistent flush interval implementation confirmed
The flush interval implementation is consistent across the codebase:
- `deploy-run-worker.ts` and `dev-run-worker.ts` use the same environment variable `TRIGGER_RUN_METADATA_FLUSH_INTERVAL` with a default of 1000ms
- The `DynamicFlushScheduler` class properly handles flush intervals through its configuration, maintaining the same timing pattern
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for other flush interval configurations to ensure consistency
rg "TRIGGER_RUN_METADATA_FLUSH_INTERVAL|FLUSH_INTERVAL"
Length of output: 632
Script:
#!/bin/bash
# Check the dynamic scheduler implementation for flush interval configuration
ast-grep --pattern 'class DynamicFlushScheduler {
$$$
constructor($$$) {
$$$
}
$$$
}'
Length of output: 99
Script:
#!/bin/bash
# Check the content of the dynamic scheduler file
cat apps/webapp/app/v3/dynamicFlushScheduler.server.ts
Length of output: 1976
106-109
: Verify apiClient initialization before usage
The `apiClientManager.clientOrThrow()` call could throw if the client isn't properly initialized. Consider adding error handling or ensuring the client is initialized earlier in the bootstrap process.
packages/core/src/v3/workers/taskExecutor.ts (2)
6-6
: LGTM: Import statement is correctly updated
The addition of `waitUntil` to the existing imports is appropriate and aligns with its usage in the new functionality.
226-226
: Verify impact of waitUntil on task completion time
The addition of `#blockForWaitUntil` in the finally block ensures proper handling of pending conditions, but could potentially delay task completion by up to 30 seconds. Consider monitoring task execution times to assess the impact.
✅ Verification successful
Based on the search results, I can now generate the final response since I have enough information about the `waitUntil` implementation and its usage:
`#blockForWaitUntil` is a standard practice with an appropriate timeout
The 30-second timeout in `#blockForWaitUntil` is a well-established pattern across the codebase, used consistently in various timeout scenarios. The implementation:
- Is properly instrumented with tracing spans for monitoring
- Only blocks when there are pending conditions (`requiresResolving()`)
- Is used in both dev and prod runtime managers with the same timeout value
- Has existing performance monitoring through tracing spans
The placement in the finally block is correct as it ensures proper cleanup of async operations, and the timeout value aligns with other similar operations in the codebase.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for other waitUntil usages and timeout configurations
rg -A 3 "waitUntil|setTimeout.*30.*000|30_000" --type ts
# Look for any existing performance monitoring or timing code
rg "performance\.|console\.time|metrics\." --type ts
Length of output: 24045
packages/react-hooks/src/hooks/useRealtime.ts (3)
9-13
: LGTM: Well-structured options type with experimental feature flag.
The `UseRealtimeRunOptions` type is well-defined with clear optional properties. The `experimental_throttleInMs` flag is appropriately prefixed to indicate its experimental nature.
425-441
: LGTM: Well-implemented ordering logic.
The `insertRunShapeInOrder` and `insertRunShape` functions correctly handle:
- Duplicate prevention
- Proper ordering based on run number/timestamp
- Immutable state updates
Also applies to: 460-478
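The behavior described above can be sketched as a small pure helper. The `RunShape` type is simplified to what ordering needs; the real hooks operate on richer run shapes.

```typescript
type RunShape = { id: string; number: number };

// Insert a run into an already-sorted array, skipping duplicates and
// returning a new array (immutable update, as React state requires).
function insertRunInOrder(runs: RunShape[], run: RunShape): RunShape[] {
  if (runs.some((existing) => existing.id === run.id)) {
    return runs; // duplicate: keep the state reference stable
  }
  const index = runs.findIndex((existing) => existing.number > run.number);
  return index === -1
    ? [...runs, run]
    : [...runs.slice(0, index), run, ...runs.slice(index)];
}
```

Returning the original array unchanged for duplicates also means React can skip a re-render, since the state reference is identical.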
97-111
: LGTM: Proper cleanup in useEffect.
The useEffect hook properly:
- Handles the enabled flag
- Cleans up subscriptions on unmount
- Has correct dependency array
packages/core/package.json (3)
200-200
: Verify security of the new eventsource-parser dependency.
The addition of eventsource-parser is aligned with the real-time streams feature.
#!/bin/bash
# Check for known vulnerabilities in eventsource-parser
# Query for security advisories
gh api graphql -f query='
{
securityVulnerabilities(first: 5, ecosystem: NPM, package: "eventsource-parser") {
nodes {
advisory {
summary
severity
publishedAt
}
vulnerableVersionRange
firstPatchedVersion {
identifier
}
}
}
}'
# Check usage patterns
rg "eventsource-parser" --type ts
212-218
: Verify AI-related dependencies align with toolTask requirements.
The new AI-related dependencies (@ai-sdk/provider-utils and ai) support the toolTask integration objective.
#!/bin/bash
# Analyze AI package usage and compatibility
# Check for AI-related imports and usage patterns
rg -l "import.*@ai-sdk/provider-utils|import.*ai" --type ts
# Verify test server usage
rg -l "@epic-web/test-server" --type ts
185-185
: Verify compatibility with the upgraded @electric-sql/client.
The upgrade from 0.6.3 to 0.7.1 is a minor version bump that could introduce breaking changes.
✅ Verification successful
The @electric-sql/client upgrade appears safe to proceed
The package is used minimally in the codebase, with only one direct usage in `packages/core/src/v3/apiClient/stream.ts`, where it specifically imports `ShapeStream` and `Shape`. The upgrade from 0.6.3 to 0.7.1 only touches these stream-handling utilities, which is a relatively stable API surface, so the limited scope of usage suggests minimal risk from breaking changes.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Check for breaking changes in @electric-sql/client upgrade
# Search for @electric-sql/client usage patterns that might be affected
rg -l "@electric-sql/client" | xargs rg "ElectricClient|ElectricDatabase|electrify"
# Check the changelog or release notes
curl -s "https://registry.npmjs.org/@electric-sql%2Fclient" | jq -r '."dist-tags".latest, .versions["0.7.1"].dist.tarball'
Length of output: 12949
Script:
#!/bin/bash
# Let's search for any usage of @electric-sql/client in the codebase
rg -l "@electric-sql/client"
# Then let's look for specific usage patterns
rg "@electric-sql/client" -A 5 -B 5
Length of output: 3284
packages/core/src/v3/apiClient/index.ts (2)
48-48
: LGTM: Type exports are well-organized
The addition of `RealtimeRun` and other stream-related type exports aligns well with the PR's real-time functionality objectives.
Also applies to: 92-99
Line range hint 614-650
: Verify resource cleanup on AbortSignal
The addition of AbortSignal support is good for cancellation control. Let's verify that all resources are properly cleaned up when the signal is aborted.
✅ Verification successful
Let me gather more information about the `runShapeStream` implementation since it wasn't found in the first search.
Let me search for the SSE (Server-Sent Events) implementation since we found that `runShapeStream` uses `SSEStreamSubscriptionFactory`.
Let me search for the actual stream cleanup implementation since we see it uses `EventSourceParserStream`.
Resource cleanup on AbortSignal is properly implemented
The implementation shows proper resource cleanup when AbortSignal is triggered:
- The AbortSignal is correctly passed through to the fetch request in `SSEStreamSubscription`
- The `RunSubscription` class maintains its own `AbortController` and properly handles cleanup:
  - Calls `unsubscribe()` on cancellation
  - Aborts the controller and calls `unsubscribeShape()` during cleanup
  - Automatically closes and aborts when the run is complete (if `closeOnComplete` is true)
- The stream is properly terminated through the ReadableStream controller
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for cleanup handling in the runShapeStream implementation
ast-grep --pattern 'function runShapeStream($_) {
$$$
signal$_
$$$
}'
# Search for any event listener cleanup
rg -A 5 'addEventListener.*abort'
Length of output: 2985
Script:
#!/bin/bash
# Search for runShapeStream implementation
rg -A 20 "export function runShapeStream"
# Search for WebSocket or EventSource cleanup
rg -A 5 "(WebSocket|EventSource).*(close|destroy)"
Length of output: 2116
Script:
#!/bin/bash
# Search for SSEStreamSubscriptionFactory implementation
rg -A 20 "class SSEStreamSubscriptionFactory"
# Search for EventSource cleanup in the subscription implementation
rg -A 10 "new EventSource"
Length of output: 3187
Script:
#!/bin/bash
# Search for SSEStreamSubscription implementation
rg -A 15 "class SSEStreamSubscription"
# Search for cleanup in EventSourceParserStream
rg -A 10 "class EventSourceParserStream"
# Search for any abort handling in the stream
rg -A 5 "abort|cleanup|close" packages/core/src/v3/apiClient/runStream.ts
Length of output: 4009
apps/webapp/app/services/realtimeStreams.server.ts (2)

46-51: Safely parse Redis stream fields to access data

The `fields` array from Redis contains field-value pairs. Accessing `fields[1]` directly assumes a fixed structure, which might lead to errors if the data format changes. Parse `fields` into an object to safely access the `data` field.
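A minimal sketch of that parsing (assuming the flat `[field, value, field, value, …]` shape Redis stream entries use; the helper name is illustrative):

```typescript
// Convert a Redis stream entry's flat field-value array into an object,
// so fields can be looked up by name instead of by position.
function parseRedisFields(fields: string[]): Record<string, string> {
  const entry: Record<string, string> = {};

  for (let i = 0; i + 1 < fields.length; i += 2) {
    entry[fields[i]] = fields[i + 1];
  }

  return entry;
}
```

With this, `entry.data` stays correct even if the field order or count changes.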
121-150: Handle partial lines when reading from the stream to avoid data loss

Splitting `value` directly by `\n` can result in incomplete lines being processed incorrectly. Buffer incomplete lines between reads to ensure data integrity.
// @ts-expect-error
duplex: "half",
🛠️ Refactor suggestion
Replace @ts-expect-error with proper typing

The `@ts-expect-error` for the `duplex` option can be replaced with proper typing using the experimental RequestInit type.
- // @ts-expect-error
- duplex: "half",
+ duplex: "half" as RequestDuplexMode,
Add this type import at the top:
type RequestDuplexMode = "half";
const serverStream = new ReadableStream({
  async pull(controller) {
    try {
      const { value, done } = await serverIterator.next();

      if (done) {
        controller.close();
        return;
      }

      controller.enqueue(JSON.stringify(value) + "\n");
    } catch (err) {
      controller.error(err);
    }
  },
  cancel: () => this.controller.abort(),
});
🛠️ Refactor suggestion
Enhance error handling in ReadableStream
The current error handling could be improved by:
- Adding specific error types
- Including error details in the controller error
- Implementing proper cleanup in the cancel callback
 const serverStream = new ReadableStream({
   async pull(controller) {
     try {
       const { value, done } = await serverIterator.next();
       if (done) {
         controller.close();
         return;
       }
       controller.enqueue(JSON.stringify(value) + "\n");
     } catch (err) {
-      controller.error(err);
+      controller.error(err instanceof Error ? err : new Error('Unknown error during stream processing'));
     }
   },
-  cancel: () => this.controller.abort(),
+  cancel: (reason) => {
+    console.error('Stream cancelled:', reason);
+    this.controller.abort(reason);
+  },
 });
    context.logger.debug("Failed to resolve jsx-dev-runtime", { error });
  }

  return undefined;
});
🛠️ Refactor suggestion
Improve error handling in resolution failures
The current implementation silently fails by returning undefined when resolution fails. Consider adding more detailed error logging or propagating the error when appropriate.
Consider updating the error handling:
} catch (error) {
- context.logger.debug("Failed to resolve ...", { error });
+ context.logger.error("Failed to resolve module", {
+ module: args.path,
+ error: error instanceof Error ? error.message : String(error),
+ stack: error instanceof Error ? error.stack : undefined
+ });
+ // Optionally throw if this is a critical module
+ if (args.path === "react" || args.path === "react-dom") {
+ throw error;
+ }
}
Also applies to: 57-61, 78-82, 99-103
context.registerPlugin({
  name: "rsc",
  async setup(build) {
    const { resolvePathSync: esmResolveSync } = await import("mlly");
🛠️ Refactor suggestion
Add error handling for plugin setup
The dynamic import and plugin registration lack proper error handling. Consider wrapping these operations in try-catch blocks to handle potential failures gracefully.
Consider updating the code:
context.registerPlugin({
name: "rsc",
async setup(build) {
- const { resolvePathSync: esmResolveSync } = await import("mlly");
+ let esmResolveSync;
+ try {
+ const mlly = await import("mlly");
+ esmResolveSync = mlly.resolvePathSync;
+ } catch (error) {
+ context.logger.error("Failed to import mlly", { error });
+ throw error;
+ }
Committable suggestion skipped: line range outside the PR's diff.
packages/rsc/src/build.ts
Outdated
build.onResolve({ filter: /^react\/jsx-dev-runtime$/ }, (args) => {
  context.logger.debug("Resolving jsx-dev-runtime", { args });

  try {
    const resolvedPath = esmResolveSync(args.path, {
      url: sourceDir,
      conditions: ["react-server"],
    });

    context.logger.debug("Resolved jsx-dev-runtime", { resolvedPath });

    return {
      path: resolvedPath,
    };
  } catch (error) {
    context.logger.debug("Failed to resolve jsx-dev-runtime", { error });
  }

  return undefined;
});

build.onResolve({ filter: /^react\/jsx-runtime$/ }, (args) => {
  context.logger.debug("Resolving jsx-runtime", { args });

  try {
    const resolvedPath = esmResolveSync(args.path, {
      url: sourceDir,
      conditions: ["react-server"],
    });

    context.logger.debug("Resolved jsx-runtime", { resolvedPath });

    return {
      path: resolvedPath,
    };
  } catch (error) {
    context.logger.debug("Failed to resolve jsx-runtime", { error });
  }

  return undefined;
});

build.onResolve({ filter: /^(react|react-dom)$/ }, (args) => {
  context.logger.debug("Resolving react", { args });

  try {
    const resolvedPath = esmResolveSync(args.path, {
      url: sourceDir,
      conditions: ["react-server"],
    });

    context.logger.debug("Resolved react", { resolvedPath });

    return {
      path: resolvedPath,
    };
  } catch (error) {
    context.logger.debug("Failed to resolve react", { error });
  }

  return undefined;
});

build.onResolve({ filter: /^react-dom\/server$/ }, (args) => {
  context.logger.debug("Resolving react-dom/server", { args });

  try {
    const resolvedPath = esmResolveSync(args.path, {
      url: sourceDir,
      conditions: ["worker"],
    });

    context.logger.debug("Resolved react-dom/server", { resolvedPath });

    return {
      path: resolvedPath,
    };
  } catch (error) {
    context.logger.debug("Failed to resolve react-dom/server", { error });
  }

  return undefined;
});
🛠️ Refactor suggestion
Refactor duplicate resolution logic
There's significant code duplication across the module resolution handlers. Consider extracting the common logic into a reusable function.
Here's a suggested refactor:
+ async function resolveReactModule(modulePath: string, conditions: string[] = ["react-server"]) {
+ context.logger.debug(`Resolving ${modulePath}`, { modulePath });
+ try {
+ const resolvedPath = esmResolveSync(modulePath, {
+ url: sourceDir,
+ conditions,
+ });
+ context.logger.debug(`Resolved ${modulePath}`, { resolvedPath });
+ return { path: resolvedPath };
+ } catch (error) {
+ context.logger.debug(`Failed to resolve ${modulePath}`, { error });
+ return undefined;
+ }
+ }
+
- build.onResolve({ filter: /^react\/jsx-dev-runtime$/ }, (args) => {
- context.logger.debug("Resolving jsx-dev-runtime", { args });
- // ... existing duplicate code
- });
+ build.onResolve({ filter: /^react\/jsx-dev-runtime$/ }, (args) =>
+ resolveReactModule(args.path));
- // ... other handlers
+ // Apply similar changes to other handlers
Committable suggestion skipped: line range outside the PR's diff.
type StreamUpdate = {
  type: keyof TStreams;
  chunk: any;
};

const streamQueue = createThrottledQueue<StreamUpdate>(async (updates) => {
  const nextStreamData = { ...existingDataRef.current };

  // Group updates by type
  const updatesByType = updates.reduce(
    (acc, update) => {
      if (!acc[update.type]) {
        acc[update.type] = [];
      }
      acc[update.type].push(update.chunk);
      return acc;
    },
    {} as Record<keyof TStreams, any[]>
  );

  // Apply all updates
  for (const [type, chunks] of Object.entries(updatesByType)) {
    // @ts-ignore
    nextStreamData[type] = [...(existingDataRef.current[type] || []), ...chunks];
  }

  await mutateStreamData(nextStreamData);
}, throttleInMs);
🛠️ Refactor suggestion
Consider adding error handling to the throttled queue processing.
The stream queue processing doesn't handle potential errors that could occur during mutation.
Apply this improvement:
const streamQueue = createThrottledQueue<StreamUpdate>(async (updates) => {
+ try {
const nextStreamData = { ...existingDataRef.current };
// Group updates by type
const updatesByType = updates.reduce(
(acc, update) => {
if (!acc[update.type]) {
acc[update.type] = [];
}
acc[update.type].push(update.chunk);
return acc;
},
{} as Record<keyof TStreams, any[]>
);
// Apply all updates
for (const [type, chunks] of Object.entries(updatesByType)) {
// @ts-ignore
nextStreamData[type] = [...(existingDataRef.current[type] || []), ...chunks];
}
await mutateStreamData(nextStreamData);
+ } catch (error) {
+ console.error('Error processing stream updates:', error);
+ throw error;
+ }
}, throttleInMs);
export async function convertAsyncIterableToArray<T>(iterable: AsyncIterable<T>): Promise<T[]> {
  const result: T[] = [];
  for await (const item of iterable) {
    result.push(item);
  }
  return result;
}
Move helper function to a shared test utilities file

The `convertAsyncIterableToArray` function is currently exported from a test file, which is not a recommended practice. Consider moving this and other test utilities to a shared location.

Create a new file `test/utils.ts` and move the helper functions there.
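Such a shared `test/utils.ts` might look like the following (contents illustrative; the second helper is hypothetical and only shows a typical companion):

```typescript
// Sketch of a shared test/utils.ts module. In the real file these would be
// `export`ed; shown plain here so the sketch is self-contained.

// Drain an AsyncIterable into an array (the helper moved out of the test file).
async function convertAsyncIterableToArray<T>(iterable: AsyncIterable<T>): Promise<T[]> {
  const result: T[] = [];
  for await (const item of iterable) {
    result.push(item);
  }
  return result;
}

// Hypothetical companion helper for building iterables inside tests.
async function* arrayToAsyncIterable<T>(items: T[]): AsyncIterable<T> {
  for (const item of items) {
    yield item;
  }
}
```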
🧰 Tools
🪛 Biome
[error] 574-581: Do not export from a test file.
(lint/suspicious/noExportsInTest)
const jwtHeader = response.headers.get("x-trigger-jwt");

if (typeof jwtHeader === "string") {
  return {
    ...data,
    publicAccessToken: jwtHeader,
  };
}
Add JWT validation before returning the header value
The current implementation directly returns the JWT from the `x-trigger-jwt` header without validation. This could potentially lead to security vulnerabilities if the header contains malformed or malicious JWT data.
Consider adding validation:
const jwtHeader = response.headers.get("x-trigger-jwt");
if (typeof jwtHeader === "string") {
+ try {
+ // Verify JWT structure and signature
+ const decoded = verifyJWT(jwtHeader, this.accessToken);
+ // Verify expiration
+ if (decoded.exp && decoded.exp < Math.floor(Date.now() / 1000)) {
+ throw new Error("JWT has expired");
+ }
return {
...data,
publicAccessToken: jwtHeader,
};
+ } catch (error) {
+ console.error("Invalid JWT in x-trigger-jwt header:", error);
+ // Fall through to generate a new JWT
+ }
}
Committable suggestion skipped: line range outside the PR's diff.
console.error("Error in API route:", error);
if (error instanceof Response) {
🛠️ Refactor suggestion
Replace `console.error` with `logger.error` for consistent logging

To ensure consistent logging and better log management across the application, consider replacing `console.error` with `logger.error`. This aligns error handling with the application's logging strategy.
Apply this diff to update the logging method:
- console.error("Error in API route:", error);
+ logger.error("Error in API route:", error);
Also applies to: 295-296, 504-505
Actionable comments posted: 4
🧹 Outside diff range and nitpick comments (4)
references/nextjs-realtime/trigger.config.ts (1)

Line range hint 1-11: Consider documenting real-time streaming configuration

Since this configuration enables a core feature (real-time streaming), consider adding a comment block explaining the purpose and implications of the RSC configuration.

 import { defineConfig } from "@trigger.dev/sdk/v3";
 import { rscExtension } from "@trigger.dev/rsc";

+/**
+ * Trigger.dev Configuration
+ *
+ * Enables real-time streaming through React Server Components:
+ * - Uses worker environment for isolated stream processing
+ * - Supports @trigger.dev/rsc extension for real-time data handling
+ */
 export default defineConfig({
   project: "proj_bzhdaqhlymtuhlrcgbqy",
   dirs: ["./src/trigger"],
   build: {
     extensions: [rscExtension({ reactDomEnvironment: "worker" })],
   },
 });

references/nextjs-realtime/src/trigger/rsc.tsx (1)

14-17: Consider making WeatherProps more reusable

The interface could be enhanced to support different weather data formats and temperature units.

 interface WeatherProps {
   location: string;
-  weather: string;
+  weather: {
+    temperature: number;
+    unit: "C" | "F";
+    condition: string;
+    icon?: string;
+  };
 }

.vscode/launch.json (1)

16-24: LGTM! Consider adding environment variables for test configuration.

The debug configuration for RealtimeStreams tests is well-structured and follows the established pattern. Consider adding test-specific environment variables:

 {
   "type": "node-terminal",
   "request": "launch",
   "name": "Debug realtimeStreams.test.ts",
   "command": "pnpm run test -t RealtimeStreams",
   "envFile": "${workspaceFolder}/.env",
+  "env": {
+    "NODE_ENV": "test",
+    "LOG_LEVEL": "debug"
+  },
   "cwd": "${workspaceFolder}/apps/webapp",
   "sourceMaps": true
 }

packages/rsc/src/build.ts (1)

20-21: Consider creating a new array instead of mutating existing conditions

Direct mutation of the conditions array could lead to side effects. Consider creating a new array instead.

-context.config.build.conditions ??= [];
-context.config.build.conditions.push("react-server");
+context.config.build.conditions = [
+  ...(context.config.build.conditions ?? []),
+  "react-server"
+];
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (6)
- .changeset/swift-glasses-mate.md (1 hunks)
- .vscode/launch.json (2 hunks)
- packages/rsc/src/build.ts (1 hunks)
- references/nextjs-realtime/package.json (2 hunks)
- references/nextjs-realtime/src/trigger/rsc.tsx (1 hunks)
- references/nextjs-realtime/trigger.config.ts (1 hunks)
✅ Files skipped from review due to trivial changes (1)
- .changeset/swift-glasses-mate.md
🚧 Files skipped from review as they are similar to previous changes (1)
- references/nextjs-realtime/package.json
🔇 Additional comments (6)
references/nextjs-realtime/trigger.config.ts (2)

2-2: LGTM: Import statement aligns with real-time streaming requirements

The import of `rscExtension` from `@trigger.dev/rsc` is appropriate for enabling React Server Components functionality.
7-9: Verify React Server Components configuration

The build configuration specifies a worker environment for React Server Components, which is appropriate for handling real-time streams. However, we should verify this configuration against the rest of the application setup.
✅ Verification successful
The search results provide comprehensive evidence about the React Server Components (RSC) configuration in the codebase. Let me analyze the key findings:

1. The RSC extension configuration in `trigger.config.ts` sets `reactDomEnvironment: "worker"`, which is a valid option as shown in `packages/rsc/src/build.ts`, where the type definition allows "node" | "worker" | "bun".
2. The codebase has proper RSC support through:
   - The `@trigger.dev/rsc` package
   - RSC-specific build extensions and configurations
   - Worker environment handling for RSC rendering
3. The worker environment setting is consistent with the rest of the application architecture, which heavily utilizes workers for various tasks, including real-time features.
4. The configuration appears to be properly integrated with Next.js based on the presence of Next.js configuration files and RSC-related code.

RSC configuration is correctly set up and appropriate for real-time streams

The `worker` environment for React Server Components is the right choice here, as it provides proper isolation for handling real-time streaming data. The configuration is well-integrated with the rest of the application's worker-based architecture.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify the RSC configuration is consistent across the application
# Check for Next.js configuration that might affect RSC
echo "Checking Next.js configuration..."
fd -t f "next.config." -x cat {}
# Check for other RSC-related configurations
echo "Checking for other RSC configurations..."
rg -l "reactDomEnvironment|rscExtension" --type typescript
# Check for streaming-related implementations
echo "Checking for streaming implementations..."
rg -l "useStream|createStream" --type typescript
Length of output: 1192
Script:
#!/bin/bash
# Let's try a different approach to verify RSC configuration and streaming implementations
# Check for RSC-related configurations without file type restriction
echo "Checking for RSC configurations..."
rg "reactDomEnvironment|rscExtension"
# Check for streaming-related implementations without file type restriction
echo "Checking for streaming implementations..."
rg "useStream|createStream"
# Look for any trigger or RSC-related configuration files
echo "Looking for trigger or RSC config files..."
fd "trigger.config" -x cat {}
# Check for any worker-related configurations
echo "Checking for worker configurations..."
rg "worker" -C 2
Length of output: 333921
.vscode/launch.json (1)

48-55: LGTM! Well-structured debug configuration.

The debug configuration for Next.js Realtime development is properly configured with all necessary fields and follows the established pattern in the file.
packages/rsc/src/build.ts (3)

1-7: LGTM! Well-structured type definitions

The `RSCExtensionOptions` type is well-defined with appropriate optional properties and proper type constraints for `reactDomEnvironment`.

91-94: LGTM! Well-implemented runtime condition handling

The condition selection for react-dom/server properly considers:

- Bun runtime as a priority
- User-provided reactDomEnvironment option
- Fallback to "node"

28-113: Verify coverage of all necessary React modules for streaming

Since this PR implements real-time streams functionality, let's verify that all necessary React modules are covered by the resolution handlers.
✅ Verification successful
Let me check the content of these files to better understand the streaming implementations and module usage.
Based on the analysis of the files, I can now generate the final response:

Current React module resolution is sufficient for streaming needs

The current resolution handlers in `build.ts` adequately cover the React modules needed for streaming functionality:

- The base `react` and `react-dom` resolution is handled
- `react-dom/server` resolution includes environment-specific conditions (bun/node)
- The codebase uses standard streaming APIs (`renderToPipeableStream`, `renderToReadableStream`), which are available through these resolved modules

The resolution of `react-dom/client` for hydration and other streaming-related imports is handled through standard module resolution since they don't require special conditions.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for React streaming-related imports in the codebase
# Search for React streaming-related imports
echo "Searching for React streaming imports..."
rg -l "react-dom/client|react-dom/server\.edge|react-dom/server\.browser|react/cache|react/streaming"
# Search for custom streaming implementations
echo "Searching for custom streaming implementations..."
rg -l "createFromReadableStream|renderToReadableStream|renderToPipeableStream"
Length of output: 497
Script:
#!/bin/bash
# Check content of identified files
echo "=== Content of entry.client.tsx ==="
cat apps/webapp/app/entry.client.tsx
echo -e "\n=== Content of entry.server.tsx ==="
cat apps/webapp/app/entry.server.tsx
echo -e "\n=== Content of rsc.tsx ==="
cat references/nextjs-realtime/src/trigger/rsc.tsx
Length of output: 10390
function App() {
  return (
    <html>
      <head>
        <meta charSet="utf-8" />
        <meta name="viewport" content="width=device-width, initial-scale=1" />
        <title>My app</title>
      </head>
      <body></body>
    </html>
  );
}
🛠️ Refactor suggestion
Enhance accessibility and prevent hydration issues
The App component needs improvements:
- Missing lang attribute for accessibility
- Empty body could cause React hydration issues
function App() {
return (
- <html>
+ <html lang="en">
<head>
<meta charSet="utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1" />
<title>My app</title>
</head>
- <body></body>
+ <body>
+ <div id="root"></div>
+ </body>
</html>
);
}
const getWeather = async (location: string) => {
  await new Promise((resolve) => setTimeout(resolve, 2000));
  return "82°F️ ☀️";
};
Add error handling and remove hard-coded responses

The `getWeather` function needs improvements for production readiness:

- Missing error handling for potential network/API failures
- Uses hard-coded delay and weather response
Consider implementing actual weather API integration:
const getWeather = async (location: string) => {
+ try {
- await new Promise((resolve) => setTimeout(resolve, 2000));
- return "82°F️ ☀️";
+ const response = await fetch(`${process.env.WEATHER_API_URL}?location=${encodeURIComponent(location)}`);
+ if (!response.ok) {
+ throw new Error(`Weather API returned ${response.status}`);
+ }
+ return await response.json();
+ } catch (error) {
+ logger.error("Failed to fetch weather", { error, location });
+ throw error;
+ }
};
Committable suggestion skipped: line range outside the PR's diff.
while (true) {
  const { done, value } = await reader.read();

  if (done) {
    break;
  }

  logger.log("Received chunk", { value });
}
Improve stream reading implementation
The current implementation has several issues:
- Infinite loop without proper cleanup
- Missing error handling
- No memory management for large streams
Consider implementing these improvements:
+ try {
+ const chunks: Uint8Array[] = [];
+ const maxSize = 10 * 1024 * 1024; // 10MB limit
+ let totalSize = 0;
+
while (true) {
const { done, value } = await reader.read();
if (done) {
break;
}
+ totalSize += value.length;
+ if (totalSize > maxSize) {
+ throw new Error("Stream too large");
+ }
+
+ chunks.push(value);
logger.log("Received chunk", { value });
}
+ } catch (error) {
+ logger.error("Failed to read stream", { error });
+ throw error;
+ } finally {
+ reader.releaseLock();
+ }
const stream = await metadata.stream("openai", result.stream);

let text = "";

for await (const chunk of stream) {
  logger.log("Received chunk", { chunk });

  if (chunk.type === "text-delta") {
    text += chunk.textDelta;
  }
}

return { text, value: result.value };
Add error handling and timeout for stream processing
The stream processing needs additional safeguards:
- Missing error handling for stream processing
- No timeout protection for long-running streams
Consider implementing these safeguards:
- const stream = await metadata.stream("openai", result.stream);
+ const stream = await metadata.stream("openai", result.stream, {
+ timeout: 30000, // 30 second timeout
+ });
let text = "";
+ try {
for await (const chunk of stream) {
logger.log("Received chunk", { chunk });
if (chunk.type === "text-delta") {
text += chunk.textDelta;
}
}
+ } catch (error) {
+ logger.error("Stream processing failed", { error });
+ throw error;
+ }
Committable suggestion skipped: line range outside the PR's diff.
Actionable comments posted: 3
🧹 Outside diff range and nitpick comments (2)
references/nextjs-realtime/src/app/batches/[id]/ClientBatchRunDetails.tsx (1)

118-124: Add JSDoc documentation for the BatchRunTableWrapper component

Consider adding JSDoc documentation to improve code maintainability and provide better IDE support.

+/**
+ * Wrapper component for displaying batch run details with real-time updates
+ * @param {Object} props - Component props
+ * @param {string} props.batchId - The ID of the batch to display
+ * @param {string} props.publicAccessToken - The authentication token for API access
+ * @returns {JSX.Element} The rendered batch run table
+ */
 function BatchRunTableWrapper({
   batchId,
   publicAccessToken,
 }: {
   batchId: string;
   publicAccessToken: string;
 }) {

packages/react-hooks/src/hooks/useRealtime.ts (1)
425-441: Consider optimizing array operations for large datasets.
The current implementation creates new arrays on each update. For large datasets, this could impact performance.

Consider using a more efficient data structure or batching updates:

 function insertRunShapeInOrder<TTask extends AnyTask>(
   previousRuns: RealtimeRun<TTask>[],
   run: RealtimeRun<TTask>
 ) {
+  // For large datasets, consider maintaining a Map for O(1) lookups
+  const runsMap = new Map(previousRuns.map(r => [r.id, r]));
+
-  const existingRun = previousRuns.find((r) => r.id === run.id);
+  const existingRun = runsMap.has(run.id);
   if (existingRun) {
     return previousRuns.map((r) => (r.id === run.id ? run : r));
   }
   const runNumber = run.number;
+  // Consider binary search for better performance
   const index = previousRuns.findIndex((r) => r.number > runNumber);
   if (index === -1) {
     return [...previousRuns, run];
   }
   return [...previousRuns.slice(0, index), run, ...previousRuns.slice(index)];
 }
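The binary-search idea mentioned in the comment can be sketched independently of the hook's types; `RunLike` below is a stand-in for `RealtimeRun`, not the real interface:

```typescript
// Sketch: keep a list sorted by `number`, replacing an existing entry by id
// or inserting a new one at its sorted position via binary search — O(log n)
// to find the slot, versus findIndex's linear scan.
interface RunLike {
  id: string;
  number: number;
}

function insertSorted(runs: RunLike[], run: RunLike): RunLike[] {
  const existingIndex = runs.findIndex((r) => r.id === run.id);
  if (existingIndex !== -1) {
    const next = runs.slice();
    next[existingIndex] = run; // replace in place; order is unchanged
    return next;
  }
  // Binary search for the first element whose `number` exceeds run.number.
  let lo = 0;
  let hi = runs.length;
  while (lo < hi) {
    const mid = (lo + hi) >> 1;
    if (runs[mid].number > run.number) {
      hi = mid;
    } else {
      lo = mid + 1;
    }
  }
  return [...runs.slice(0, lo), run, ...runs.slice(lo)];
}
```

The replacement path still scans linearly by `id`; as the review notes, a `Map` keyed by id would bring that lookup to O(1) as well.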
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (2)
- packages/react-hooks/src/hooks/useRealtime.ts (1 hunks)
- references/nextjs-realtime/src/app/batches/[id]/ClientBatchRunDetails.tsx (3 hunks)
🔇 Additional comments (2)
references/nextjs-realtime/src/app/batches/[id]/ClientBatchRunDetails.tsx (2)
5-5
: LGTM: Import changes align with real-time functionality
The new imports support the real-time streaming functionality mentioned in the PR objectives and provide proper type safety.
Also applies to: 16-16
152-152
: LGTM: Clean prop forwarding implementation
The component correctly forwards the JWT as publicAccessToken, maintaining a clear authentication flow.
accessToken: publicAccessToken,
baseURL: process.env.NEXT_PUBLIC_TRIGGER_API_URL,
💡 Codebase verification
Environment variable configuration needs attention
The `NEXT_PUBLIC_TRIGGER_API_URL` environment variable is used across multiple components but lacks proper configuration documentation. While it's used in:
src/app/batches/[id]/ClientBatchRunDetails.tsx
src/components/TriggerButton.tsx
src/components/TriggerButtonWithStreaming.tsx
src/app/uploads/[id]/ClientUploadDetails.tsx
src/app/ai/[id]/ClientAiDetails.tsx
src/app/runs/[id]/ClientRunDetails.tsx
There is no `.env.example` file or documentation specifying its expected value. Please:
- Create a `.env.example` file in the `references/nextjs-realtime` directory
- Document the required environment variable and its expected format
- Update the README with environment setup instructions
🔗 Analysis chain
Verify the environment variable configuration
The code relies on `NEXT_PUBLIC_TRIGGER_API_URL`. Ensure this environment variable is properly configured in all deployment environments.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Check if the environment variable is defined in relevant config files
echo "Checking for NEXT_PUBLIC_TRIGGER_API_URL in environment files..."
fd -t f "^\.env" -x grep -l "NEXT_PUBLIC_TRIGGER_API_URL" {} \;
Length of output: 413
Script:
#!/bin/bash
# Check for environment variable usage in the codebase
echo "Checking for NEXT_PUBLIC_TRIGGER_API_URL usage in the codebase..."
rg "NEXT_PUBLIC_TRIGGER_API_URL" -l
# Check for any .env files including hidden ones
echo -e "\nChecking for .env files..."
fd -H "\.env" --no-ignore
# Look for environment setup documentation
echo -e "\nChecking for environment setup documentation..."
fd -t f "README|CONTRIBUTING|DEVELOPMENT|ENV" -i
Length of output: 5782
Script:
#!/bin/bash
# Check the .env.example files for the expected configuration
echo "Checking .env.example files for NEXT_PUBLIC_TRIGGER_API_URL configuration..."
cat references/nextjs-realtime/.env.example
# Check the README for environment setup instructions
echo -e "\nChecking README for environment setup instructions..."
cat references/nextjs-realtime/README.md
Length of output: 1896
useEffect(() => {
  if (typeof options?.enabled === "boolean" && !options.enabled) {
    return;
  }

  if (!runId) {
    return;
  }

  triggerRequest().finally(() => {});

  return () => {
    stop();
  };
}, [runId, stop, options?.enabled]);
Add missing dependencies to useEffect.
The `triggerRequest` function is used in the effect but not included in the dependency array. This could lead to stale closures.
Apply this fix:
- }, [runId, stop, options?.enabled]);
+ }, [runId, stop, options?.enabled, triggerRequest]);
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
 useEffect(() => {
   if (typeof options?.enabled === "boolean" && !options.enabled) {
     return;
   }

   if (!runId) {
     return;
   }

   triggerRequest().finally(() => {});

   return () => {
     stop();
   };
-}, [runId, stop, options?.enabled]);
+}, [runId, stop, options?.enabled, triggerRequest]);
for await (const part of subscription.withStreams<TStreams>()) {
  if (part.type === "run") {
    mutateRunData(part.run);
  } else {
    streamQueue.add({
      type: part.type,
      // @ts-ignore
      chunk: part.chunk,
    });
  }
}
Handle potential race conditions in stream processing.
The stream processing loop doesn't handle the case where the component unmounts while processing is in progress. This could lead to memory leaks or state updates on unmounted components.
Apply this improvement:
+ let isMounted = true;
+
for await (const part of subscription.withStreams<TStreams>()) {
+ if (!isMounted) break;
+
if (part.type === "run") {
mutateRunData(part.run);
} else {
streamQueue.add({
type: part.type,
// @ts-ignore
chunk: part.chunk,
});
}
}
+
+ return () => {
+ isMounted = false;
+ };
Committable suggestion skipped: line range outside the PR's diff.
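The mounted-flag pattern suggested above can be isolated into a small helper. The `signal` shape here is a simplification standing in for a real `AbortSignal`, to keep the sketch self-contained:

```typescript
// Sketch: stop consuming an async iterable once the owner has torn down,
// so no callbacks fire after "unmount".
async function consumeUntilAborted<T>(
  source: AsyncIterable<T>,
  onPart: (part: T) => void,
  signal: { aborted: boolean }
): Promise<void> {
  for await (const part of source) {
    if (signal.aborted) break; // no state updates after teardown
    onPart(part);
  }
}
```

In the hook, the cleanup function returned from `useEffect` would flip `aborted` (or call `AbortController.abort()`), and the loop body would check it before calling `mutateRunData` or touching the stream queue.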
Actionable comments posted: 9
🧹 Outside diff range and nitpick comments (10)
packages/core/src/v3/jwt.ts (1)
Line range hint 38-49: Consider adding input validation and logging. The implementation is solid, but could benefit from:
- Input validation for `token` and `apiKey` parameters
- Logging of validation failures for security monitoring

 export async function validateJWT(token: string, apiKey: string): Promise<ValidationResult> {
+  if (!token || typeof token !== "string") {
+    return {
+      ok: false,
+      error: "Invalid token format",
+      code: "ERR_INVALID_INPUT"
+    };
+  }
+  if (!apiKey || typeof apiKey !== "string") {
+    return {
+      ok: false,
+      error: "Invalid API key format",
+      code: "ERR_INVALID_INPUT"
+    };
+  }
   const { jwtVerify, errors } = await import("jose");

apps/webapp/app/services/authorization.server.ts (2)
40-48: Update JSDoc examples to reflect new return type and actions
The code examples in the JSDoc comment are outdated:
- They show boolean return values instead of the new AuthorizationResult type
- They don't demonstrate the new "write" action

Consider updating the examples like this:

/**
 * @example
 * const entity = {
 *   type: "PUBLIC",
 *   scope: ["read:runs:run_1234", "write:tasks"]
 * };
 *
 * checkAuthorization(entity, "read", { runs: "run_1234" });
 * // Returns { authorized: true }
 *
 * checkAuthorization(entity, "write", { runs: "run_1234" });
 * // Returns { authorized: false, reason: "Public Access Token is missing required permissions..." }
 */
106-115: Improve error message readability
The error message construction could be simplified for better maintainability.

Consider extracting the message construction:

-  return {
-    authorized: false,
-    reason: `Public Access Token is missing required permissions. Permissions required for ${resourceValues
-      .map((v) => `'${action}:${resourceType}:${v}'`)
-      .join(", ")} but token has the following permissions: ${entity.scopes
-      .map((s) => `'${s}'`)
-      .join(
-        ", "
-      )}. See https://trigger.dev/docs/frontend/overview#authentication for more information.`,
-  };
+  const requiredPermissions = resourceValues
+    .map((v) => `'${action}:${resourceType}:${v}'`)
+    .join(", ");
+  const availablePermissions = entity.scopes
+    .map((s) => `'${s}'`)
+    .join(", ");
+
+  return {
+    authorized: false,
+    reason: [
+      "Public Access Token is missing required permissions.",
+      `Permissions required for ${requiredPermissions}`,
+      `but token has the following permissions: ${availablePermissions}`,
+      "See https://trigger.dev/docs/frontend/overview#authentication for more information."
+    ].join(" ")
+  };

apps/webapp/test/authorization.test.ts (3)
94-99: Document the reason for @ts-expect-error
While testing nonexistent resource types is valuable, please add a comment explaining why the `@ts-expect-error` is necessary and what type error it's suppressing. This helps maintainers understand the intentional type system bypass.

+ // @ts-expect-error Intentionally testing behavior with invalid resource type
  nonexistent: "resource",
193-268
: Consider adding test cases for super scope interactions
While the super scope test coverage is good, consider adding test cases for:
- Interaction between different super scopes (e.g., when one grants access but another denies)
- Precedence rules between super scopes
- Edge cases where super scopes might conflict
This would help document and verify the expected behavior in complex authorization scenarios.
279-299
: Consider expanding non-super scope test coverage
The current test cases cover the basic scenarios well. Consider adding tests for:
- Multiple specific scopes with partial matches
- Case sensitivity in scope matching
- Malformed scope strings
- Empty or whitespace-only scope strings
This would help catch potential edge cases in scope matching logic.
apps/webapp/app/services/realtime/jwtAuth.server.ts (3)
13-13
: Include Documentation Link in Error Message for Missing Subject
To maintain consistency and provide users with guidance, consider adding the documentation link to the error message when the JWT subject is missing.

Apply this diff to enhance the error message:

 throw json(
   {
-    error: "Invalid Public Access Token, missing subject."
+    error: "Invalid Public Access Token, missing subject. See https://trigger.dev/docs/frontend/overview#authentication for more information."
   },
   { status: 401 }
 );
19-19
: Include Documentation Link in Error Message for Missing Environment
Similarly, include the documentation link in the error message when the environment is not found to assist users.

Apply this diff to update the error message:

 throw json(
   {
-    error: "Invalid Public Access Token, environment not found."
+    error: "Invalid Public Access Token, environment not found. See https://trigger.dev/docs/frontend/overview#authentication for more information."
   },
   { status: 401 }
 );
24-52
: Ensure Comprehensive Handling of All JWT Validation Error Codes
Currently, only specific error codes are handled explicitly. To improve robustness, consider handling all known error codes returned by `validateJWT`. If `validateJWT` can return additional error codes, explicitly handle them to provide precise feedback. This enhances maintainability and makes future additions of error codes easier to manage.

Example:

 switch (result.code) {
   case "ERR_JWT_EXPIRED": {
     // Existing handling
   }
   case "ERR_JWT_CLAIM_INVALID": {
     // Existing handling
+  }
+  case "ERR_JWT_SIGNATURE_INVALID": {
+    // Handle invalid signature error
+  }
   default: {
     // Existing default handling
   }
 }

apps/webapp/app/services/routeBuilders/apiBuilder.server.ts (1)
499-505
: Ensure consistent authorization error handling
In `createActionApiRoute`, the authorization check does not provide detailed error messages like in `createLoaderApiRoute`. Capture the authorization result to include specific reasons for unauthorized access, enhancing client feedback.

Refactor the code as follows:

-  if (!checkAuthorization(authenticationResult, action, $resource, superScopes)) {
-    return wrapResponse(
-      request,
-      json({ error: "Unauthorized" }, { status: 403 }),
-      corsStrategy !== "none"
-    );
-  }
+  const authorizationResult = checkAuthorization(
+    authenticationResult,
+    action,
+    $resource,
+    superScopes
+  );
+  if (!authorizationResult.authorized) {
+    return wrapResponse(
+      request,
+      json(
+        {
+          error: `Unauthorized: ${authorizationResult.reason}`,
+          code: "unauthorized",
+          param: "access_token",
+          type: "authorization",
+        },
+        { status: 403 }
+      ),
+      corsStrategy !== "none"
+    );
+  }
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (7)
- apps/webapp/app/services/authorization.server.ts (3 hunks)
- apps/webapp/app/services/realtime/jwtAuth.server.ts (2 hunks)
- apps/webapp/app/services/routeBuilders/apiBuilder.server.ts (1 hunks)
- apps/webapp/test/authorization.test.ts (3 hunks)
- packages/core/src/v3/apiClient/stream.ts (2 hunks)
- packages/core/src/v3/jwt.ts (3 hunks)
- references/nextjs-realtime/src/app/runs/[id]/ClientRunDetails.tsx (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
- packages/core/src/v3/apiClient/stream.ts
- references/nextjs-realtime/src/app/runs/[id]/ClientRunDetails.tsx
🔇 Additional comments (8)
packages/core/src/v3/jwt.ts (2)
27-36
: Well-structured discriminated union type!
The `ValidationResult` type follows the Result pattern, providing clear type safety and explicit error handling. This makes it impossible to access the payload without first checking if the validation was successful.
50-63
: Verify error handling coverage.
Let's verify that we're handling all possible JWT validation error cases from the jose library.
✅ Verification successful
Since we don't have access to the `node_modules/jose` directory in the sandbox environment, let's try to analyze the JWT validation code in the codebase to understand the error handling better.
Let's examine the actual error handling code in the JWT verification function to understand the complete implementation.
Let's try a different approach to find the JWT verification implementation.
Let's try one final approach to locate the JWT verification code by searching for the error handling pattern we see in the original snippet.
Based on the code analysis, I can now provide a final response about the error handling coverage.
JWT validation error handling is properly implemented
The current implementation appropriately handles all possible error cases from the jose library by:
- Catching all `JOSEError` instances, which covers all JWT-specific errors (expired, malformed, invalid signature, etc.)
The error handling is complete and follows best practices by:
- Preserving the original error message and code from JOSE errors
- Providing a generic "ERR_UNKNOWN" code for unexpected errors
- Safely handling non-Error throws with an "Unknown error" message
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check all possible JWT validation error types from jose library
# Expected: List of all error types that should be handled
# Search for error class definitions in jose
rg -A 5 "export class \w+Error (extends|implements) (JOSEError|Error)" $(fd -t f . node_modules/jose/)
# Search for actual error instances in the codebase
rg "throw new \w+Error" $(fd -t f . node_modules/jose/)
Length of output: 19285
Script:
#!/bin/bash
# Let's examine the JWT validation code and its error handling
rg -A 10 "import.*jose" .
ast-grep --pattern 'jwtVerify($$$)'
Length of output: 3480
Script:
#!/bin/bash
# Get the complete JWT verification function implementation
ast-grep --pattern 'export async function verifyJWT($$$)'
Length of output: 59
Script:
#!/bin/bash
# Search for the JWT verification function and its error handling
rg -A 20 "verifyJWT" .
Length of output: 22
Script:
#!/bin/bash
# Search for the complete implementation containing the error handling pattern
rg -B 20 -A 10 "if \(error instanceof errors\.JOSEError\)" .
Length of output: 1748
apps/webapp/test/authorization.test.ts (3)
15-26
: LGTM! Well-structured test cases for PRIVATE entity
The test cases comprehensively verify the PRIVATE entity authorization behavior with proper assertions for both the `authorized` flag and the absence of error reasons across different resource types.
31-48
: LGTM! Thorough test coverage for deprecated PUBLIC entity
The test suite properly verifies that PUBLIC type is deprecated and consistently unauthorized, with appropriate type-guarded assertions for both the authorization status and the deprecation message.
133-177
: LGTM! Excellent edge case coverage
The test suite thoroughly covers important edge cases including empty objects, undefined scopes, and partial authorization scenarios. The assertions properly verify both success and failure cases with appropriate error messages.
apps/webapp/app/services/realtime/jwtAuth.server.ts (1)
1-1
: Import 'json' Correctly for Error Responses
The import of `json` from `@remix-run/server-runtime` enables appropriate JSON responses for error handling in Remix applications.
apps/webapp/app/services/routeBuilders/apiBuilder.server.ts (2)
250-298
: Refactor duplicated validation logic into a shared helper
As with `createLoaderApiRoute`, the validation logic here is repeated. Abstracting this code would enhance maintainability and reduce potential errors from inconsistent updates.
406-454
: Refactor duplicated validation logic into a shared helper
The validation code for `params`, `searchParams`, `headers`, and now `body` is again duplicated. Creating a shared utility function would streamline the codebase.
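One possible shape for that shared helper, sketched with a zod-style `safeParse` interface so it stays dependency-free. All names here (`parsePart`, `SchemaLike`, etc.) are illustrative assumptions, not the codebase's actual API:

```typescript
// Sketch: one generic parse step that each route builder can reuse for
// params, search params, headers, and body, instead of repeating the
// safeParse-and-400 boilerplate four times.
interface SafeParseResult<T> {
  success: boolean;
  data?: T;
  error?: unknown;
}

interface SchemaLike<T> {
  safeParse(input: unknown): SafeParseResult<T>;
}

type ParseOutcome<T> =
  | { ok: true; value: T }
  | { ok: false; label: string; error: unknown };

function parsePart<T>(
  label: string, // e.g. "Params Error", "Query Error", "Headers Error"
  schema: SchemaLike<T> | undefined,
  input: unknown
): ParseOutcome<T | undefined> {
  if (!schema) return { ok: true, value: undefined }; // schemas are optional
  const parsed = schema.safeParse(input);
  return parsed.success
    ? { ok: true, value: parsed.data as T }
    : { ok: false, label, error: parsed.error };
}
```

A route builder would then call `parsePart` once per request part and map any `ok: false` outcome to the shared 400 response, keeping the error shape identical across loaders and actions.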
 }

 return {
   environment,
-  claims,
+  claims: result.payload,
🛠️ Refactor suggestion
Correct the Property Name from 'claims' to 'payload'
In the return statement, the property name `claims` should accurately reflect the data being returned. If `result.payload` contains the JWT payload, consider renaming `claims` to `payload` for clarity.
Apply this diff to update the property name:
return {
environment,
- claims: result.payload,
+ payload: result.payload,
};
Update any references to `claims` accordingly to maintain consistency.
Committable suggestion skipped: line range outside the PR's diff.
const result = await validateJWT(token, environment.apiKey);

if (!result.ok) {
  switch (result.code) {
    case "ERR_JWT_EXPIRED": {
      throw json(
        {
          error:
            "Public Access Token has expired. See https://trigger.dev/docs/frontend/overview#authentication for more information.",
        },
        { status: 401 }
      );
    }
    case "ERR_JWT_CLAIM_INVALID": {
      throw json(
        {
          error: `Public Access Token is invalid: ${result.error}. See https://trigger.dev/docs/frontend/overview#authentication for more information.`,
        },
        { status: 401 }
      );
    }
    default: {
      throw json(
        {
          error:
            "Public Access Token is invalid. See https://trigger.dev/docs/frontend/overview#authentication for more information.",
        },
        { status: 401 }
      );
    }
  }
Wrap JWT Validation in Try-Catch to Handle Unexpected Errors
If `validateJWT` might throw unexpected exceptions, wrap the call in a try-catch block to prevent unhandled promise rejections and ensure consistent error responses.
Apply this diff to add error handling:
+try {
const result = await validateJWT(token, environment.apiKey);
if (!result.ok) {
switch (result.code) {
// Existing error handling cases
}
}
+} catch (error) {
+ // Log the error internally
+ console.error("Unexpected error during JWT validation:", error);
+ // Respond with a generic error message
+ throw json(
+ {
+ error: "An unexpected error occurred during authentication. Please try again later.",
+ },
+ { status: 500 }
+ );
+}
This ensures that any unforeseen errors are handled gracefully.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
try {
  const result = await validateJWT(token, environment.apiKey);

  if (!result.ok) {
    switch (result.code) {
      case "ERR_JWT_EXPIRED": {
        throw json(
          {
            error:
              "Public Access Token has expired. See https://trigger.dev/docs/frontend/overview#authentication for more information.",
          },
          { status: 401 }
        );
      }
      case "ERR_JWT_CLAIM_INVALID": {
        throw json(
          {
            error: `Public Access Token is invalid: ${result.error}. See https://trigger.dev/docs/frontend/overview#authentication for more information.`,
          },
          { status: 401 }
        );
      }
      default: {
        throw json(
          {
            error:
              "Public Access Token is invalid. See https://trigger.dev/docs/frontend/overview#authentication for more information.",
          },
          { status: 401 }
        );
      }
    }
  }
} catch (error) {
  // Log the error internally
  console.error("Unexpected error during JWT validation:", error);
  // Respond with a generic error message
  throw json(
    {
      error: "An unexpected error occurred during authentication. Please try again later.",
    },
    { status: 500 }
  );
}
let parsedParams: any = undefined;
if (paramsSchema) {
  const parsed = paramsSchema.safeParse(params);
  if (!parsed.success) {
    return wrapResponse(
      request,
      json(
        { error: "Params Error", details: fromZodError(parsed.error).details },
        { status: 400 }
      ),
      corsStrategy !== "none"
    );
  }
  parsedParams = parsed.data;
}

let parsedSearchParams: any = undefined;
if (searchParamsSchema) {
  const searchParams = Object.fromEntries(new URL(request.url).searchParams);
  const parsed = searchParamsSchema.safeParse(searchParams);
  if (!parsed.success) {
    return wrapResponse(
      request,
      json(
        { error: "Query Error", details: fromZodError(parsed.error).details },
        { status: 400 }
      ),
      corsStrategy !== "none"
    );
  }
  parsedSearchParams = parsed.data;
}

let parsedHeaders: any = undefined;
if (headersSchema) {
  const rawHeaders = Object.fromEntries(request.headers);
  const headers = headersSchema.safeParse(rawHeaders);
  if (!headers.success) {
    return wrapResponse(
      request,
      json(
        { error: "Headers Error", details: fromZodError(headers.error).details },
        { status: 400 }
      ),
      corsStrategy !== "none"
    );
  }
  parsedHeaders = headers.data;
}
🛠️ Refactor suggestion
Refactor duplicated validation logic into a shared helper
The parsing and validation code for `params`, `searchParams`, and `headers` is repeated across `createLoaderApiRoute`, `createLoaderPATApiRoute`, and `createActionApiRoute`. Consider abstracting this logic into a shared utility function to reduce code duplication and improve maintainability.
  });
  return wrapResponse(request, result, corsStrategy !== "none");
} catch (error) {
  console.error("Error in API route:", error);
🛠️ Refactor suggestion
Replace `console.error` with `logger.error` for consistent logging
To ensure consistent logging and better log management across the application, replace `console.error` with `logger.error`.
Apply this diff:
- console.error("Error in API route:", error);
+ logger.error("Error in API route:", error);
Also applies to: 518-518
    return json({ error: "Request body too large" }, { status: 413 });
  }
}
🛠️ Refactor suggestion
Wrap the response with `wrapResponse` to apply CORS headers
The response returned when the request body is too large is not wrapped with `wrapResponse`, which may lead to missing CORS headers when `corsStrategy` is enabled. Wrap the response to ensure headers are correctly applied.
Modify the code as follows:
- return json({ error: "Request body too large" }, { status: 413 });
+ return wrapResponse(
+ request,
+ json({ error: "Request body too large" }, { status: 413 }),
+ corsStrategy !== "none"
+ );
Committable suggestion skipped: line range outside the PR's diff.
return new Response(null, { status: 405 });
} |
🛠️ Refactor suggestion
Wrap the response with `wrapResponse` to apply CORS headers
The `405 Method Not Allowed` response is not wrapped with `wrapResponse`, which may omit necessary CORS headers. Ensure consistent response handling by wrapping all responses.
Modify the code as follows:
- return new Response(null, { status: 405 });
+ return wrapResponse(
+ request,
+ new Response(null, { status: 405 }),
+ corsStrategy !== "none"
+ );
Committable suggestion skipped: line range outside the PR's diff.
if (maxContentLength) {
  const contentLength = request.headers.get("content-length");

  if (!contentLength || parseInt(contentLength) > maxContentLength) {
    return json({ error: "Request body too large" }, { status: 413 });
  }
}
Validate `Content-Length` header correctly to handle invalid values
If the `Content-Length` header is missing or contains invalid data (e.g., non-numeric), `parseInt` will return `NaN`, and the comparison will not work as intended. Ensure that `contentLength` is a valid number before comparing it to `maxContentLength`.
You can modify the code as follows:
if (maxContentLength) {
const contentLength = request.headers.get("content-length");
- if (!contentLength || parseInt(contentLength) > maxContentLength) {
+ const parsedContentLength = parseInt(contentLength, 10);
+ if (!contentLength || isNaN(parsedContentLength) || parsedContentLength > maxContentLength) {
return wrapResponse(
request,
json({ error: "Request body too large" }, { status: 413 }),
corsStrategy !== "none"
);
}
}
Committable suggestion skipped: line range outside the PR's diff.
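The header-validation logic from the suggestion can be factored into a small pure helper, which also makes the `NaN` case easy to unit-test. The function name is an illustration, not existing code:

```typescript
// Sketch: parse a Content-Length header defensively. Returns the byte count
// when it is a valid non-negative integer within the limit, and null
// otherwise (missing, non-numeric, negative, or over the limit).
function parseContentLength(
  header: string | null,
  maxBytes: number
): number | null {
  if (header === null) return null;
  const parsed = Number.parseInt(header, 10);
  if (Number.isNaN(parsed) || parsed < 0 || parsed > maxBytes) {
    return null;
  }
  return parsed;
}
```

The route could then reject with `413` whenever `parseContentLength(request.headers.get("content-length"), maxContentLength)` returns `null`.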
Actionable comments posted: 13
🧹 Outside diff range and nitpick comments (13)
packages/core/src/v3/runMetadata/noopManager.ts (2)
6-11: Consider adding more descriptive error messages
The error messages could be more helpful by indicating this is a no-op implementation. This would make debugging easier when these methods are accidentally called.

 appendKey(key: string, value: DeserializedJson): void {
-  throw new Error("Method not implemented.");
+  throw new Error("appendKey not implemented in NoopRunMetadataManager");
 }
 removeFromKey(key: string, value: DeserializedJson): void {
-  throw new Error("Method not implemented.");
+  throw new Error("removeFromKey not implemented in NoopRunMetadataManager");
 }
Line range hint 1-20: Well-structured implementation of the null object pattern
The implementation provides a clean foundation for the real-time streaming functionality while maintaining the null object pattern. The new methods integrate well with the existing structure and follow TypeScript best practices.
packages/core/src/v3/runMetadata/index.ts (2)
72-78: Add input validation and error handling guidance
While the stream implementation aligns well with the PR objectives for real-time functionality, consider these improvements:
- Add input validation
- Document error handling expectations

Apply this diff:

 public stream<T>(
   key: string,
   value: AsyncIterable<T> | ReadableStream<T>,
   signal?: AbortSignal
 ): Promise<AsyncIterable<T>> {
+  if (!key) {
+    throw new Error("Key must not be empty");
+  }
+  if (!value) {
+    throw new Error("Stream value must not be null or undefined");
+  }
   return this.#getManager().stream(key, value, signal);
 }

Consider adding JSDoc comments to document error handling:

/**
 * Streams values for a given key
 * @throws {Error} If the stream operation fails
 * @throws {AbortError} If the operation is cancelled via AbortSignal
 */
Line range hint 1-85: Consider implementing consistent error handling strategy
The class would benefit from a consistent approach to error handling across all methods. Consider:
- Creating custom error types for different categories of errors (validation, runtime, etc.)
- Implementing a common error handling wrapper for manager method calls

Example implementation:

class MetadataError extends Error {
  constructor(message: string) {
    super(message);
    this.name = 'MetadataError';
  }
}

private wrapManagerCall<T>(operation: () => T): T {
  try {
    return operation();
  } catch (error) {
    throw new MetadataError(`Operation failed: ${error.message}`);
  }
}

🧰 Tools
🪛 Biome
[error] 53-53: The function should not return a value because its return type is void.
The function is here:
'void' signals the absence of value. The returned value is likely to be ignored by the caller.
(lint/correctness/noVoidTypeReturn)
[error] 57-57: The function should not return a value because its return type is void.
The function is here:
'void' signals the absence of value. The returned value is likely to be ignored by the caller.
(lint/correctness/noVoidTypeReturn)
[error] 69-69: The function should not return a value because its return type is void.
The function is here:
'void' signals the absence of value. The returned value is likely to be ignored by the caller.
(lint/correctness/noVoidTypeReturn)
packages/core/src/v3/runMetadata/manager.ts (3)
9-10: Document stream limit constants
Consider adding JSDoc comments explaining the rationale behind these limits and their implications on system resources.

+/** Maximum number of streams that can be active simultaneously */
 const MAXIMUM_ACTIVE_STREAMS = 2;
+/** Maximum total number of streams allowed for a single run */
 const MAXIMUM_TOTAL_STREAMS = 5;
`83-126`: Consider adding type validation in appendKey
The method handles array operations but doesn't validate the types of values being appended. Consider adding type validation to ensure consistent data structures.
```diff
 public appendKey(key: string, value: DeserializedJson) {
   if (!this.runId) {
     return;
   }

+  // Validate value type if needed
+  if (value === null || value === undefined) {
+    throw new Error(`Cannot append null or undefined value to key: ${key}`);
+  }

   let nextStore: Record<string, DeserializedJson> | undefined = this.store
```
Line range hint `287-302`: Consider adding retry logic to flush method
The flush method is critical for persisting metadata. Consider adding retry logic for transient failures.
```diff
 public async flush(requestOptions?: ApiRequestOptions): Promise<void> {
   if (!this.runId || !this.store || !this.hasChanges) {
     return;
   }

+  const maxRetries = 3;
+  let retryCount = 0;
+
+  while (retryCount < maxRetries) {
     try {
       this.hasChanges = false;
       await this.apiClient.updateRunMetadata(
         this.runId,
         { metadata: this.store },
         requestOptions
       );
+      return;
     } catch (error) {
       this.hasChanges = true;
+      if (retryCount === maxRetries - 1) {
         throw error;
+      }
+      retryCount++;
+      await new Promise(resolve => setTimeout(resolve, 1000 * retryCount));
     }
+  }
 }
```
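The retry-and-backoff shape suggested above can be exercised on its own. The following is a minimal sketch, not code from this PR: `withRetries` and `flushOnce` are illustrative stand-ins for the manager's flush call.

```typescript
// Retries an async operation with linearly growing backoff,
// rethrowing the last error once attempts are exhausted.
async function withRetries<T>(
  operation: () => Promise<T>,
  maxRetries = 3,
  baseDelayMs = 10
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxRetries; attempt++) {
    try {
      return await operation();
    } catch (error) {
      lastError = error;
      if (attempt === maxRetries) break;
      // Backoff grows with the attempt number, mirroring `1000 * retryCount` above.
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * attempt));
    }
  }
  throw lastError;
}

// Example: a stand-in flush that fails twice before succeeding.
let calls = 0;
async function flushOnce(): Promise<string> {
  calls++;
  if (calls < 3) throw new Error("transient failure");
  return "flushed";
}

withRetries(flushOnce).then((result) => {
  console.log(result, "after", calls, "attempts"); // flushed after 3 attempts
});
```

One design note: re-setting `hasChanges = true` on failure (as in the diff) is what makes retrying safe here, since a later flush can pick up the unsent state.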
Line range hint `332-361`: Add input validation and documentation
Consider these improvements:
- Add JSDoc documentation explaining the options parameter and its usage
- Add validation for filteredKeys to ensure they are valid strings
```diff
+/**
+ * Pretty prints a packet with optional key filtering
+ * @param rawData The data to pretty print
+ * @param dataType The type of data (e.g., 'application/json')
+ * @param options Options for controlling the output
+ * @param options.filteredKeys Array of keys to exclude from the output
+ * @returns A formatted string representation of the data
+ */
 export async function prettyPrintPacket(
   rawData: any,
   dataType?: string,
   options?: ReplacerOptions
 ): Promise<string> {
+  if (options?.filteredKeys) {
+    if (!Array.isArray(options.filteredKeys) ||
+        !options.filteredKeys.every(k => typeof k === 'string')) {
+      throw new Error('filteredKeys must be an array of strings');
+    }
+  }
```
`368-400`: Consider edge cases in type conversions
The replacer handles various types well, but consider these improvements:
- Map keys are implicitly converted to strings, which might not be desired for all use cases
- Circular references could cause stack overflow
```diff
 function makeSafeReplacer(options?: ReplacerOptions) {
+  const seen = new WeakSet();
   return function replacer(key: string, value: any) {
     // Check if the key should be filtered out
     if (options?.filteredKeys?.includes(key)) {
       return undefined;
     }

+    // Handle circular references
+    if (typeof value === 'object' && value !== null) {
+      if (seen.has(value)) {
+        return '[Circular]';
+      }
+      seen.add(value);
+    }

     // if it is a Map, convert it to an object
     if (value instanceof Map) {
       const obj: Record<string, any> = {};
       value.forEach((v, k) => {
-        obj[k] = v;
+        // Preserve key type information
+        const keyStr = typeof k === 'object' ? JSON.stringify(k) : String(k);
+        obj[keyStr] = v;
       });
       return obj;
     }

     return value;
   };
 }
```

packages/core/src/v3/apiClient/runStream.ts (2)
`45-45`: Add JSDoc comment to document the RealtimeRun type
The `RealtimeRun` type alias would benefit from documentation explaining its purpose and relationship to `TaskRunShape`. Add this documentation above the type:
```diff
+/**
+ * Represents a real-time run for a specific task type.
+ * This is an alias for TaskRunShape to provide more semantic meaning in real-time contexts.
+ */
 export type RealtimeRun<TTask extends AnyTask> = TaskRunShape<TTask>;
```
`78-92`: Consider destructuring options for clarity
The options object construction could be more explicit about which properties are being passed through.
```diff
-  const $options: RunSubscriptionOptions = {
-    provider: {
-      async onShape(callback) {
-        return zodShapeStream(SubscribeRunRawShape, url, callback, options);
-      },
-    },
-    streamFactory: new SSEStreamSubscriptionFactory(
-      getEnvVar("TRIGGER_STREAM_URL", getEnvVar("TRIGGER_API_URL")) ?? "https://api.trigger.dev",
-      {
-        headers: options?.headers,
-        signal: options?.signal,
-      }
-    ),
-    ...options,
-  };
+  const { headers, signal, client, closeOnComplete } = options ?? {};
+  const $options: RunSubscriptionOptions = {
+    provider: {
+      async onShape(callback) {
+        return zodShapeStream(SubscribeRunRawShape, url, callback, { headers, signal, client });
+      },
+    },
+    streamFactory: new SSEStreamSubscriptionFactory(
+      getEnvVar("TRIGGER_STREAM_URL", getEnvVar("TRIGGER_API_URL")) ?? "https://api.trigger.dev",
+      { headers, signal }
+    ),
+    headers,
+    signal,
+    client,
+    closeOnComplete,
+  };
```
`142-147`: Consider reducing logged authorization details
The debug log contains potentially sensitive information (scopes). Consider logging only necessary information or masking sensitive details.
```diff
 logger.debug("Checking authorization", {
   action,
   resource: $resource,
-  superScopes,
-  scopes: authenticationResult.scopes,
+  hasScopes: authenticationResult.scopes.length > 0,
 });
```
`181-190`: Enhance error handling with specific error types
The catch-all error handler could be more informative. Consider creating specific error types for different failure scenarios.
```diff
 } catch (error) {
   if (error instanceof Response) {
     return wrapResponse(request, error, corsStrategy !== "none");
   }
+  if (error instanceof Error) {
+    logger.error("API route error:", { error: error.message, stack: error.stack });
+    return wrapResponse(
+      request,
+      json({ error: error.message }, { status: 500 }),
+      corsStrategy !== "none"
+    );
+  }
   return wrapResponse(
     request,
     json({ error: "Internal Server Error" }, { status: 500 }),
     corsStrategy !== "none"
   );
 }
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (11)
- `apps/webapp/app/presenters/v3/SpanPresenter.server.ts` (1 hunks)
- `apps/webapp/app/services/routeBuilders/apiBuilder.server.ts` (1 hunks)
- `packages/core/src/v3/apiClient/runStream.ts` (6 hunks)
- `packages/core/src/v3/runMetadata/index.ts` (1 hunks)
- `packages/core/src/v3/runMetadata/manager.ts` (6 hunks)
- `packages/core/src/v3/runMetadata/noopManager.ts` (1 hunks)
- `packages/core/src/v3/runMetadata/types.ts` (1 hunks)
- `packages/core/src/v3/utils/ioSerialization.ts` (9 hunks)
- `packages/core/test/runStream.test.ts` (1 hunks)
- `packages/core/test/standardMetadataManager.test.ts` (1 hunks)
- `packages/trigger-sdk/src/v3/metadata.ts` (3 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
- packages/core/src/v3/runMetadata/types.ts
- packages/core/test/standardMetadataManager.test.ts
🧰 Additional context used
🪛 Biome
packages/core/src/v3/runMetadata/index.ts
[error] 53-53: The function should not return a value because its return type is void.
The function is here:
'void' signals the absence of value. The returned value is likely to be ignored by the caller.
(lint/correctness/noVoidTypeReturn)
[error] 57-57: The function should not return a value because its return type is void.
The function is here:
'void' signals the absence of value. The returned value is likely to be ignored by the caller.
(lint/correctness/noVoidTypeReturn)
[error] 69-69: The function should not return a value because its return type is void.
The function is here:
'void' signals the absence of value. The returned value is likely to be ignored by the caller.
(lint/correctness/noVoidTypeReturn)
packages/core/src/v3/utils/ioSerialization.ts
[error] 322-322: Other switch clauses can erroneously access this declaration.
Wrap the declaration in a block to restrict its access to the switch clause.
The declaration is defined in this switch clause:
Unsafe fix: Wrap the declaration in a block.
(lint/correctness/noSwitchDeclarations)
packages/core/test/runStream.test.ts
[error] 569-576: Do not export from a test file.
(lint/suspicious/noExportsInTest)
🔇 Additional comments (16)
packages/core/src/v3/runMetadata/index.ts (1)
`52-58`: Previous review comments are still applicable
The issues with void return types, inconsistent access modifiers, and missing input validation were already identified in the previous review.
🧰 Tools
🪛 Biome
[error] 53-53: The function should not return a value because its return type is void.
The function is here:
'void' signals the absence of value. The returned value is likely to be ignored by the caller.
(lint/correctness/noVoidTypeReturn)
[error] 57-57: The function should not return a value because its return type is void.
The function is here:
'void' signals the absence of value. The returned value is likely to be ignored by the caller.
(lint/correctness/noVoidTypeReturn)
packages/trigger-sdk/src/v3/metadata.ts (1)
`28-32`: LGTM: New metadata operations properly exported
The new operations are consistently named and well-organized within the metadata object.
packages/core/src/v3/runMetadata/manager.ts (2)
`16-24`: LGTM! Well-structured property and constructor changes
The changes follow good practices:
- Properly typed Map for stream tracking
- Dependency injection through constructor

`263-285`: Fix timeout handling in waitForAllStreams
The current implementation has the same issue as mentioned in the past review comment. The timeout logic won't properly abort streams.
apps/webapp/app/presenters/v3/SpanPresenter.server.ts (1)
`213-213`: Verify the streams metadata filtering behavior.
The addition of filteredKeys: ["$$streams"]
aligns with the PR's real-time streams feature by preventing the display of internal stream data in the metadata output. However, we should verify that this filtering doesn't impact any existing metadata visualization requirements.
Let's verify the metadata structure and usage:
✅ Verification successful
Let me generate another script to get more specific information about the metadata handling:
The previous search had a syntax error for $$streams
, and we need to see the implementation of prettyPrintPacket
to understand how the filtering is applied. We also need to check for any other instances of stream-related metadata filtering.
Let me try one final verification to check the prettyPrintPacket
implementation:
Based on the gathered information, I can now generate the final response:
Filtering $$streams
from metadata is safe and appropriate
The filtering of $$streams
from metadata display is a correct implementation because:
- The
$$streams
key is an internal implementation detail for real-time streams functionality - No UI components or presenters directly depend on or display the
$$streams
metadata field - The filtering is only applied during presentation/display, preserving the original metadata structure
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```shell
#!/bin/bash
# Description: Check for any direct references to $$streams in metadata across the codebase
# and verify there are no UI components expecting this data

# Search for any direct references to $$streams in the codebase
echo "Checking for direct $$streams references:"
rg -l "\\$\\$streams"

# Look for metadata access patterns in UI components
echo "Checking metadata usage in UI components:"
fd -e tsx -e jsx | xargs rg "metadata" -A 3
```
Length of output: 9362
Script:
```shell
#!/bin/bash
# Check for prettyPrintPacket usage and metadata filtering patterns
echo "Checking prettyPrintPacket implementation and usage:"
ast-grep --pattern 'prettyPrintPacket($$$)'

echo "\nChecking metadata filtering patterns:"
rg "filteredKeys.*streams" -B2 -A2

echo "\nChecking metadata stream handling:"
rg "\\$\\$streams" --type ts --type tsx
```
Length of output: 2198
Script:
```shell
#!/bin/bash
# Check the implementation of prettyPrintPacket
echo "Checking prettyPrintPacket implementation:"
ast-grep --pattern 'function prettyPrintPacket($$$)' -l | xargs cat

# Check for any stream-related metadata handling
echo "\nChecking stream metadata handling:"
rg "\\$\\$streams" -g "*.{ts,tsx}" -B2 -A2

# Check for metadata type definitions
echo "\nChecking metadata type definitions:"
rg "type.*Metadata" -g "*.{ts,tsx}" -B1 -A3
```
Length of output: 15260
packages/core/src/v3/utils/ioSerialization.ts (4)
`10-10`: LGTM: Good architectural improvement
The addition of an optional `ApiClient` parameter enhances modularity through dependency injection, making the code more testable and flexible.
Also applies to: 40-44

`166-179`: LGTM: Proper parameter propagation
The client parameter is correctly propagated to the underlying `importPacket` function, maintaining the dependency injection pattern.

`217-228`: Skip: Past review comment still applies
The previous review comment about adding explicit error handling for missing client is still valid and applicable.

`421-421`: LGTM: Improved type safety
The explicit check for Buffer being a function improves type safety.
apps/webapp/app/services/routeBuilders/apiBuilder.server.ts (4)
`1-54`: LGTM! Well-structured type definitions.
The type definitions are well-organized and make effective use of TypeScript's conditional types to ensure type safety across the API route builders.

`309-309`: Replace `console.error` with `logger.error` for consistent logging.
As mentioned in past reviews, use `logger.error` for consistent logging across the application.

`398-404`: Improve content-length validation.
The current content-length validation needs improvement to handle invalid values correctly.

`532-536`: LGTM! Clean utility function implementation.
The wrapResponse function is well-implemented, providing a clean abstraction for CORS response handling.
packages/core/test/runStream.test.ts (3)
`11-88`: LGTM! Well-structured test implementations.
The test implementations provide clear and maintainable test doubles. The `DelayedTestShapeProvider` is particularly well-designed for testing async behavior with its time-based emission of shapes.

`90-568`: LGTM! Comprehensive test coverage.
The test suite thoroughly covers various scenarios including:
- Basic run subscription
- Payload and outputs handling
- Stream persistence with `closeOnComplete: false`
- Multiple streams and updates
- Edge cases and error conditions

`570-605`: 🛠️ Refactor suggestion
Move helper functions to a shared test utilities file.
The helper functions `convertAsyncIterableToArray` and `collectNResults` are well-implemented but should be moved to a shared location for better maintainability and reuse across test files.
Create a new file `test/utils.ts` and move both helper functions there:
```diff
-export async function convertAsyncIterableToArray<T>(iterable: AsyncIterable<T>): Promise<T[]> {
-  const result: T[] = [];
-  for await (const item of iterable) {
-    result.push(item);
-  }
-  return result;
-}
-
-async function collectNResults<T>(
-  iterable: AsyncIterable<T>,
-  count: number,
-  timeoutMs: number = 1000
-): Promise<T[]> {
-  // ... implementation ...
-}
```
```ts
// test/utils.ts
export async function convertAsyncIterableToArray<T>(iterable: AsyncIterable<T>): Promise<T[]> {
  const result: T[] = [];
  for await (const item of iterable) {
    result.push(item);
  }
  return result;
}

export async function collectNResults<T>(
  iterable: AsyncIterable<T>,
  count: number,
  timeoutMs: number = 1000
): Promise<T[]> {
  // ... implementation ...
}
```
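The body of `collectNResults` is elided above; one plausible shape for it — a hypothetical sketch, not the code from this PR — races the collection loop against a timeout:

```typescript
// Collects the first `count` items from an async iterable, rejecting if
// `timeoutMs` elapses first. Hypothetical implementation of the elided helper.
export async function collectNResults<T>(
  iterable: AsyncIterable<T>,
  count: number,
  timeoutMs: number = 1000
): Promise<T[]> {
  const results: T[] = [];
  let timer: ReturnType<typeof setTimeout> | undefined;

  const collect = (async () => {
    for await (const item of iterable) {
      results.push(item);
      if (results.length >= count) break;
    }
    return results;
  })();

  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(
      () =>
        reject(
          new Error(`Timed out after ${timeoutMs}ms with ${results.length}/${count} results`)
        ),
      timeoutMs
    );
  });

  try {
    return await Promise.race([collect, timeout]);
  } finally {
    // Avoid leaking the timer once either side of the race settles.
    if (timer) clearTimeout(timer);
  }
}

// Example usage with a simple async generator.
async function* numbers() {
  for (let i = 1; i <= 10; i++) yield i;
}

collectNResults(numbers(), 3).then((items) => {
  console.log(items); // [ 1, 2, 3 ]
});
```

Note that this sketch resolves with fewer than `count` items if the iterable ends early; a stricter version could reject in that case instead.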
```ts
appendKey(key: string, value: DeserializedJson): void {
  throw new Error("Method not implemented.");
}
removeFromKey(key: string, value: DeserializedJson): void {
  throw new Error("Method not implemented.");
}
incrementKey(key: string, value: number): void {
  throw new Error("Method not implemented.");
}
decrementKey(key: string, value: number): void {
  throw new Error("Method not implemented.");
}
stream<T>(key: string, value: AsyncIterable<T>): Promise<AsyncIterable<T>> {
  throw new Error("Method not implemented.");
}
```
🛠️ Refactor suggestion
Consider consistent error handling across all methods
The class has an inconsistency: `enterWithMetadata` has an empty implementation while all other methods throw "not implemented" errors. Consider making the behavior consistent across all methods.
```ts
incrementKey(key: string, value: number): void {
  throw new Error("Method not implemented.");
}
decrementKey(key: string, value: number): void {
  throw new Error("Method not implemented.");
}
```
💡 Codebase verification
Missing test coverage for NoopRunMetadataManager's increment/decrement operations
The `NoopRunMetadataManager` implements the `RunMetadataManager` interface, which requires `incrementKey` and `decrementKey` methods, but there are no tests covering these operations. Since this is a noop implementation that throws errors, tests should verify this behavior.
- Add test cases in the `packages/core/src/v3/runMetadata/` directory to verify that:
  - `incrementKey` throws a "Method not implemented" error
  - `decrementKey` throws a "Method not implemented" error
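The suggested coverage can be expressed with plain assertions. This is a sketch using a minimal stand-in class — the real `NoopRunMetadataManager` is not reproduced here, and `expectThrows` stands in for a test framework's `expect(...).toThrow()`:

```typescript
// Minimal stand-in mirroring the noop manager's increment/decrement behavior.
class NoopManagerStandIn {
  incrementKey(_key: string, _value: number): void {
    throw new Error("Method not implemented.");
  }
  decrementKey(_key: string, _value: number): void {
    throw new Error("Method not implemented.");
  }
}

// Asserts that `fn` throws an Error carrying exactly `message`.
function expectThrows(fn: () => void, message: string): void {
  try {
    fn();
  } catch (error) {
    if (error instanceof Error && error.message === message) return;
    throw new Error(`Unexpected error: ${String(error)}`);
  }
  throw new Error("Expected function to throw");
}

const manager = new NoopManagerStandIn();
expectThrows(() => manager.incrementKey("count", 1), "Method not implemented.");
expectThrows(() => manager.decrementKey("count", 1), "Method not implemented.");
console.log("noop increment/decrement behavior verified");
```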
🔗 Analysis chain
Verify test coverage for increment/decrement operations
These new methods for incrementing and decrementing numeric values should be covered by unit tests to ensure they behave consistently with other implementations of the `RunMetadataManager` interface.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for test files covering NoopRunMetadataManager
fd -e test.ts -e spec.ts | xargs rg "NoopRunMetadataManager.*increment|NoopRunMetadataManager.*decrement"
Length of output: 107
Script:
#!/bin/bash
# First, let's find the test files for NoopRunMetadataManager
fd -e test.ts -e spec.ts | xargs rg "NoopRunMetadataManager"
# Then, let's check the interface definition to understand the test requirements
fd RunMetadataManager.ts | xargs cat
Length of output: 99
Script:
#!/bin/bash
# Let's try a broader search to find the interface and test files
rg -l "RunMetadataManager" --type ts
# Also search for any test files in the same directory as noopManager.ts
fd . "$(dirname $(fd noopManager.ts))" -e test.ts -e spec.ts
Length of output: 393
Script:
#!/bin/bash
# Let's check the interface definition in types.ts
cat packages/core/src/v3/runMetadata/types.ts
# And check if there are any tests in the manager.ts file's tests
fd manager.test.ts -p packages/core/src/v3/runMetadata/ | xargs cat
Length of output: 1033
```ts
stream<T>(key: string, value: AsyncIterable<T>): Promise<AsyncIterable<T>> {
  throw new Error("Method not implemented.");
}
```
💡 Codebase verification
Add JSDoc documentation for the stream method and fix signature mismatch
The method signature in noopManager needs to be updated to match the interface definition and other implementations. Changes needed:
- Add the optional
signal?: AbortSignal
parameter - Update the value parameter type to
AsyncIterable<T> | ReadableStream<T>
- Add JSDoc documentation explaining the purpose and behavior
Example locations requiring attention:
packages/core/src/v3/runMetadata/noopManager.ts
: Update method signature and add documentation- Current signature:
stream<T>(key: string, value: AsyncIterable<T>)
- Required signature:
stream<T>(key: string, value: AsyncIterable<T> | ReadableStream<T>, signal?: AbortSignal)
🔗 Analysis chain
Add JSDoc documentation for the stream method
This new streaming capability is a significant feature aligned with the PR's real-time functionality objectives. Consider adding JSDoc documentation to explain:
- The purpose of the stream method
- Expected behavior in actual implementations
- Why a no-op implementation might be used
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Verify consistent method signature across implementations
ast-grep --pattern 'stream<$_>($_): Promise<AsyncIterable<$_>>'
# Check for existing documentation
rg -B 2 "stream<.*>" | rg "\/\*\*|\/\/"
Length of output: 185
Script:
#!/bin/bash
# Search for stream method implementations
ast-grep --pattern 'stream<$_>($_, $_): Promise<AsyncIterable<$_>>'
# Check for existing documentation with fixed regex
rg -B 2 'stream<.*>' | rg '(\*\*|//)'
# Alternative search for stream method and surrounding context
rg -B 3 -A 1 'stream<.*>'
Length of output: 1912
Script:
#!/bin/bash
# Let's check the implementation in manager.ts for documentation and signature
rg -B 5 -A 10 'public async stream<T>' packages/core/src/v3/runMetadata/manager.ts
# Check the interface definition in types.ts for documentation
rg -B 5 -A 5 'stream<T>' packages/core/src/v3/runMetadata/types.ts
# Look for any documentation in the SDK implementation
rg -B 5 -A 5 'async function stream<T>' packages/trigger-sdk/src/v3/metadata.ts
Length of output: 1185
```ts
removeFromKey(key: string, value: DeserializedJson): void {
  this.#getManager().removeFromKey(key, value);
}
```
🛠️ Refactor suggestion
Align removeFromKey implementation with appendKey
For consistency with appendKey, this method should:
- Have a public access modifier
- Include input validation
Apply this diff:
- removeFromKey(key: string, value: DeserializedJson): void {
+ public removeFromKey(key: string, value: DeserializedJson): void {
+ if (!key) {
+ throw new Error("Key must not be empty");
+ }
+ if (value === undefined || value === null) {
+ throw new Error("Remove value must not be null or undefined");
+ }
this.#getManager().removeFromKey(key, value);
}
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
public removeFromKey(key: string, value: DeserializedJson): void {
  if (!key) {
    throw new Error("Key must not be empty");
  }
  if (value === undefined || value === null) {
    throw new Error("Remove value must not be null or undefined");
  }
  this.#getManager().removeFromKey(key, value);
}
```
```ts
function incrementMetadataKey(key: string, value: number = 1) {
  runMetadata.incrementKey(key, value);
}
```
🛠️ Refactor suggestion
Add type validation for numeric operations
Both increment and decrement functions should validate their input to prevent runtime errors with non-numeric values.
Apply this validation to both functions:
function incrementMetadataKey(key: string, value: number = 1) {
+ if (!Number.isFinite(value)) {
+ throw new Error("Increment value must be a finite number");
+ }
runMetadata.incrementKey(key, value);
}
function decrementMetadataKey(key: string, value: number = 1) {
+ if (!Number.isFinite(value)) {
+ throw new Error("Decrement value must be a finite number");
+ }
runMetadata.decrementKey(key, value);
}
Also applies to: 139-141
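The reason `Number.isFinite` is the right guard here is that it rejects `NaN` and both infinities without coercing its argument (unlike the global `isFinite`). A small standalone check — the `assertFiniteIncrement` helper is illustrative, not code from this PR:

```typescript
// Guard matching the validation suggested in the diff above.
function assertFiniteIncrement(value: number): void {
  if (!Number.isFinite(value)) {
    throw new Error("Increment value must be a finite number");
  }
}

// Accepted: ordinary finite numbers.
assertFiniteIncrement(1);
assertFiniteIncrement(-2.5);
assertFiniteIncrement(0);

// Rejected: NaN and infinities, which would silently corrupt a counter.
for (const bad of [NaN, Infinity, -Infinity]) {
  try {
    assertFiniteIncrement(bad);
    console.log("unexpectedly accepted", bad);
  } catch {
    console.log("rejected", bad);
  }
}
```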
```ts
function safeParseJSON(data: string): unknown {
  try {
    return JSON.parse(data);
  } catch (error) {
    return data;
  }
}
```
🛠️ Refactor suggestion
Enhance error handling in safeParseJSON
The current implementation silently returns the raw data on parsing errors, which might mask issues.
function safeParseJSON(data: string): unknown {
try {
return JSON.parse(data);
} catch (error) {
+ console.warn(`Failed to parse JSON: ${error instanceof Error ? error.message : 'Unknown error'}`);
+ if (process.env.NODE_ENV === 'development') {
+ console.debug('Raw data:', data);
+ }
return data;
}
}
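The fall-through behavior — valid JSON is parsed, anything else comes back verbatim — is easy to see in isolation (a sketch of the function as it stands, minus the logging added above):

```typescript
function safeParseJSON(data: string): unknown {
  try {
    return JSON.parse(data);
  } catch {
    return data;
  }
}

console.log(safeParseJSON('{"a":1}'));  // { a: 1 }
console.log(safeParseJSON("[1,2,3]")); // [ 1, 2, 3 ]
console.log(safeParseJSON("not json")); // not json (returned unchanged)
```

This is why callers cannot distinguish "the payload was the string `not json`" from "parsing failed" — the motivation for the warning log suggested in the diff.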
```ts
createSubscription(runId: string, streamKey: string, baseUrl?: string): StreamSubscription {
  const url = `${baseUrl ?? this.baseUrl}/realtime/v1/streams/${runId}/${streamKey}`;
  return new SSEStreamSubscription(url, this.options);
}
```
🛠️ Refactor suggestion
Add input validation in createSubscription method
The `createSubscription` method should validate its input parameters before creating a new subscription.
createSubscription(runId: string, streamKey: string, baseUrl?: string): StreamSubscription {
+ if (!runId) {
+ throw new Error("runId is required");
+ }
+ if (!streamKey) {
+ throw new Error("streamKey is required");
+ }
const url = `${baseUrl ?? this.baseUrl}/realtime/v1/streams/${runId}/${streamKey}`;
return new SSEStreamSubscription(url, this.options);
}
```ts
if (run.metadata && "$$streams" in run.metadata && Array.isArray(run.metadata.$$streams)) {
  for (const streamKey of run.metadata.$$streams) {
    if (typeof streamKey !== "string") {
      continue;
    }
```
🛠️ Refactor suggestion
Improve type checking for stream metadata
The type checking for stream metadata could be more robust and provide better error handling.
- if (run.metadata && "$$streams" in run.metadata && Array.isArray(run.metadata.$$streams)) {
- for (const streamKey of run.metadata.$$streams) {
- if (typeof streamKey !== "string") {
- continue;
- }
+ const streams = run.metadata?.$$streams;
+ if (!streams || !Array.isArray(streams)) {
+ return;
+ }
+
+ for (const streamKey of streams) {
+ if (typeof streamKey !== "string") {
+ console.warn(`Invalid stream key type: ${typeof streamKey}`);
+ continue;
+ }
```ts
  TSearchParamsSchema extends z.AnyZodObject | undefined = undefined,
  THeadersSchema extends z.AnyZodObject | undefined = undefined
>(
  options: PATRouteBuilderOptions<TParamsSchema, TSearchParamsSchema, THeadersSchema>,
  handler: PATHandlerFunction<TParamsSchema, TSearchParamsSchema, THeadersSchema>
) {
  return async function loader({ request, params }: LoaderFunctionArgs) {
    const {
      params: paramsSchema,
      searchParams: searchParamsSchema,
      headers: headersSchema,
      corsStrategy = "none",
    } = options;

    if (corsStrategy !== "none" && request.method.toUpperCase() === "OPTIONS") {
      return apiCors(request, json({}));
    }

    try {
      const authenticationResult = await authenticateApiRequestWithPersonalAccessToken(request);

      if (!authenticationResult) {
        return wrapResponse(
          request,
          json({ error: "Invalid or Missing API key" }, { status: 401 }),
          corsStrategy !== "none"
        );
      }

      let parsedParams: any = undefined;
      if (paramsSchema) {
        const parsed = paramsSchema.safeParse(params);
        if (!parsed.success) {
          return wrapResponse(
            request,
            json(
              { error: "Params Error", details: fromZodError(parsed.error).details },
              { status: 400 }
            ),
            corsStrategy !== "none"
          );
        }
        parsedParams = parsed.data;
      }

      let parsedSearchParams: any = undefined;
      if (searchParamsSchema) {
        const searchParams = Object.fromEntries(new URL(request.url).searchParams);
        const parsed = searchParamsSchema.safeParse(searchParams);
        if (!parsed.success) {
          return wrapResponse(
            request,
            json(
              { error: "Query Error", details: fromZodError(parsed.error).details },
              { status: 400 }
            ),
            corsStrategy !== "none"
          );
        }
        parsedSearchParams = parsed.data;
      }

      let parsedHeaders: any = undefined;
      if (headersSchema) {
        const rawHeaders = Object.fromEntries(request.headers);
        const headers = headersSchema.safeParse(rawHeaders);
        if (!headers.success) {
          return wrapResponse(
            request,
            json(
              { error: "Headers Error", details: fromZodError(headers.error).details },
              { status: 400 }
            ),
            corsStrategy !== "none"
          );
        }
        parsedHeaders = headers.data;
      }

      const result = await handler({
        params: parsedParams,
        searchParams: parsedSearchParams,
        headers: parsedHeaders,
        authentication: authenticationResult,
        request,
      });
      return wrapResponse(request, result, corsStrategy !== "none");
    } catch (error) {
      console.error("Error in API route:", error);
      if (error instanceof Response) {
        return wrapResponse(request, error, corsStrategy !== "none");
      }
      return wrapResponse(
        request,
        json({ error: "Internal Server Error" }, { status: 500 }),
        corsStrategy !== "none"
      );
    }
  };
}
```
🛠️ Refactor suggestion
Consider extracting shared validation logic.
There's significant code duplication between `createLoaderPATApiRoute` and `createLoaderApiRoute` in the parameter validation logic. Consider extracting this into a shared utility function.
function validateRequestParams<T extends z.AnyZodObject>({
schema,
data,
errorType,
request,
corsStrategy,
}: {
schema: T;
data: unknown;
errorType: string;
request: Request;
corsStrategy: "all" | "none";
}): Response | { success: true; data: z.infer<T> } {
const parsed = schema.safeParse(data);
if (!parsed.success) {
return wrapResponse(
request,
json(
{ error: `${errorType} Error`, details: fromZodError(parsed.error).details },
{ status: 400 }
),
corsStrategy !== "none"
);
}
return { success: true, data: parsed.data };
}
if (authorization) {
  const { action, resource, superScopes } = authorization;
  const $resource = resource(parsedParams, parsedSearchParams, parsedHeaders);

  logger.debug("Checking authorization", {
    action,
    resource: $resource,
    superScopes,
    scopes: authenticationResult.scopes,
  });

  if (!checkAuthorization(authenticationResult, action, $resource, superScopes)) {
    return wrapResponse(
      request,
      json({ error: "Unauthorized" }, { status: 403 }),
      corsStrategy !== "none"
    );
  }
}
🛠️ Refactor suggestion
Maintain consistent authorization error messages.
The authorization error message in the action route is less detailed than in the loader routes. Consider keeping error reporting consistent:
- if (!checkAuthorization(authenticationResult, action, $resource, superScopes)) {
+ const authorizationResult = checkAuthorization(
+ authenticationResult,
+ action,
+ $resource,
+ superScopes
+ );
+ if (!authorizationResult.authorized) {
return wrapResponse(
request,
- json({ error: "Unauthorized" }, { status: 403 }),
+ json(
+ {
+ error: `Unauthorized: ${authorizationResult.reason}`,
+ code: "unauthorized",
+ param: "access_token",
+ type: "authorization",
+ },
+ { status: 403 }
+ ),
corsStrategy !== "none"
);
}
Committable suggestion skipped: line range outside the PR's diff.
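For context on what a scope-based check like this does, here is a hedged, self-contained sketch of one way such a `checkAuthorization` could work. The scope string format (`action:kind:id`), the resource shape, and the super-scope behavior are assumptions for illustration, not the actual trigger.dev implementation.

```typescript
// Hedged sketch of a scope-based authorization check. A "super scope"
// (e.g. "admin") grants the action on any resource; otherwise every
// referenced resource needs a matching scoped grant.
type Resource = { runs?: string; tasks?: string };

function checkAuthorization(
  scopes: string[],
  action: "read" | "write",
  resource: Resource,
  superScopes: string[] = []
): boolean {
  // Any held super scope short-circuits the per-resource check.
  if (superScopes.some((s) => scopes.includes(s))) return true;
  // e.g. { runs: "run_1" } with action "read" requires "read:runs:run_1".
  return Object.entries(resource).every(
    ([kind, id]) => id === undefined || scopes.includes(`${action}:${kind}:${id}`)
  );
}

console.log(checkAuthorization(["admin"], "read", { runs: "run_1" }, ["admin"])); // true
console.log(checkAuthorization(["read:runs:run_1"], "read", { runs: "run_1" })); // true
console.log(checkAuthorization(["read:runs:run_2"], "read", { runs: "run_1" })); // false
```

Returning a structured result (`{ authorized, reason }`) instead of a bare boolean, as the suggestion above proposes, would let routes surface the failing scope in the 403 response.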
Force-pushed from 0887e71 to c92ecbb
Add STREAM_ORIGIN to environment schema. Improve performance in RealtimeStreams by using TextDecoderStream for simpler text decoding and implementing batching of XADD commands for Redis streams. Limit stream size using MAXLEN option. Update environment variable repository with new variable type. Adjust import statements for Redis key and value types.
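The batching-with-MAXLEN idea from that commit message can be sketched as follows. This is a simplified, dependency-free illustration: `xaddBatch` is a stub standing in for a pipelined Redis client call, and the batch size and cap values are made up for the example, not the values used in RealtimeStreams.

```typescript
// Sketch: buffer decoded text chunks and flush them to a stream in one
// round trip instead of issuing one XADD per chunk, capping stream
// length the way XADD's MAXLEN option would.
type StreamEntry = { field: string; value: string };

const stream: StreamEntry[] = [];
const MAXLEN = 1000; // approximate cap on stream length

function xaddBatch(entries: StreamEntry[]) {
  // Real code would pipeline `XADD key MAXLEN ~ 1000 * field value`
  // commands; here we append and trim to keep the sketch self-contained.
  stream.push(...entries);
  if (stream.length > MAXLEN) stream.splice(0, stream.length - MAXLEN);
}

const buffer: StreamEntry[] = [];
const BATCH_SIZE = 10; // illustrative batch size

function enqueue(chunk: string) {
  buffer.push({ field: "data", value: chunk });
  if (buffer.length >= BATCH_SIZE) flush();
}

function flush() {
  if (buffer.length === 0) return;
  xaddBatch(buffer.splice(0, buffer.length));
}

for (let i = 0; i < 25; i++) enqueue(`chunk-${i}`);
flush(); // flush the partial tail batch
console.log(stream.length); // 25
```

Batching amortizes the per-command network round trip, and MAXLEN keeps an unbounded producer from growing the stream without limit.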
Force-pushed from c92ecbb to 722bb45
@trigger.dev/react-hooks
@trigger.dev/rsc
package that exports a build extension to add RSC support to trigger.dev builds
runs: true
permissions