Implement VS Code telemetry for BI Copilot Insights #1361
Conversation
Walkthrough
Adds telemetry instrumentation across the AI generation flow: new telemetry events/components, async telemetry gating, project metrics collection, feedback/generation-response helpers, AgentExecutor and stream context extended to track generationStartTime and usage, RPC hooks for review telemetry, and minor auth/telemetry-wrapper adjustments.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant AgentExecutor as Agent Executor
    participant StreamHandler as Stream Handler
    participant Telemetry as Telemetry System
    participant RPC as AI Panel / RPC
    User->>AgentExecutor: request generation
    AgentExecutor->>AgentExecutor: record generationStartTime
    AgentExecutor->>Telemetry: emit GENERATION_SUBMITTED (msgId, plan_mode, project metrics)
    AgentExecutor->>StreamHandler: start stream (pass generationStartTime, usage promise)
    StreamHandler->>AgentExecutor: partial results + usage updates
    alt User aborts
        User->>AgentExecutor: abort
        AgentExecutor->>Telemetry: emit GENERATION_ABORTED (start/end, modified files)
    else Stream error
        StreamHandler-->>AgentExecutor: error
        AgentExecutor->>Telemetry: emit GENERATION_FAILED (error, timings)
    else Success
        StreamHandler-->>AgentExecutor: complete (resolve usage)
        AgentExecutor->>Telemetry: emit GENERATION_COMPLETED (metrics, tokens, timings)
    end
    RPC->>Telemetry: emit KEPT/DISCARD on review actions
    AgentExecutor->>User: deliver results
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
🚥 Pre-merge checks: ✅ 3 passed | ❌ 2 failed (1 warning, 1 inconclusive)
Actionable comments posted: 7
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
workspaces/ballerina/ballerina-extension/src/features/telemetry/index.ts (1)
79-99: Type mismatch: loginMethod and biIntelId can be undefined.
Both getLoginMethod() and getBiIntelId() return Promise<T | undefined>, but the return type declares { [key: string]: string }. When these values are undefined, the telemetry properties object will contain non-string values, violating the type contract.
🔧 Proposed fix using nullish coalescing

```diff
 const userType = await getLoginMethod();
 const biIntelId = await getBiIntelId();
 return {
     ...params,
     'ballerina.version': extension ? extension.ballerinaVersion : '',
     'scope': component,
     'idpId': process.env.VSCODE_CHOREO_USER_IDP_ID ? process.env.VSCODE_CHOREO_USER_IDP_ID : '',
     'isWSO2User': isWSO2User ? 'true' : 'false',
     'timeZone': Intl.DateTimeFormat().resolvedOptions().timeZone,
     'AnonymousUser': isAnonymous ? 'true' : 'false',
     'correlationId': CORRELATION_ID,
     'component': CHOREO_COMPONENT_ID,
     'project': CHOREO_PROJECT_ID,
     'org': CHOREO_ORG_ID,
-    'loginMethod': userType,
-    'biIntelId': biIntelId,
+    'loginMethod': userType ?? '',
+    'biIntelId': biIntelId ?? '',
 };
```
🤖 Fix all issues with AI agents
In `@workspaces/ballerina/ballerina-extension/src/features/ai/utils/feedback.ts`:
- Around line 33-40: The telemetry payload currently sends the full chat
transcript via the chatThread property; instead remove or stop serializing full
messages and replace it with non-PII summaries like chatThreadCount (use
content.messages.length), chatThreadSize (total characters or JSON length), and
an optional chatThreadPreview that is a truncated/redacted string of the first N
characters or first message; alternatively gate full transcript behind explicit
consent. Update the object construction (the block that sets
feedbackType/feedbackMessage/hasFeedbackText/feedbackTextLength/hasChatThread/chatThread)
to drop chatThread and add chatThreadCount, chatThreadSize, and
chatThreadPreview/redaction logic, and ensure hasChatThread remains consistent
with content.messages.length.
- Around line 27-41: The submitFeedback function currently calls the async
sendTelemetryEvent without awaiting it and lacks a success return value; update
submitFeedback to await sendTelemetryEvent (reference: submitFeedback and
sendTelemetryEvent) inside the try block and then return true after the awaited
call, keeping the existing catch to return false on error so the declared
Promise<boolean> is honored.
In
`@workspaces/ballerina/ballerina-extension/src/features/ai/utils/generation-response.ts`:
- Around line 41-44: Update the JSDoc for the discard-telemetry comment in
generation-response.ts: change "discard" to "discards" in the summary sentence
and correct the `@param` messageId description to say "The message identifier for
the discarded generation" (instead of "kept generation") so the documentation
accurately reflects the discard function and the messageId parameter.
In
`@workspaces/ballerina/ballerina-extension/src/features/telemetry/activator.ts`:
- Around line 38-59: The switch cases in the langClient.onNotification handler
declare const variables directly inside cases which violates
noSwitchDeclarations; wrap the TM_EVENT_TYPE_ERROR and
TM_EVENT_TYPE_FEATURE_USAGE cases in their own block scopes (e.g., add { ... }
around the case bodies) so the consts errorEvent and usageEvent are declared
inside those blocks; keep the existing calls to getTelemetryProperties, props
assignments, and commented reporter.sendTelemetryEvent lines unchanged, only add
the braces around the case bodies to prevent cross-case variable leakage.
In
`@workspaces/ballerina/ballerina-extension/src/features/telemetry/common/project-metrics.ts`:
- Around line 25-75: The getProjectMetrics function currently reads every .bal
file fully with fs.promises.readFile to count lines; change it to (1) add a
simple cache (e.g., a module-level Map keyed by workspacePath or "global" with
stored {fileCount,lineCount,timestamp}) with a configurable TTL and
invalidate/update the cache when TTL expires or when a file-watcher notices
changes, and (2) replace full-file reads inside both code paths (the
workspacePath-specific branch and the workspaceFolders branch) with streamed
line counting using fs.createReadStream + readline (or an equivalent buffered
reader) to compute line counts without loading the entire file into memory;
ensure you increment totalFileCount and totalLineCount the same way and update
the cache before returning.
In
`@workspaces/wso2-platform/wso2-platform-extension/src/logger/telemetry-wrapper.ts`:
- Around line 36-39: Fix the typo in the TODO comment near the commented-out
reporter.sendTelemetryErrorEvent call: change "telemerty" to "telemetry" in the
TODO text so it reads "Enable once when the language server telemetry complete"
adjacent to the reporter.sendTelemetryErrorEvent(...) block in
telemetry-wrapper.ts.
- Around line 44-48: Fix the typo "telemerty" → "telemetry" in the TODO comment
near the telemetry reporting block (the comment above
reporter.sendDangerousTelemetryErrorEvent) so the comment reads "Enable once
when the language server telemetry complete"; search for and correct any other
occurrences of "telemerty" in telemetry-wrapper.ts to ensure consistency.
```ts
export async function submitFeedback(content: SubmitFeedbackRequest): Promise<boolean> {
    try {
        sendTelemetryEvent(
            extension.ballerinaExtInstance,
            TM_EVENT_BALLERINA_AI_GENERATION_FEEDBACK,
            CMP_BALLERINA_AI_GENERATION,
            {
                feedbackType: content.positive ? 'positive' : 'negative',
                feedbackMessage: content.feedbackText || '',
                hasFeedbackText: content.feedbackText ? 'true' : 'false',
                feedbackTextLength: content.feedbackText?.length.toString() || '0',
                hasChatThread: content.messages.length > 0 ? 'true' : 'false',
                chatThread: JSON.stringify(content.messages),
            }
        );
```
🧩 Analysis chain
🏁 Scripts executed against wso2/vscode-extensions:

```sh
fd -n "\.ts$" workspaces/ballerina/ballerina-extension/src/features | head -20
fd -n "telemetry" workspaces/ballerina/ballerina-extension/src --type f
cat workspaces/ballerina/ballerina-extension/src/features/ai/utils/feedback.ts
fd "telemetry" workspaces/ballerina/ballerina-extension/src/features --type f
cat workspaces/ballerina/ballerina-extension/src/features/telemetry/index.ts
```
Await sendTelemetryEvent and return true on success.
Lines 32–36: sendTelemetryEvent is async but not awaited, so errors won't be caught by the try/catch. Additionally, the function declares Promise<boolean> as its return type but has no explicit return statement on the success path, resulting in undefined instead of true.
Proposed fix
```diff
 export async function submitFeedback(content: SubmitFeedbackRequest): Promise<boolean> {
     try {
-        sendTelemetryEvent(
+        await sendTelemetryEvent(
             extension.ballerinaExtInstance,
             TM_EVENT_BALLERINA_AI_GENERATION_FEEDBACK,
             CMP_BALLERINA_AI_GENERATION,
             {
                 feedbackType: content.positive ? 'positive' : 'negative',
                 feedbackMessage: content.feedbackText || '',
                 hasFeedbackText: content.feedbackText ? 'true' : 'false',
                 feedbackTextLength: content.feedbackText?.length.toString() || '0',
                 hasChatThread: content.messages.length > 0 ? 'true' : 'false',
                 chatThread: JSON.stringify(content.messages),
             }
         );
+        return true;
     } catch (error) {
         console.error("Error submitting feedback:", error);
         return false;
     }
 }
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
export async function submitFeedback(content: SubmitFeedbackRequest): Promise<boolean> {
    try {
        await sendTelemetryEvent(
            extension.ballerinaExtInstance,
            TM_EVENT_BALLERINA_AI_GENERATION_FEEDBACK,
            CMP_BALLERINA_AI_GENERATION,
            {
                feedbackType: content.positive ? 'positive' : 'negative',
                feedbackMessage: content.feedbackText || '',
                hasFeedbackText: content.feedbackText ? 'true' : 'false',
                feedbackTextLength: content.feedbackText?.length.toString() || '0',
                hasChatThread: content.messages.length > 0 ? 'true' : 'false',
                chatThread: JSON.stringify(content.messages),
            }
        );
        return true;
    } catch (error) {
        console.error("Error submitting feedback:", error);
        return false;
    }
}
```
🤖 Prompt for AI Agents
In `@workspaces/ballerina/ballerina-extension/src/features/ai/utils/feedback.ts`
around lines 27 - 41, The submitFeedback function currently calls the async
sendTelemetryEvent without awaiting it and lacks a success return value; update
submitFeedback to await sendTelemetryEvent (reference: submitFeedback and
sendTelemetryEvent) inside the try block and then return true after the awaited
call, keeping the existing catch to return false on error so the declared
Promise<boolean> is honored.
```ts
            {
                feedbackType: content.positive ? 'positive' : 'negative',
                feedbackMessage: content.feedbackText || '',
                hasFeedbackText: content.feedbackText ? 'true' : 'false',
                feedbackTextLength: content.feedbackText?.length.toString() || '0',
                hasChatThread: content.messages.length > 0 ? 'true' : 'false',
                chatThread: JSON.stringify(content.messages),
            }
```
Avoid sending the full chat transcript in telemetry properties.
chatThread serializes full user messages, which can include PII or large payloads. Prefer counts, sizes, or truncated/redacted content (or gate behind explicit consent) to keep telemetry compliant and lightweight.
🤖 Prompt for AI Agents
In `@workspaces/ballerina/ballerina-extension/src/features/ai/utils/feedback.ts`
around lines 33 - 40, The telemetry payload currently sends the full chat
transcript via the chatThread property; instead remove or stop serializing full
messages and replace it with non-PII summaries like chatThreadCount (use
content.messages.length), chatThreadSize (total characters or JSON length), and
an optional chatThreadPreview that is a truncated/redacted string of the first N
characters or first message; alternatively gate full transcript behind explicit
consent. Update the object construction (the block that sets
feedbackType/feedbackMessage/hasFeedbackText/feedbackTextLength/hasChatThread/chatThread)
to drop chatThread and add chatThreadCount, chatThreadSize, and
chatThreadPreview/redaction logic, and ensure hasChatThread remains consistent
with content.messages.length.
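As a rough illustration of the suggestion above, here is a sketch of the reworked telemetry properties. The property names chatThreadCount, chatThreadSize, and chatThreadPreview, as well as the PREVIEW_LIMIT cap, are hypothetical and not part of the existing codebase.

```ts
// Hypothetical replacement for the chatThread property: counts, sizes, and a
// truncated preview instead of the full transcript. PREVIEW_LIMIT is an assumed cap.
const PREVIEW_LIMIT = 200;
const serializedThread = JSON.stringify(content.messages);

const telemetryProps = {
    feedbackType: content.positive ? 'positive' : 'negative',
    feedbackMessage: content.feedbackText || '',
    hasFeedbackText: content.feedbackText ? 'true' : 'false',
    feedbackTextLength: content.feedbackText?.length.toString() || '0',
    hasChatThread: content.messages.length > 0 ? 'true' : 'false',
    chatThreadCount: content.messages.length.toString(),
    chatThreadSize: serializedThread.length.toString(),
    // Truncated, potentially redacted preview; drop entirely if consent is not given.
    chatThreadPreview: serializedThread.slice(0, PREVIEW_LIMIT),
};
```

Whether even a truncated preview is acceptable depends on the consent gating mentioned in the comment; the counts and sizes alone may be enough for analysis.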
```ts
/**
 * Sends a telemetry event when the user discard an AI-generated response.
 *
 * @param messageId - The message identifier for the kept generation
```
Fix copy-paste errors in JSDoc.
The documentation has two issues:
- Line 42: "discard" should be "discards" (grammar)
- Line 44: Says "for the kept generation" but this is the discard function
📝 Proposed fix
```diff
 /**
- * Sends a telemetry event when the user discard an AI-generated response.
+ * Sends a telemetry event when the user discards an AI-generated response.
  *
- * @param messageId - The message identifier for the kept generation
+ * @param messageId - The message identifier for the discarded generation
  */
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
/**
 * Sends a telemetry event when the user discards an AI-generated response.
 *
 * @param messageId - The message identifier for the discarded generation
 */
```
🤖 Prompt for AI Agents
In
`@workspaces/ballerina/ballerina-extension/src/features/ai/utils/generation-response.ts`
around lines 41 - 44, Update the JSDoc for the discard-telemetry comment in
generation-response.ts: change "discard" to "discards" in the summary sentence
and correct the `@param` messageId description to say "The message identifier for
the discarded generation" (instead of "kept generation") so the documentation
accurately reflects the discard function and the messageId parameter.
```diff
 langClient.onNotification('telemetry/event', async (event: LSTelemetryEvent) => {
     let props: { [key: string]: string; };
     switch (event.type) {
         case TM_EVENT_TYPE_ERROR:
             const errorEvent: LSErrorTelemetryEvent = <LSErrorTelemetryEvent>event;
-            props = getTelemetryProperties(ballerinaExtInstance, event.component, getMessageObject(TM_EVENT_TYPE_ERROR));
+            props = await getTelemetryProperties(ballerinaExtInstance, event.component,
+                getMessageObject(TM_EVENT_TYPE_ERROR));
             props["ballerina.langserver.error.description"] = errorEvent.message;
             props["ballerina.langserver.error.stacktrace"] = errorEvent.errorStackTrace;
             props["ballerina.langserver.error.message"] = errorEvent.errorMessage;
-            reporter.sendTelemetryEvent(TM_ERROR_LANG_SERVER, props);
+            // TODO: Enable once when the language server telemerty complete
+            // reporter.sendTelemetryEvent(TM_ERROR_LANG_SERVER, props);
             break;
         case TM_EVENT_TYPE_FEATURE_USAGE:
             const usageEvent: LSFeatureUsageTelemetryEvent = <LSFeatureUsageTelemetryEvent>event;
-            props = getTelemetryProperties(ballerinaExtInstance, event.component,
+            props = await getTelemetryProperties(ballerinaExtInstance, event.component,
                 getMessageObject(TM_EVENT_TYPE_FEATURE_USAGE));
             props["ballerina.langserver.feature.name"] = usageEvent.featureName;
             props["ballerina.langserver.feature.class"] = usageEvent.featureClass;
             props["ballerina.langserver.feature.message"] = usageEvent.featureMessage;
-            reporter.sendTelemetryEvent(TM_FEATURE_USAGE_LANG_SERVER, props);
+            // TODO: Enable once when the language server telemerty complete
+            // reporter.sendTelemetryEvent(TM_FEATURE_USAGE_LANG_SERVER, props);
```
🧩 Analysis chain
🏁 Scripts executed against wso2/vscode-extensions:

```sh
# Verify file exists and check content around lines 38-59
fd -n "activator.ts" -p "telemetry"
# Search for Biome config files in the repository
fd -n "biome" --type f | grep -E '\.(json|js|cjs)$'
# Find the activator.ts file
fd "activator.ts" workspaces/ballerina/
# Find all Biome config files
fd "biome" --type f
# Check if ballerina workspace has its own biome config
fd "biome.json" workspaces/ballerina/
# Read the telemetry activator.ts file to verify lines 38-59
wc -l workspaces/ballerina/ballerina-extension/src/features/telemetry/activator.ts
# Read the file content with line numbers, focusing on lines 38-65
sed -n '38,65p' workspaces/ballerina/ballerina-extension/src/features/telemetry/activator.ts | cat -n
# Read the root biome.json to check for noSwitchDeclarations rule
cat biome.json
# Also check if ballerina extension has its own biome.json that might override
ls -la workspaces/ballerina/ballerina-extension/ | grep biome
```
Wrap switch-case declarations in blocks (noSwitchDeclarations).
Biome enforces noSwitchDeclarations as part of recommended correctness rules. Both const errorEvent (line 42) and const usageEvent (line 52) are declared directly inside switch cases without blocks. Wrap each case in {} to avoid cross-case variable access and satisfy lint.
Proposed fix
```diff
 switch (event.type) {
-    case TM_EVENT_TYPE_ERROR:
-        const errorEvent: LSErrorTelemetryEvent = <LSErrorTelemetryEvent>event;
-        props = await getTelemetryProperties(ballerinaExtInstance, event.component,
-            getMessageObject(TM_EVENT_TYPE_ERROR));
-        props["ballerina.langserver.error.description"] = errorEvent.message;
-        props["ballerina.langserver.error.stacktrace"] = errorEvent.errorStackTrace;
-        props["ballerina.langserver.error.message"] = errorEvent.errorMessage;
-        // TODO: Enable once when the language server telemerty complete
-        // reporter.sendTelemetryEvent(TM_ERROR_LANG_SERVER, props);
-        break;
-    case TM_EVENT_TYPE_FEATURE_USAGE:
-        const usageEvent: LSFeatureUsageTelemetryEvent = <LSFeatureUsageTelemetryEvent>event;
-        props = await getTelemetryProperties(ballerinaExtInstance, event.component,
-            getMessageObject(TM_EVENT_TYPE_FEATURE_USAGE));
-        props["ballerina.langserver.feature.name"] = usageEvent.featureName;
-        props["ballerina.langserver.feature.class"] = usageEvent.featureClass;
-        props["ballerina.langserver.feature.message"] = usageEvent.featureMessage;
-        // TODO: Enable once when the language server telemerty complete
-        // reporter.sendTelemetryEvent(TM_FEATURE_USAGE_LANG_SERVER, props);
-        break;
+    case TM_EVENT_TYPE_ERROR: {
+        const errorEvent: LSErrorTelemetryEvent = <LSErrorTelemetryEvent>event;
+        props = await getTelemetryProperties(ballerinaExtInstance, event.component,
+            getMessageObject(TM_EVENT_TYPE_ERROR));
+        props["ballerina.langserver.error.description"] = errorEvent.message;
+        props["ballerina.langserver.error.stacktrace"] = errorEvent.errorStackTrace;
+        props["ballerina.langserver.error.message"] = errorEvent.errorMessage;
+        // TODO: Enable once when the language server telemerty complete
+        // reporter.sendTelemetryEvent(TM_ERROR_LANG_SERVER, props);
+        break;
+    }
+    case TM_EVENT_TYPE_FEATURE_USAGE: {
+        const usageEvent: LSFeatureUsageTelemetryEvent = <LSFeatureUsageTelemetryEvent>event;
+        props = await getTelemetryProperties(ballerinaExtInstance, event.component,
+            getMessageObject(TM_EVENT_TYPE_FEATURE_USAGE));
+        props["ballerina.langserver.feature.name"] = usageEvent.featureName;
+        props["ballerina.langserver.feature.class"] = usageEvent.featureClass;
+        props["ballerina.langserver.feature.message"] = usageEvent.featureMessage;
+        // TODO: Enable once when the language server telemerty complete
+        // reporter.sendTelemetryEvent(TM_FEATURE_USAGE_LANG_SERVER, props);
+        break;
+    }
```
🧰 Tools
🪛 Biome (2.3.13)
[error] 42-42: Other switch clauses can erroneously access this declaration.
Wrap the declaration in a block to restrict its access to the switch clause.
The declaration is defined in this switch clause:
Safe fix: Wrap the declaration in a block.
(lint/correctness/noSwitchDeclarations)
[error] 52-52: Other switch clauses can erroneously access this declaration.
Wrap the declaration in a block to restrict its access to the switch clause.
The declaration is defined in this switch clause:
Safe fix: Wrap the declaration in a block.
(lint/correctness/noSwitchDeclarations)
🤖 Prompt for AI Agents
In `@workspaces/ballerina/ballerina-extension/src/features/telemetry/activator.ts`
around lines 38 - 59, The switch cases in the langClient.onNotification handler
declare const variables directly inside cases which violates
noSwitchDeclarations; wrap the TM_EVENT_TYPE_ERROR and
TM_EVENT_TYPE_FEATURE_USAGE cases in their own block scopes (e.g., add { ... }
around the case bodies) so the consts errorEvent and usageEvent are declared
inside those blocks; keep the existing calls to getTelemetryProperties, props
assignments, and commented reporter.sendTelemetryEvent lines unchanged, only add
the braces around the case bodies to prevent cross-case variable leakage.
```ts
export async function getProjectMetrics(workspacePath?: string): Promise<ProjectMetrics> {
    // If a specific workspace path is provided, use it; otherwise use workspace folders
    if (workspacePath) {
        const files = await vscode.workspace.findFiles(
            new vscode.RelativePattern(workspacePath, '**/*.bal'),
            new vscode.RelativePattern(workspacePath, '**/target/**')
        );

        let totalFileCount = 0;
        let totalLineCount = 0;

        for (const fileUri of files) {
            try {
                totalFileCount++;
                const fileContent = await fs.promises.readFile(fileUri.fsPath, 'utf8');
                const lineCount = fileContent.split('\n').length;
                totalLineCount += lineCount;
            } catch (error) {
                console.warn(`Failed to read file ${fileUri.fsPath}:`, error);
            }
        }

        return {
            fileCount: totalFileCount,
            lineCount: totalLineCount
        };
    }

    const workspaceFolders = vscode.workspace.workspaceFolders;

    if (!workspaceFolders || workspaceFolders.length === 0) {
        return { fileCount: 0, lineCount: 0 };
    }
    const files = await vscode.workspace.findFiles(
        '**/*.bal',
        '**/target/**'
    );

    let totalFileCount = 0;
    let totalLineCount = 0;

    for (const fileUri of files) {
        try {
            totalFileCount++;
            const fileContent = await fs.promises.readFile(fileUri.fsPath, 'utf8');
            const lineCount = fileContent.split('\n').length;
            totalLineCount += lineCount;
        } catch (error) {
            console.warn(`Failed to read file ${fileUri.fsPath}:`, error);
        }
    }
```
🧩 Analysis chain
🏁 Scripts executed against wso2/vscode-extensions:

```sh
rg -n "getProjectMetrics" -g "*.ts" --max-count=50
cd workspaces/ballerina && find . -name "project-metrics.ts" -type f
sed -n '85,105p' workspaces/ballerina/ballerina-extension/src/features/ai/agent/index.ts
sed -n '440,460p' workspaces/ballerina/ballerina-extension/src/features/ai/agent/AgentExecutor.ts
sed -n '70,110p' workspaces/ballerina/ballerina-extension/src/features/ai/agent/index.ts
sed -n '1,75p' workspaces/ballerina/ballerina-extension/src/features/telemetry/common/project-metrics.ts
```
Consider optimizing metric collection for telemetry events.
The function reads every .bal file sequentially and loads its full content (via fs.promises.readFile()) just to count lines. This runs on query submission and on generation completion; while not ultra-frequent, it still incurs unnecessary I/O overhead. For telemetry purposes, consider: (1) caching metrics with a TTL or file-watcher invalidation to avoid repeated scans, or (2) counting lines without loading entire file contents into memory (e.g., buffered stream reading).
🤖 Prompt for AI Agents
In
`@workspaces/ballerina/ballerina-extension/src/features/telemetry/common/project-metrics.ts`
around lines 25 - 75, The getProjectMetrics function currently reads every .bal
file fully with fs.promises.readFile to count lines; change it to (1) add a
simple cache (e.g., a module-level Map keyed by workspacePath or "global" with
stored {fileCount,lineCount,timestamp}) with a configurable TTL and
invalidate/update the cache when TTL expires or when a file-watcher notices
changes, and (2) replace full-file reads inside both code paths (the
workspacePath-specific branch and the workspaceFolders branch) with streamed
line counting using fs.createReadStream + readline (or an equivalent buffered
reader) to compute line counts without loading the entire file into memory;
ensure you increment totalFileCount and totalLineCount the same way and update
the cache before returning.
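A minimal sketch of the cache-plus-streaming idea described above. The names CACHE_TTL_MS, metricsCache, countLinesStreaming, and getProjectMetricsCached are hypothetical, and the sketch assumes the ProjectMetrics interface and getProjectMetrics signature shown in this review.

```ts
import * as fs from 'fs';
import * as readline from 'readline';

const CACHE_TTL_MS = 5 * 60 * 1000; // assumed TTL; tune or replace with file-watcher invalidation
const metricsCache = new Map<string, ProjectMetrics & { timestamp: number }>();

// Count lines by streaming the file instead of loading it fully into memory.
async function countLinesStreaming(filePath: string): Promise<number> {
    const rl = readline.createInterface({ input: fs.createReadStream(filePath), crlfDelay: Infinity });
    let lines = 0;
    for await (const _line of rl) {
        lines++;
    }
    return lines;
}

export async function getProjectMetricsCached(workspacePath?: string): Promise<ProjectMetrics> {
    const key = workspacePath ?? 'global';
    const cached = metricsCache.get(key);
    if (cached && Date.now() - cached.timestamp < CACHE_TTL_MS) {
        return { fileCount: cached.fileCount, lineCount: cached.lineCount };
    }
    // Re-scan via the existing getProjectMetrics, which would itself switch its
    // per-file reads to countLinesStreaming as part of this change.
    const fresh = await getProjectMetrics(workspacePath);
    metricsCache.set(key, { ...fresh, timestamp: Date.now() });
    return fresh;
}
```

Note that readline counts a trailing partial line slightly differently from split('\n'), so the reported lineCount may differ by one per file; for telemetry that is usually acceptable.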
We don't use capital letters in the tags (keys). Please refer to the Jaeger screenshots in https://ballerina.io/learn/supported-observability-tools-and-platforms/jaeger/. The tag keys should use lowercase characters with `_` as the separator, for example isPlanMode -> plan_mode. When you have common names, use a hierarchical prefix (for example, generation.start_time) so they can be filtered consistently. Please refer to this blog as well.
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@workspaces/ballerina/ballerina-extension/src/features/telemetry/index.ts`:
- Around line 97-98: When setting telemetry properties 'user.login_method' and
'user.bi_intel_id' in the telemetry payload, guard against getLoginMethod() and
getBiIntelId() returning undefined by normalizing their values (e.g., default to
an empty string or null) before assignment; update the code around the telemetry
construction (where 'user.login_method' and 'user.bi_intel_id' are set) to call
getLoginMethod() and getBiIntelId(), check for undefined, and assign a safe
default instead of letting undefined serialize into the string "undefined".
🧹 Nitpick comments (1)
workspaces/ballerina/ballerina-extension/src/features/ai/agent/AgentExecutor.ts (1)
444-472: Consider adding generation.duration_ms for consistency with abort/error telemetry.
The abort (line 306) and error (line 419) telemetry events include generation.duration_ms, but the completion event does not. Adding this derived metric would improve consistency and simplify downstream analysis.
Proposed enhancement

```diff
 sendTelemetryEvent(
     extension.ballerinaExtInstance,
     TM_EVENT_BALLERINA_AI_GENERATION_COMPLETED,
     CMP_BALLERINA_AI_GENERATION,
     {
         'message.id': context.messageId,
         'generation.modified_files_count': context.modifiedFiles.length.toString(),
         'generation.start_time': context.generationStartTime.toString(),
         'generation.end_time': generationEndTime.toString(),
+        'generation.duration_ms': (generationEndTime - context.generationStartTime).toString(),
         'plan_mode': isPlanModeEnabled.toString(),
```
Force-pushed from d728fa1 to 3b4b541 (Compare)
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In
`@workspaces/ballerina/ballerina-extension/src/features/ai/agent/AgentExecutor.ts`:
- Around line 449-452: Awaiting context.usage can yield null/undefined and
directly accessing tokenUsage.inputTokens will throw; update the code around
context.usage in AgentExecutor (the tokenUsage handling) to guard against null
by assigning a safe default or using optional chaining—e.g., resolve tokenUsage
= (await context.usage) ?? {}; then derive inputTokens/outputTokens/totalTokens
with defaults (inputTokens = tokenUsage.inputTokens ?? 0, etc.) or use optional
chaining (tokenUsage?.inputTokens ?? 0) so inputTokens, outputTokens and
totalTokens never access properties on null.
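A minimal sketch of the guard the prompt asks for, assuming context.usage and the token field names used elsewhere in AgentExecutor.

```ts
// Illustrative guard: the usage promise may resolve to null/undefined, so never
// dereference it directly; fall back to zero for each token count.
const tokenUsage = await context.usage;
const inputTokens = tokenUsage?.inputTokens ?? 0;
const outputTokens = tokenUsage?.outputTokens ?? 0;
const totalTokens = tokenUsage?.totalTokens ?? inputTokens + outputTokens;
```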
🧹 Nitpick comments (1)
workspaces/ballerina/ballerina-extension/src/features/ai/agent/AgentExecutor.ts (1)
464-464: Consider using hierarchical naming for plan_mode.
The property 'plan_mode' is inconsistent with other properties that use hierarchical prefixes (e.g., 'generation.start_time', 'generation.end_time'). Based on reviewer feedback requesting hierarchical names for common fields, this should be 'generation.plan_mode' for filtering consistency.
♻️ Proposed fix for naming consistency

```diff
-    'plan_mode': isPlanModeEnabled.toString(),
+    'generation.plan_mode': isPlanModeEnabled.toString(),
```
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@workspaces/ballerina/ballerina-extension/src/features/ai/agent/index.ts`:
- Around line 93-112: The call to sendTelemetryEvent in agent/index.ts is not
awaited or error-handled, which can cause unhandled promise rejections; update
the call (the invocation that uses extension.ballerinaExtInstance,
TM_EVENT_BALLERINA_AI_GENERATION_SUBMITTED, CMP_BALLERINA_AI_GENERATION and the
telemetry payload) to either await the promise inside the surrounding async
function or explicitly handle rejections (e.g., append .catch(...) to log errors
via the existing logger/processLogger) so failures are captured without leaking
unhandled rejections.
In
`@workspaces/ballerina/ballerina-extension/src/rpc-managers/ai-panel/rpc-manager.ts`:
- Around line 466-468: sendGenerationKeptTelemetry is calling the async
sendTelemetryEvent without awaiting or handling rejections, which can cause
unhandled promise rejections; update the helper (sendGenerationKeptTelemetry in
generation-response.ts / rpc-manager.ts usage) to be async and await
sendTelemetryEvent or explicitly handle the returned promise with .catch(),
wrapping the call in try/catch to swallow/log errors via the existing logger and
ensure the helper returns a Promise<void> so callers (like the call in
rpc-manager.ts) are safe from unhandled errors.
```ts
// Get project metrics and chat history for telemetry
const projectMetrics = await getProjectMetrics(workspaceId);
const chatHistory = chatStateStorage.getChatHistoryForLLM(workspaceId, threadId);

// Send telemetry event for query submission
sendTelemetryEvent(
    extension.ballerinaExtInstance,
    TM_EVENT_BALLERINA_AI_GENERATION_SUBMITTED,
    CMP_BALLERINA_AI_GENERATION,
    {
        'message.id': config.generationId,
        'command': Command.Agent,
        'plan_mode': (params.isPlanMode ?? false).toString(),
        'project.files_before': projectMetrics.fileCount.toString(),
        'project.lines_before': projectMetrics.lineCount.toString(),
        'file_attachments': (params.fileAttachmentContents?.length > 0).toString(),
        'chat.has_history': (chatHistory.length > 0).toString(),
        'chat.history_length': chatHistory.length.toString(),
    }
);
```
Handle the async telemetry promise to avoid unhandled rejections.
Line 98 calls sendTelemetryEvent without awaiting or handling errors; failures won’t be caught by the surrounding try.
✅ Safer fire-and-forget (no blocking)
```diff
-    sendTelemetryEvent(
+    void sendTelemetryEvent(
         extension.ballerinaExtInstance,
         TM_EVENT_BALLERINA_AI_GENERATION_SUBMITTED,
         CMP_BALLERINA_AI_GENERATION,
         {
             'message.id': config.generationId,
             'command': Command.Agent,
             'plan_mode': (params.isPlanMode ?? false).toString(),
             'project.files_before': projectMetrics.fileCount.toString(),
             'project.lines_before': projectMetrics.lineCount.toString(),
             'file_attachments': (params.fileAttachmentContents?.length > 0).toString(),
             'chat.has_history': (chatHistory.length > 0).toString(),
             'chat.history_length': chatHistory.length.toString(),
         }
-    );
+    ).catch(err => console.warn('[Telemetry] submit failed', err));
```
🤖 Prompt for AI Agents
In `@workspaces/ballerina/ballerina-extension/src/features/ai/agent/index.ts`
around lines 93 - 112, The call to sendTelemetryEvent in agent/index.ts is not
awaited or error-handled, which can cause unhandled promise rejections; update
the call (the invocation that uses extension.ballerinaExtInstance,
TM_EVENT_BALLERINA_AI_GENERATION_SUBMITTED, CMP_BALLERINA_AI_GENERATION and the
telemetry payload) to either await the promise inside the surrounding async
function or explicitly handle rejections (e.g., append .catch(...) to log errors
via the existing logger/processLogger) so failures are captured without leaking
unhandled rejections.
```ts
// Send telemetry for generation kept
sendGenerationKeptTelemetry(latestReview.id);
```
Ensure telemetry helpers handle async errors.
Line 467 calls sendGenerationKeptTelemetry, which internally invokes async sendTelemetryEvent without awaiting. That can lead to unhandled promise rejections. Consider handling the promise inside the helper.
♻️ Suggested fix in generation-response.ts
```diff
-export function sendGenerationKeptTelemetry(messageId: string): void {
-    sendTelemetryEvent(
+export function sendGenerationKeptTelemetry(messageId: string): void {
+    void sendTelemetryEvent(
         extension.ballerinaExtInstance,
         TM_EVENT_BALLERINA_AI_GENERATION_KEPT,
         CMP_BALLERINA_AI_GENERATION,
         {
             'message.id': messageId,
         }
-    );
+    ).catch(err => console.warn('[Telemetry] kept failed', err));
 }
```
🤖 Prompt for AI Agents
In
`@workspaces/ballerina/ballerina-extension/src/rpc-managers/ai-panel/rpc-manager.ts`
around lines 466 - 468, sendGenerationKeptTelemetry is calling the async
sendTelemetryEvent without awaiting or handling rejections, which can cause
unhandled promise rejections; update the helper (sendGenerationKeptTelemetry in
generation-response.ts / rpc-manager.ts usage) to be async and await
sendTelemetryEvent or explicitly handle the returned promise with .catch(),
wrapping the call in try/catch to swallow/log errors via the existing logger and
ensure the helper returns a Promise<void> so callers (like the call in
rpc-manager.ts) are safe from unhandled errors.
Force-pushed from 3b4b541 to 36738e4 (Compare)
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@workspaces/ballerina/ballerina-extension/src/features/telemetry/index.ts`:
- Around line 67-77: sendTelemetryException is now async because it awaits
getTelemetryProperties, but all ~20 call sites call it fire-and-forget; decide
one behavior and make it consistent: either (A) preserve fire-and-forget by
removing async/await from sendTelemetryException (inline use then on
getTelemetryProperties or build properties synchronously) so the function
returns void and callers need not change, or (B) keep it async and update every
caller (e.g., in extension.ts, notebookController.ts, doc.ts, pack.ts, run.ts,
test.ts, etc.) to await sendTelemetryException or handle the returned Promise;
locate sendTelemetryException and getTelemetryProperties in telemetry/index.ts
to implement the chosen fix and update callers across the codebase accordingly.
- Around line 55-65: sendTelemetryEvent is now async but called fire-and-forget
everywhere; wrap the function body in a try-catch so any thrown/rejected
promises are handled inside the function (instead of relying on callers).
Specifically, in sendTelemetryEvent(...) catch errors from await
getTelemetryProperties(...) and
extension.telemetryReporter.sendTelemetryEvent(...) and log them via
extension.logger or extension.telemetryReporter as appropriate, then return;
keep the existing gating (isTelemetryEnabled / codeServerEnv /
shouldSendToAppInsights) intact so behavior doesn’t change except that
rejections are absorbed and reported.
🧹 Nitpick comments (3)
workspaces/ballerina/ballerina-extension/src/features/telemetry/common/project-metrics.ts (2)
20-23: Consider using snake_case for telemetry property names.
Per reviewer feedback in this PR, telemetry property/tag keys should use lowercase with _ separators (e.g., file_count, line_count) rather than camelCase. This aligns with OpenTelemetry naming best practices and simplifies filtering in observability tools.
♻️ Proposed naming change

```diff
 export interface ProjectMetrics {
-    fileCount: number;
-    lineCount: number;
+    file_count: number;
+    line_count: number;
 }
```

Note: This change will require updating all usages of these properties in consuming files (e.g., AgentExecutor.ts, agent/index.ts).
AgentExecutor.ts,agent/index.ts).
36-43: Consider parallel file reading for improved performance.Files are currently read sequentially. For workspaces with many
.balfiles, parallelizing reads (with a concurrency limit to avoid file descriptor exhaustion) can improve performance.♻️ Proposed parallel reading with concurrency limit
+const CONCURRENCY_LIMIT = 10; + +async function countLinesInFile(filePath: string): Promise<number> { + try { + const content = await fs.promises.readFile(filePath, 'utf8'); + return content.split('\n').length; + } catch (error) { + console.warn(`Failed to read file ${filePath}:`, error); + return 0; + } +} + export async function getProjectMetrics(workspacePath?: string): Promise<ProjectMetrics> { const includePattern = workspacePath ? new vscode.RelativePattern(workspacePath, '**/*.bal') : '**/*.bal'; const excludePattern = workspacePath ? new vscode.RelativePattern(workspacePath, '**/target/**') : '**/target/**'; const files = await vscode.workspace.findFiles(includePattern, excludePattern); - let totalLineCount = 0; - for (const fileUri of files) { - try { - const fileContent = await fs.promises.readFile(fileUri.fsPath, 'utf8'); - totalLineCount += fileContent.split('\n').length; - } catch (error) { - console.warn(`Failed to read file ${fileUri.fsPath}:`, error); - } - } + // Process files in batches to limit concurrency + let totalLineCount = 0; + for (let i = 0; i < files.length; i += CONCURRENCY_LIMIT) { + const batch = files.slice(i, i + CONCURRENCY_LIMIT); + const counts = await Promise.all(batch.map(f => countLinesInFile(f.fsPath))); + totalLineCount += counts.reduce((sum, c) => sum + c, 0); + } return { fileCount: files.length, lineCount: totalLineCount }; }workspaces/ballerina/ballerina-extension/src/features/telemetry/index.ts (1)
88-96: Consider standardizing telemetry property naming conventions.The new properties (
user.login_method,user.bi_intel_id) follow the recommended lowercase with dot/underscore separators. However, existing properties likeidpId,isWSO2User, andAnonymousUseruse camelCase/PascalCase, creating inconsistency. Consider a follow-up task to align all property names if downstream consumers can accommodate the change.
```diff
 export async function sendTelemetryEvent(extension: BallerinaExtension, eventName: string, componentName: string,
     customDimensions: { [key: string]: string; } = {}, measurements: { [key: string]: number; } = {}) {
     // temporarily disabled in codeserver due to GDPR issue
     if (extension.isTelemetryEnabled() && !extension.getCodeServerContext().codeServerEnv) {
-        extension.telemetryReporter.sendTelemetryEvent(eventName, getTelemetryProperties(extension, componentName,
-            customDimensions), measurements);
+        // Only send whitelisted AI telemetry events to Application Insights
+        if (shouldSendToAppInsights(eventName, componentName)) {
+            extension.telemetryReporter.sendTelemetryEvent(eventName, await getTelemetryProperties(extension, componentName,
+                customDimensions), measurements);
+        }
     }
 }
```
🧩 Analysis chain
🏁 Scripts executed against wso2/vscode-extensions:

```sh
rg -n --type=ts -C3 'sendTelemetryEvent\(' -g '!**/telemetry/index.ts'
rg -n 'getTelemetryProperties' --type=ts -A10 'workspaces/ballerina/ballerina-extension/src/features/telemetry/index.ts'
rg -n 'async function getLoginMethod|async function getBiIntelId' --type=ts -A15 'workspaces/ballerina/ballerina-extension/src/features/telemetry/'
rg -n 'getLoginMethod|getBiIntelId' --type=ts -B2 -A10 'workspaces/ballerina/ballerina-extension/src/features/telemetry/'
rg -n 'export.*function getLoginMethod|export.*function getBiIntelId' --type=ts -A15 'workspaces/ballerina/ballerina-extension/src/utils/ai/'
find workspaces/ballerina/ballerina-extension/src/utils -name "*auth*" -type f
cat workspaces/ballerina/ballerina-extension/src/utils/ai/auth.ts
rg -n 'export.*function checkDevantEnvironment' --type=ts -A20 'workspaces/ballerina/ballerina-extension/src/views/ai-panel/'
rg -n 'checkDevantEnvironment' --type=ts -A10 'workspaces/ballerina/ballerina-extension/src/views/'
cat -n workspaces/ballerina/ballerina-extension/src/features/telemetry/index.ts | head -80 | tail -35
```
Add error handling for the async function or update call sites.
The function is now async but all 30+ callers invoke it without await or .catch() handlers. While getTelemetryProperties has internal error handling in its dependencies, this fire-and-forget pattern is fragile and can lead to unhandled promise rejections if future changes introduce errors. Either wrap the function body in a try-catch to prevent rejections, or ensure callers handle the returned promise.
🤖 Prompt for AI Agents
In `@workspaces/ballerina/ballerina-extension/src/features/telemetry/index.ts`
around lines 55 - 65, sendTelemetryEvent is now async but called fire-and-forget
everywhere; wrap the function body in a try-catch so any thrown/rejected
promises are handled inside the function (instead of relying on callers).
Specifically, in sendTelemetryEvent(...) catch errors from await
getTelemetryProperties(...) and
extension.telemetryReporter.sendTelemetryEvent(...) and log them via
extension.logger or extension.telemetryReporter as appropriate, then return;
keep the existing gating (isTelemetryEnabled / codeServerEnv /
shouldSendToAppInsights) intact so behavior doesn’t change except that
rejections are absorbed and reported.
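One way to satisfy this, sketched under the assumption that the existing gating helpers (isTelemetryEnabled, getCodeServerContext, shouldSendToAppInsights, getTelemetryProperties) keep their current signatures:

```ts
export async function sendTelemetryEvent(extension: BallerinaExtension, eventName: string, componentName: string,
    customDimensions: { [key: string]: string } = {}, measurements: { [key: string]: number } = {}): Promise<void> {
    try {
        // temporarily disabled in codeserver due to GDPR issue
        if (extension.isTelemetryEnabled() && !extension.getCodeServerContext().codeServerEnv) {
            // Only send whitelisted AI telemetry events to Application Insights
            if (shouldSendToAppInsights(eventName, componentName)) {
                const props = await getTelemetryProperties(extension, componentName, customDimensions);
                extension.telemetryReporter.sendTelemetryEvent(eventName, props, measurements);
            }
        }
    } catch (err) {
        // Absorb failures so fire-and-forget call sites never surface an unhandled rejection.
        console.warn('[Telemetry] failed to send event', eventName, err);
    }
}
```

With this shape, callers can keep invoking the function without await; a rejected getTelemetryProperties only produces a warning instead of an unhandled rejection.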
```diff
 export async function sendTelemetryException(extension: BallerinaExtension, error: Error, componentName: string,
     params: { [key: string]: string } = {}) {
     // temporarily disabled in codeserver due to GDPR issue
     if (extension.isTelemetryEnabled() && !extension.getCodeServerContext().codeServerEnv) {
-        extension.telemetryReporter.sendTelemetryException(error, getTelemetryProperties(extension, componentName,
-            params));
+        // Only send whitelisted AI telemetry exceptions to Application Insights
+        if (shouldSendToAppInsights('', componentName)) {
+            extension.telemetryReporter.sendTelemetryException(error, await getTelemetryProperties(extension, componentName,
+                params));
+        }
     }
 }
```
🧩 Analysis chain
🏁 Scripts executed against wso2/vscode-extensions:

```sh
rg -n --type=ts -C2 'sendTelemetryException\(' -g '!**/telemetry/index.ts'
# Check a few representative call sites to see if they await sendTelemetryException
head -n 110 workspaces/ballerina/ballerina-extension/src/core/extension.ts | tail -n 40
# Check another one in project cmds
head -n 160 workspaces/ballerina/ballerina-extension/src/features/project/cmds/doc.ts | tail -n 60
# Check why sendTelemetryException is async - look at the full telemetry index implementation
cat -n workspaces/ballerina/ballerina-extension/src/features/telemetry/index.ts
```
All callers of sendTelemetryException are not awaiting the Promise—verify if fire-and-forget behavior is intentional.
The function is now async (awaiting getTelemetryProperties at line 73), but across all 20+ call sites—in extension.ts, notebookController.ts, project commands (doc.ts, pack.ts, run.ts, test.ts, etc.), and other files—the calls are made without await or any Promise handling. None of the callers wait for the telemetry operation to complete. If fire-and-forget is the intended behavior, the function should not be async; otherwise, callers must await to ensure telemetry data is collected before the containing block exits.
🤖 Prompt for AI Agents
In `@workspaces/ballerina/ballerina-extension/src/features/telemetry/index.ts`
around lines 67 - 77, sendTelemetryException is now async because it awaits
getTelemetryProperties, but all ~20 call sites call it fire-and-forget; decide
one behavior and make it consistent: either (A) preserve fire-and-forget by
removing async/await from sendTelemetryException (inline use then on
getTelemetryProperties or build properties synchronously) so the function
returns void and callers need not change, or (B) keep it async and update every
caller (e.g., in extension.ts, notebookController.ts, doc.ts, pack.ts, run.ts,
test.ts, etc.) to await sendTelemetryException or handle the returned Promise;
locate sendTelemetryException and getTelemetryProperties in telemetry/index.ts
to implement the chosen fix and update callers across the codebase accordingly.
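A sketch of option (A), keeping the function fire-and-forget from the caller's point of view by resolving the properties internally; it assumes the same helper signatures as above.

```ts
export function sendTelemetryException(extension: BallerinaExtension, error: Error, componentName: string,
    params: { [key: string]: string } = {}): void {
    // temporarily disabled in codeserver due to GDPR issue
    if (!extension.isTelemetryEnabled() || extension.getCodeServerContext().codeServerEnv) {
        return;
    }
    // Only send whitelisted AI telemetry exceptions to Application Insights
    if (!shouldSendToAppInsights('', componentName)) {
        return;
    }
    getTelemetryProperties(extension, componentName, params)
        .then(props => extension.telemetryReporter.sendTelemetryException(error, props))
        .catch(err => console.warn('[Telemetry] failed to send exception', err));
}
```

Existing fire-and-forget call sites stay untouched, and the returned-promise question disappears because the function returns void.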
Force-pushed from 3c37b33 to 0d54530 (Compare)
Purpose
Add telemetry tracking for BI Copilot AI features to collect usage metrics and insights.
Resolves wso2/product-ballerina-integrator#1627
Goals
Approach
Added the following telemetry events:
- `ballerina.ai.generation.submitted` - When user submits a prompt
- `ballerina.ai.generation.completed` - When generation succeeds
- `ballerina.ai.generation.failed` - When generation fails
- `ballerina.ai.generation.aborted` - When user stops generation
- `ballerina.ai.generation.kept` - When user accepts code
- `ballerina.ai.generation.discard` - When user declines code

Summary by CodeRabbit
New Features
Chores