Pre release candidate #272

base: feature/annotations
Conversation
- Introduced the Email Summarizer Plugin that automatically sends email summaries upon conversation completion.
- Implemented SMTP Email Service for sending emails, supporting HTML and plain text formats with TLS/SSL encryption.
- Added configuration options for SMTP settings in the .env.template and plugins.yml.template.
- Created comprehensive documentation for plugin development and usage, including a new plugin generation script.
- Enhanced testing coverage for the Email Summarizer Plugin and SMTP Email Service to ensure reliability and functionality.
- Removed the static PLUGINS dictionary and replaced it with a dynamic discovery mechanism for plugins.
- Implemented a new setup process for plugins, allowing for configuration via individual setup scripts.
- Added the Email Summarizer plugin with a dedicated setup script for SMTP configuration.
- Enhanced the main setup flow to support community plugins and their configuration.
- Cleaned up unused functions related to plugin configuration and streamlined the overall plugin setup process.
- Updated the .env.template to include new configuration options for the Home Assistant and Email Summarizer plugins, including server URLs, tokens, and additional settings.
- Refactored Docker Compose files to correctly mount plugin configuration paths.
- Introduced comprehensive documentation for plugin configuration architecture, detailing the separation of concerns for orchestration, settings, and secrets.
- Added individual configuration files for the Home Assistant and Email Summarizer plugins, ensuring proper management of non-secret settings and environment variable references.
- Improved the plugin loading process to merge configurations from multiple sources, enhancing flexibility and maintainability.
- Updated the plugin setup script to run interactively, enabling plugins to prompt for user input during configuration.
- Removed output capturing to facilitate real-time interaction and improved error messaging to include exit codes for better debugging.
- Introduced `setup_utils.py` containing functions for reading environment variables, prompting user input, and masking sensitive values.
- Refactored existing code in `wizard.py` and `init.py` to utilize these shared utilities, improving code reuse and maintainability.
- Updated documentation to include usage examples for the new utilities in plugin setup scripts, enhancing developer experience and clarity.
- Introduced a three-file separation for plugin configuration to improve security:
- `backends/advanced/.env` for secrets (gitignored)
- `config/plugins.yml` for orchestration with environment variable references
- `plugins/{plugin_id}/config.yml` for non-secret defaults
- Updated documentation to emphasize the importance of using `${ENV_VAR}` syntax for sensitive data and provided examples of correct usage.
- Enhanced the Email Summarizer plugin setup process to automatically update `config/plugins.yml` with environment variable references, ensuring secrets are not hardcoded.
- Added new fields to the User model for notification email management and improved error logging in user-related functions.
- Refactored audio chunk utilities to use a consistent method for fetching conversation metadata.
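The `${ENV_VAR}` reference expansion described in these changes can be sketched as follows. This is an illustrative assumption of how such expansion might work, not the actual implementation; the function name `expand_env_refs` and the recursive strategy are hypothetical.

```python
import os
import re

# Matches ${VAR_NAME} references in string config values.
_ENV_REF = re.compile(r"\$\{([A-Za-z_][A-Za-z0-9_]*)\}")

def expand_env_refs(config):
    """Recursively replace ${VAR} references in string values with
    values from os.environ; unknown variables are left untouched."""
    if isinstance(config, str):
        return _ENV_REF.sub(lambda m: os.environ.get(m.group(1), m.group(0)), config)
    if isinstance(config, dict):
        return {k: expand_env_refs(v) for k, v in config.items()}
    if isinstance(config, list):
        return [expand_env_refs(v) for v in config]
    return config

os.environ["SMTP_PASSWORD"] = "s3cret"
cfg = {"smtp": {"password": "${SMTP_PASSWORD}", "port": 587}}
print(expand_env_refs(cfg))  # {'smtp': {'password': 's3cret', 'port': 587}}
```

Leaving unknown references intact (rather than substituting an empty string) makes missing secrets visible downstream instead of silently producing blank credentials.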
- Added a new parameter `transcript_version_id` to the `open_conversation_job` function to support streaming transcript versioning.
- Enhanced error handling in `check_enrolled_speakers_job` and `recognise_speakers_job` to allow conversations to proceed even when the speaker service is unavailable, improving resilience.
- Updated `send_to_adv.py` to support dynamic WebSocket and HTTP protocols based on environment settings, enhancing configuration flexibility.
- Introduced a background task in `send_to_adv.py` to handle incoming messages from the backend, ensuring connection stability and logging interim results.
…nd error handling
- Updated the controller to filter out invalid settings instead of raising an error for each unknown key, allowing for more flexible input.
- Added a check to reject requests with no valid settings provided, enhancing robustness.
- Adjusted logging to reflect the filtered settings being saved.
…duplication and tracking
📝 Walkthrough

This PR refactors Chronicle's plugin system and job tracking infrastructure. It introduces a three-file separation for secrets management, replaces audio_uuid tracking with external_source_id/external_source_type for external file deduplication, transitions job management from session-based to client-based identification, and adds a new Email Summarizer plugin with SMTP support and interactive setup wizard. Plugin discovery and configuration loading are now dynamic via shared utilities.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant ConversationJob as Conversation Job Worker
    participant PluginRouter as Plugin Router
    participant EmailPlugin as Email Summarizer Plugin
    participant SMTP as SMTP Email Service
    participant MongoDB as MongoDB
    Client->>ConversationJob: trigger conversation.complete
    ConversationJob->>PluginRouter: dispatch_event(conversation.complete)
    PluginRouter->>EmailPlugin: on_conversation_complete(context)
    EmailPlugin->>MongoDB: _get_user_email(user_id)
    MongoDB-->>EmailPlugin: notification_email
    EmailPlugin->>EmailPlugin: _generate_summary(transcript)
    EmailPlugin->>EmailPlugin: _format_subject(created_at)
    EmailPlugin->>SMTP: send_email(to, subject, body_text, body_html)
    SMTP->>SMTP: _send_smtp(msg, to_email)
    SMTP-->>EmailPlugin: success/failure
    EmailPlugin-->>PluginRouter: PluginResult
    PluginRouter-->>ConversationJob: result
```

```mermaid
sequenceDiagram
    participant PluginService
    participant FileSystem as Plugin Directory
    participant PluginModule as Plugin Module
    participant OrchestrConfig as Orchestration Config
    participant EnvFile as .env File
    participant PluginInstance as Plugin Instance
    PluginService->>FileSystem: discover_plugins()
    FileSystem-->>PluginService: plugin list
    loop For each plugin
        PluginService->>PluginModule: import plugin class
        PluginModule-->>PluginService: BasePlugin subclass
        PluginService->>OrchestrConfig: read enabled, events, condition
        OrchestrConfig-->>PluginService: orchestration config
        PluginService->>PluginService: load_plugin_config(plugin_id, orch_config)
        PluginService->>EnvFile: expand ${ENV_VAR} references
        EnvFile-->>PluginService: expanded config
        PluginService->>PluginInstance: instantiate(merged_config)
        PluginInstance-->>PluginService: initialized plugin
    end
```
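The discovery step in the second diagram might look roughly like the sketch below. The directory layout and the `plugin.py`-per-directory convention are assumptions inferred from the walkthrough, not Chronicle's exact contract.

```python
from pathlib import Path
import tempfile

def discover_plugins(plugins_root: Path) -> list:
    """Return plugin ids for every subdirectory that ships a plugin.py.
    Non-directories and directories without plugin.py are skipped."""
    if not plugins_root.is_dir():
        return []
    return sorted(
        child.name
        for child in plugins_root.iterdir()
        if child.is_dir() and (child / "plugin.py").is_file()
    )

# Demo against a throwaway directory tree.
root = Path(tempfile.mkdtemp())
(root / "email_summarizer").mkdir()
(root / "email_summarizer" / "plugin.py").write_text("# plugin module\n")
(root / "notes.txt").write_text("not a plugin\n")
print(discover_plugins(root))  # ['email_summarizer']
```

Dynamic discovery of this kind replaces the static PLUGINS dictionary mentioned in the PR description: adding a new plugin is then just adding a directory, with no registry edit required.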
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes

🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 inconclusive)
@coderabbitai review

✅ Actions performed: Review triggered.
Actionable comments posted: 18
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
backends/advanced/src/advanced_omi_backend/controllers/websocket_controller.py (1)
418-424: Store streaming audio format for accurate finalize/flush.
`_finalize_streaming_session` reads `client_state.stream_audio_format`, but it's never set, so it falls back to defaults. If clients use non-default formats, finalize/flush can mishandle buffered audio.

🔧 Suggested fix
```diff
@@ async def _initialize_streaming_session(...):
-        # Store audio format in Redis session (not in ClientState)
+        # Store audio format in Redis session and keep a copy for finalize/flush
+        client_state.stream_audio_format = audio_format
         from advanced_omi_backend.services.audio_stream.producer import get_audio_stream_producer
@@ async def _finalize_streaming_session(...):
-        audio_format = getattr(client_state, 'stream_audio_format', {})
+        audio_format = getattr(client_state, 'stream_audio_format', {})
@@
-        if hasattr(client_state, 'stream_session_id'):
-            delattr(client_state, 'stream_session_id')
+        for attr in ('stream_session_id', 'stream_audio_format'):
+            if hasattr(client_state, attr):
+                delattr(client_state, attr)
```

Also applies to: 476-483, 510-513
backends/advanced/webui/src/pages/Queue.tsx (1)
209-212: Stale debug logging references `audio_uuid`.

The debug logging still references `job.meta?.audio_uuid`, which has been removed from the backend. This won't cause errors (it will just log `undefined`), but the log statement is misleading.

🔧 Proposed fix to remove stale reference
```diff
 openConvJobs.forEach(job => {
   console.log(`  Job ID: ${job.job_id}`);
   console.log(`  Status: ${job.status}`);
-  console.log(`  meta.audio_uuid: ${job.meta?.audio_uuid}`);
   console.log(`  meta.conversation_id: ${job.meta?.conversation_id}`);
 });
```

backends/advanced/src/advanced_omi_backend/controllers/queue_controller.py (1)
411-507: Fix return value when speaker recognition is disabled.

If `speaker_enabled` is false, `speaker_job` stays `None`, but the return block dereferences `speaker_job.id`, which will raise `AttributeError` and abort the pipeline. Return `None` (or omit the key) and update typing accordingly.

🐛 Proposed fix
```diff
-    return {
-        'speaker_recognition': speaker_job.id,
-        'memory': memory_job.id,
-        'title_summary': title_summary_job.id,
-        'event_dispatch': event_dispatch_job.id
-    }
+    return {
+        'speaker_recognition': speaker_job.id if speaker_job else None,
+        'memory': memory_job.id,
+        'title_summary': title_summary_job.id,
+        'event_dispatch': event_dispatch_job.id
+    }
```
🤖 Fix all issues with AI agents
In `@backends/advanced/.env.template`:
- Around line 104-106: The FROM_NAME value in the dotenv template can be split
by spaces during parsing; update the dotenv template so the FROM_NAME assignment
uses quotes (e.g., FROM_NAME="Chronicle AI") to ensure the entire value is read
as one string; locate the FROM_NAME entry in the dotenv template (the FROM_NAME
variable shown alongside FROM_EMAIL) and surround its value with double quotes.
In `@backends/advanced/Docs/plugin-configuration.md`:
- Around line 72-81: The HA_TOKEN example on the Docs/plugin-configuration.md
contains a JWT-like string that triggers secret scanners; replace the
token-looking example value (the HA_TOKEN entry) with a clearly non-secret
placeholder like "<YOUR_HOME_ASSISTANT_TOKEN>" or "REPLACE_WITH_TOKEN" so the
example is obviously not a real credential and will not be flagged by gitleaks.
- Around line 83-95: The fenced code block under the "Configuration Loading
Process" section in plugin-configuration.md is unlabeled and triggers
markdownlint MD040; add a language identifier (e.g., ```text) to the opening
fence so the block becomes a labeled fenced code block and close it with the
matching backticks, ensuring the block around the four-step list is updated
accordingly.
In `@backends/advanced/docs/plugin-development-guide.md`:
- Around line 41-47: The fenced code block showing the plugin directory tree
(the block that begins with ``` and lines "plugins/my_awesome_plugin/ ...
README.md") is missing a language tag causing MD040; update that fenced block to
use a language tag (use "text") so it starts with ```text and ends with ``` to
satisfy markdownlint and keep the directory tree unchanged.
In `@backends/advanced/scripts/create_plugin.py`:
- Around line 46-53: The script currently skips deletion when --force is set and
simply writes template files into plugin_dir (see plugin_dir.exists(),
plugin_dir.mkdir(...), plugin_name, force), leaving stale files; update the
logic so that when force is true and plugin_dir.exists() you recursively remove
the existing directory (e.g., use shutil.rmtree on plugin_dir) before creating
the fresh plugin_dir, optionally prompting the user for confirmation or honoring
a --yes/--force flag to skip confirmation, then proceed to
plugin_dir.mkdir(parents=True, exist_ok=True) and write the template files.
In `@backends/advanced/src/advanced_omi_backend/auth.py`:
- Around line 227-229: The three debug messages using logger.error that
reference existing_admin/id/user_id should be removed or downgraded to
logger.debug to avoid polluting error logs; update the calls around logger (in
the block that logs existing_admin.id and existing_admin.user_id) to either
delete those logger.error lines entirely or replace them with logger.debug(...)
and consolidate redundant output (omit the duplicate user_id log since
User.user_id returns str(self.id)). Ensure you only change the logging level or
remove the statements and do not alter the underlying existing_admin or User
model behavior.
In `@backends/advanced/src/advanced_omi_backend/controllers/audio_controller.py`:
- Around line 32-36: In generate_client_id replace the error-level PII log with
a debug-level, non-PII message: remove logging of user.email and instead log
only non-sensitive info such as the masked or truncated user id and its type
(e.g., logger.debug with user_id_suffix and type(user.id)) or remove logging
entirely; update the call in generate_client_id so it no longer emits email at
error level and uses logger.debug without PII.
In `@backends/advanced/src/advanced_omi_backend/models/conversation.py`:
- Around line 381-384: The factory is injecting "memories" and "memory_count"
into the dict for the Conversation model, but Conversation has no "memories"
attribute and memory_count is a `@computed_field` (derived from
active_memory.memory_count), so these keys are ignored by Pydantic/Beanie;
either remove the lines setting "memories": [] and "memory_count": 0 from the
factory that constructs Conversation, or if they were intentionally left for
backwards compatibility, add a clear comment in that factory explaining why
these unused keys are preserved (or instead add real fields to the Conversation
model if they must be persisted).
In
`@backends/advanced/src/advanced_omi_backend/plugins/email_summarizer/email_service.py`:
- Around line 113-130: The SMTP connection in _send_smtp is created without a
timeout and can hang; update both places that instantiate the SMTP client (the
smtplib.SMTP(...) calls inside the if self.use_tls and the else branch that
assigns smtp_server) to pass a timeout (e.g., timeout=self.timeout or
timeout=10) so the socket won't block indefinitely—mirror how
_test_smtp_connection uses timeout=10; ensure any SMTP_SSL instantiation
(if/when used) also receives the same timeout.
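A minimal sketch of the timeout fix described above: decide which `smtplib` client class and keyword arguments to use without opening a connection. The field names (`use_tls`, `timeout`) and the 465-implies-SSL rule mirror the review comments and are assumptions about the plugin's config, not its actual API.

```python
import smtplib

def smtp_client_spec(port: int, use_tls: bool, timeout: float = 10.0):
    """Return (client_class, kwargs) so every connection path gets a timeout.
    Port 465 conventionally expects implicit SSL from the start, so use
    SMTP_SSL there; everywhere else use plain SMTP (with STARTTLS handled
    separately when use_tls is set)."""
    if port == 465 and not use_tls:
        return smtplib.SMTP_SSL, {"timeout": timeout}
    return smtplib.SMTP, {"timeout": timeout}

cls, kwargs = smtp_client_spec(587, use_tls=True)
print(cls.__name__, kwargs)  # SMTP {'timeout': 10.0}
```

Keeping this decision in a small pure function makes the "every SMTP instantiation receives a timeout" invariant easy to unit-test without touching the network.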
In
`@backends/advanced/src/advanced_omi_backend/plugins/email_summarizer/README.md`:
- Around line 56-83: The README's Step 3 currently shows SMTP and email options
under config/plugins.yml which conflicts with the three-file config model
described in backends/advanced/Docs/plugin-configuration.md; update the README
and its example to place plugin-specific settings into
plugins/email_summarizer/config.yml (keeping config/plugins.yml only for
orchestration/enabled flag and events/condition), or alternatively update
plugin-configuration.md to document that plugin-specific overrides may live in
config/plugins.yml—ensure the example in README uses the same canonical location
(plugins/email_summarizer/config.yml) and mention where to set only the enabled
flag/events (config/plugins.yml) so the two docs are consistent.
- Around line 110-126: The fenced code blocks in README.md (sections "Subject
Line", "Email Body", and the file-tree under plugins/email_summarizer/) are
missing language identifiers; update each triple-backtick fence around the
example subject, email body, and directory listing to use a language tag (e.g.,
```text) so markdown lint MD040 is satisfied, ensuring you modify the code
fences in the Subject Line and Email Body examples and the
plugins/email_summarizer/ file-tree block (also apply the same change to the
similar blocks around lines referenced in the PR comment).
In
`@backends/advanced/src/advanced_omi_backend/plugins/email_summarizer/templates.py`:
- Around line 55-56: The transcript and summary are inserted into HTML without
escaping, creating XSS risk; update the code in templates.py to HTML-escape
user-generated strings (both transcript and summary) using a safe escaper (e.g.,
html.escape(..., quote=True)) before converting newlines to <br> and before
embedding into the template; ensure you call escape on transcript first, then do
transcript.replace('\n','<br>') to produce transcript_html and likewise escape
summary, and use those escaped variables in the template rendering.
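The escaping fix above can be sketched with the standard library's `html.escape`. The helper name `to_safe_html` is illustrative; the key point is escape first, then convert newlines.

```python
import html

def to_safe_html(text: str) -> str:
    """HTML-escape user-generated text (including quotes), then turn
    newlines into <br> tags for embedding in an HTML email template."""
    return html.escape(text, quote=True).replace("\n", "<br>")

transcript = 'Hello <script>alert("x")</script>\nBye'
print(to_safe_html(transcript))
# Hello &lt;script&gt;alert(&quot;x&quot;)&lt;/script&gt;<br>Bye
```

The order matters: escaping after inserting `<br>` would escape the line-break tags themselves, while skipping `quote=True` would leave attribute-injection vectors open if the value is ever placed inside an HTML attribute.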
In `@backends/advanced/src/advanced_omi_backend/utils/gdrive_audio_utils.py`:
- Around line 92-96: The dedupe check in gdrive_audio_utils.py only filters by
Conversation.external_source_id and external_source_type, causing cross-user
skips; modify the function that contains this check to accept a user_id param,
include Conversation.user_id == user_id in the find_one filter alongside the
existing conditions, and update the caller in audio_routes.py (where the
function is invoked around line 37) to pass current_user.id (or the appropriate
identifier). Ensure the function signature and all call sites are updated
consistently so deduplication is scoped per user.
In `@backends/advanced/src/scripts/cleanup_state.py`:
- Around line 611-615: The log message currently always states "Cleared total of
{total_jobs} Redis jobs" which can be misleading when some queues failed; update
the logging logic around stats.redis_jobs_count/total_jobs and failed_queues so
that if failed_queues is non-empty you either (a) change the info message to
explicitly say it reflects only successfully cleaned queues (e.g., "Cleared
total of {total_jobs} Redis jobs from successfully cleaned queues") or (b)
suppress the info and replace it with a single warning that includes both
total_jobs and the list failed_queues; ensure the logger.info/logger.warning
calls reference the same total_jobs/stat value and that failed_queues is
included in the warning message to make the partial failure explicit.
In `@config/README.md`:
- Around line 110-122: The fenced code block that documents config files
(containing lines like "config/plugins.yml", "backends/advanced/.env", and
"plugins/{id}/config.yml") is missing a language tag and triggers markdownlint
MD040; update the opening fence from ``` to include a language (e.g., ```text)
so the block becomes ```text and keep the closing ``` as-is to satisfy the
linter.
In `@extras/local-omi-bt/send_to_adv.py`:
- Line 47: The httpx.AsyncClient call in send_to_adv.py currently disables SSL
verification (verify=False); change this to be configurable (e.g., read a
VERIFY_SSL boolean env var or reuse existing SSL context logic used for
WebSocket) and pass the resulting boolean or ssl context into httpx.AsyncClient,
and add a short comment explaining why disabling verification may be acceptable
for self-signed certs in dev but must be enabled in prod; update the AsyncClient
instantiation in the async with block to use that config (instead of
verify=False) and ensure tests or startup logs reflect the configured value.
- Around line 14-24: The code reads BACKEND_HOST and USE_HTTPS before calling
load_dotenv(), so .env values are not applied; move the load_dotenv(env_path)
call to run before reading BACKEND_HOST and USE_HTTPS (i.e., place it above
where BACKEND_HOST and USE_HTTPS are defined) so websocket_uri and backend_url
are built using environment variables loaded from the .env file.
In `@setup_utils.py`:
- Around line 92-98: In mask_value, the length check uses the original value
while the masking uses value_clean (stripped), causing inconsistency for values
with surrounding whitespace; change the code to compute value_clean =
value.strip() first and then use value_clean in the conditional (and subsequent
masking/substring operations) so the length check and masking operate on the
same trimmed string (refer to mask_value and its return calculation).
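A corrected `mask_value` along the lines this comment describes: strip once, then run both the length check and the masking on the same trimmed string. The 4-character threshold and visible tail length are assumptions for illustration, not the actual values in `setup_utils.py`.

```python
def mask_value(value: str) -> str:
    """Mask a sensitive value, keeping only the last 4 characters visible.
    Strips surrounding whitespace first so the length check and the
    masking operate on the same cleaned string."""
    value_clean = value.strip()
    if len(value_clean) <= 4:
        return "*" * len(value_clean)
    return "*" * (len(value_clean) - 4) + value_clean[-4:]

print(mask_value("  abc  "))  # ***
```

With the strip applied up front, a value like `"  token  "` can no longer pass the long-value branch on its padded length and then be masked on its shorter trimmed form.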
🧹 Nitpick comments (9)
backends/advanced/src/advanced_omi_backend/controllers/system_controller.py (1)
287-317: Surface ignored keys to avoid silent partial updates.

Skipping unknown keys is fine for round-trip payloads, but silent ignores can hide typos and create partial updates. Consider returning or logging the ignored keys so clients can self-correct.
♻️ Proposed tweak
```diff
-    filtered_settings = {}
+    filtered_settings = {}
+    ignored_keys = []
     for key, value in settings.items():
         if key not in valid_keys:
-            continue  # Skip unknown keys instead of rejecting
+            ignored_keys.append(key)
+            continue  # Skip unknown keys instead of rejecting
@@
     if save_diarization_settings(current_settings):
         logger.info(f"Updated and saved diarization settings: {filtered_settings}")
         return {
             "message": "Diarization settings saved successfully",
             "settings": current_settings,
+            "ignored_keys": ignored_keys,
             "status": "success"
         }
```

extras/local-omi-bt/send_to_adv.py (3)
69-70: Bare `except` clause catches all exceptions.

This catches `BaseException`, including `SystemExit` and `KeyboardInterrupt`. Catch the specific exception type instead.

♻️ Proposed fix
```diff
 try:
     error_data = response.json()
     error_msg = error_data.get('detail', error_msg)
-except:
+except (json.JSONDecodeError, KeyError):
     pass
```
141-141: Token in URL query parameter may appear in server logs.

While passing JWT tokens via query parameters is common for WebSocket authentication (since WebSocket doesn't support custom headers during handshake), be aware that the token may be logged by proxies or the server's access logs.
Consider documenting this trade-off or ensuring server-side logging sanitizes query parameters.
206-221: Consider waiting for server acknowledgment after `audio-stop`.

After sending `audio-stop`, the connection closes immediately via the `finally` block. The server may not have finished processing the final audio chunks. Consider adding a brief wait or listening for a completion acknowledgment before cleanup.

♻️ Suggested approach
```diff
     await websocket.send(json.dumps(audio_stop) + '\n')
     logger.info(f"Sent audio-stop event. Total chunks: {chunk_count}")
+    # Give server time to process final chunks before closing
+    await asyncio.sleep(1.0)
 finally:
     # Clean up receive task
     receive_task.cancel()
```

Alternatively, if the backend sends a completion message, wait for that instead of a fixed sleep.
backends/advanced/src/advanced_omi_backend/workers/speaker_jobs.py (1)
281-297: Keep success payload keys consistent on skip path.

The skip-success response omits `segment_count`, which is present in the normal success payload. If any downstream expects it, this can trigger KeyErrors. Consider returning a default `segment_count: 0` (and other expected keys) to preserve the response schema.

🛠️ Suggested patch
```diff
 return {
     "success": True,
     "conversation_id": conversation_id,
     "version_id": version_id,
     "speaker_recognition_enabled": True,
     "speaker_service_unavailable": True,
     "identified_speakers": [],
+    "segment_count": 0,
     "skip_reason": f"Speaker service unavailable: {error_type}",
     "processing_time_seconds": time.time() - start_time
 }
```

backends/advanced/scripts/create_plugin.py (1)
31-34: Optional: Remove extraneous `f` prefixes from strings without placeholders.

Several print statements use f-strings but contain no placeholders. While harmless, removing the `f` prefix improves clarity.

♻️ Example fix
```diff
-    print(f"❌ Error: Plugin name must be alphanumeric with underscores")
+    print("❌ Error: Plugin name must be alphanumeric with underscores")
     print(f"   Got: {plugin_name}")
-    print(f"   Example: my_awesome_plugin")
+    print("   Example: my_awesome_plugin")
```

Apply similar changes to lines 369-394 where strings have no placeholders.
Also applies to: 369-394
backends/advanced/src/advanced_omi_backend/plugins/email_summarizer/email_service.py (1)
120-122: Consider using SMTP_SSL for port 465.

The comment mentions port 465 SSL, but `smtplib.SMTP` is used instead of `smtplib.SMTP_SSL`. If users configure port 465 with `smtp_use_tls=False`, the connection may fail since port 465 typically requires implicit SSL from the start.

♻️ Proposed fix to handle implicit SSL
```diff
 else:
-    # Direct connection (for port 465 SSL or no encryption)
-    smtp_server = smtplib.SMTP(self.host, self.port)
+    # Direct connection - use SMTP_SSL for port 465, plain SMTP otherwise
+    if self.port == 465:
+        smtp_server = smtplib.SMTP_SSL(self.host, self.port, timeout=30)
+    else:
+        smtp_server = smtplib.SMTP(self.host, self.port, timeout=30)
```

backends/advanced/src/advanced_omi_backend/plugins/email_summarizer/plugin.py (1)
47-47: Make `SUPPORTED_ACCESS_LEVELS` immutable and a `ClassVar`.

A mutable list at class scope can be changed across instances; using `ClassVar` + tuple prevents accidental mutation and aligns with typing expectations.

♻️ Proposed change
```diff
-from typing import Any, Dict, List, Optional
+from typing import Any, Dict, List, Optional, ClassVar, Tuple
@@
-    SUPPORTED_ACCESS_LEVELS: List[str] = ['conversation']
+    SUPPORTED_ACCESS_LEVELS: ClassVar[Tuple[str, ...]] = ("conversation",)
```

wizard.py (1)
288-309: Run plugin setup from the plugin directory to avoid relative-path breakage.

If a plugin's `setup.py` relies on relative paths (templates/configs), running it from repo root can fail. Consider setting `cwd` to the plugin directory and passing an absolute requirements path. Please verify existing plugin setup scripts to confirm assumptions.

♻️ Proposed change
```diff
-    result = subprocess.run(
-        ['uv', 'run', '--with-requirements', 'setup-requirements.txt', 'python', str(setup_path)],
-        cwd=str(Path.cwd())
-    )
+    repo_root = Path.cwd()
+    requirements = repo_root / "setup-requirements.txt"
+    result = subprocess.run(
+        ['uv', 'run', '--with-requirements', str(requirements), 'python', str(setup_path)],
+        cwd=str(plugin_info['dir'])
+    )
```
In `backends/advanced/.env.template`:

```
# Email sender information
FROM_EMAIL=[email protected]
FROM_NAME=Chronicle AI
```
Quote FROM_NAME to avoid dotenv parsing truncation.
Values with spaces can be split; quoting ensures consistent parsing.
🔧 Suggested fix
```diff
-FROM_NAME=Chronicle AI
+FROM_NAME="Chronicle AI"
```
🧰 Tools
🪛 dotenv-linter (4.0.0)
[warning] 106-106: [ValueWithoutQuotes] This value needs to be surrounded in quotes
(ValueWithoutQuotes)
In `backends/advanced/Docs/plugin-configuration.md`:

```
# Email Summarizer Plugin
SMTP_HOST=smtp.gmail.com
SMTP_PORT=587
SMTP_USERNAME=[email protected]
SMTP_PASSWORD=your-app-password-here

# Home Assistant Plugin
HA_URL=http://homeassistant.local:8123
HA_TOKEN=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...
```
Replace token-looking example to avoid secret scanners.
Line 80 includes a JWT-like value and was flagged by gitleaks. Use an obvious placeholder to prevent CI failures and confusion.
🔐 Suggested fix
```diff
-HA_TOKEN=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...
+HA_TOKEN=your-long-lived-token-here
```
🧰 Tools
🪛 Gitleaks (8.30.0)
[high] 80-80: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.
(generic-api-key)
In `backends/advanced/Docs/plugin-configuration.md`:

## Configuration Loading Process

When a plugin is initialized, Chronicle merges configuration from all three sources:

```
1. Load plugins/{plugin_id}/config.yml
   ↓
2. Expand ${ENV_VAR} references from .env
   ↓
3. Merge orchestration settings from config/plugins.yml
   ↓
4. Pass complete config to plugin constructor
```
Add a language identifier to the fenced block (MD040).
Markdownlint flags the unlabeled fence in the config loading process.
✏️ Suggested fix
````diff
-```
+```text
 1. Load plugins/{plugin_id}/config.yml
    ↓
 2. Expand ${ENV_VAR} references from .env
    ↓
 3. Merge orchestration settings from config/plugins.yml
    ↓
 4. Pass complete config to plugin constructor
-```
+```
````
🧰 Tools
🪛 markdownlint-cli2 (0.18.1)
87-87: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
🤖 Prompt for AI Agents
In `@backends/advanced/Docs/plugin-configuration.md` around lines 83 - 95, The
fenced code block under the "Configuration Loading Process" section in
plugin-configuration.md is unlabeled and triggers markdownlint MD040; add a
language identifier (e.g., ```text) to the opening fence so the block becomes a
labeled fenced code block and close it with the matching backticks, ensuring the
block around the four-step list is updated accordingly.
This creates:

```
plugins/my_awesome_plugin/
├── __init__.py      # Plugin exports
├── plugin.py        # Main plugin logic
└── README.md        # Plugin documentation
```
Add a language tag to the fenced block (MD040).
Markdownlint flags the directory tree block because it has no language. Consider using `text` to keep lint clean.
🔧 Suggested fix
-```
+```text
plugins/my_awesome_plugin/
├── __init__.py # Plugin exports
├── plugin.py # Main plugin logic
└── README.md # Plugin documentation
-```
+```
🧰 Tools
🪛 markdownlint-cli2 (0.18.1)
42-42: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
🤖 Prompt for AI Agents
In `@backends/advanced/docs/plugin-development-guide.md` around lines 41 - 47, The
fenced code block showing the plugin directory tree (the block that begins with
``` and lines "plugins/my_awesome_plugin/ ... README.md") is missing a language
tag causing MD040; update that fenced block to use a language tag (use "text")
so it starts with ```text and ends with ``` to satisfy markdownlint and keep the
directory tree unchanged.
if plugin_dir.exists() and not force:
    print(f"❌ Error: Plugin '{plugin_name}' already exists at {plugin_dir}")
    print(f" Use --force to overwrite")
    sys.exit(1)

# Create plugin directory
print(f"📁 Creating plugin directory: {plugin_dir}")
plugin_dir.mkdir(parents=True, exist_ok=True)
--force flag doesn't remove existing files before overwriting.
When --force is used, the script writes new template files but doesn't remove existing files that aren't part of the template (e.g., if a developer added custom modules). This could leave stale files in the plugin directory.
Consider either:
- Removing the plugin directory before recreating (with user confirmation)
- Documenting this behavior in the help text
🔧 Suggested fix (option 1)
# Check if plugin already exists
if plugin_dir.exists() and not force:
print(f"❌ Error: Plugin '{plugin_name}' already exists at {plugin_dir}")
print(f" Use --force to overwrite")
sys.exit(1)
+ elif plugin_dir.exists() and force:
+ import shutil
+ print(f"⚠️ Removing existing plugin directory: {plugin_dir}")
+ shutil.rmtree(plugin_dir)
# Create plugin directory
print(f"📁 Creating plugin directory: {plugin_dir}")
 plugin_dir.mkdir(parents=True, exist_ok=True)
🧰 Tools
🪛 Ruff (0.14.13)
48-48: f-string without any placeholders
Remove extraneous f prefix
(F541)
🤖 Prompt for AI Agents
In `@backends/advanced/scripts/create_plugin.py` around lines 46 - 53, The script
currently skips deletion when --force is set and simply writes template files
into plugin_dir (see plugin_dir.exists(), plugin_dir.mkdir(...), plugin_name,
force), leaving stale files; update the logic so that when force is true and
plugin_dir.exists() you recursively remove the existing directory (e.g., use
shutil.rmtree on plugin_dir) before creating the fresh plugin_dir, optionally
prompting the user for confirmation or honoring a --yes/--force flag to skip
confirmation, then proceed to plugin_dir.mkdir(parents=True, exist_ok=True) and
write the template files.
stats.redis_jobs_count = total_jobs
logger.info(f"Cleared total of {total_jobs} Redis jobs")

if failed_queues:
    logger.warning(f"Failed to clean queues: {', '.join(failed_queues)}")
Clarify totals when some queues fail.
When failed_queues is non-empty, the message `Cleared total of {total_jobs} Redis jobs` reads like a full success. Consider making the message conditional, or explicitly noting that the count covers only the successfully cleaned queues.
🔧 Suggested adjustment
- stats.redis_jobs_count = total_jobs
- logger.info(f"Cleared total of {total_jobs} Redis jobs")
-
- if failed_queues:
- logger.warning(f"Failed to clean queues: {', '.join(failed_queues)}")
+ stats.redis_jobs_count = total_jobs
+ if failed_queues:
+ logger.warning(f"Failed to clean queues: {', '.join(failed_queues)}")
+ logger.info(f"Cleared {total_jobs} Redis jobs from successfully cleaned queues")
+ else:
+        logger.info(f"Cleared total of {total_jobs} Redis jobs")
🤖 Prompt for AI Agents
In `@backends/advanced/src/scripts/cleanup_state.py` around lines 611 - 615, The
log message currently always states "Cleared total of {total_jobs} Redis jobs"
which can be misleading when some queues failed; update the logging logic around
stats.redis_jobs_count/total_jobs and failed_queues so that if failed_queues is
non-empty you either (a) change the info message to explicitly say it reflects
only successfully cleaned queues (e.g., "Cleared total of {total_jobs} Redis
jobs from successfully cleaned queues") or (b) suppress the info and replace it
with a single warning that includes both total_jobs and the list failed_queues;
ensure the logger.info/logger.warning calls reference the same total_jobs/stat
value and that failed_queues is included in the warning message to make the
partial failure explicit.
```
config/plugins.yml        ← Orchestration (enabled, events)
                          ← Env var references: ${SMTP_PASSWORD}
                          ← Safe to commit ✅
backends/advanced/.env    ← Actual secrets
                          ← SMTP_PASSWORD=abc123
                          ← Gitignored, never committed ✅
plugins/{id}/config.yml   ← Plugin defaults
                          ← Non-secret settings
                          ← Can also use ${ENV_VAR} ✅
```
Add a language tag to the fenced block (markdownlint MD040).
This block currently lacks a language spec, which trips markdownlint.
📝 Proposed fix
-```
+```text
config/plugins.yml ← Orchestration (enabled, events)
← Env var references: ${SMTP_PASSWORD}
← Safe to commit ✅
backends/advanced/.env ← Actual secrets
← SMTP_PASSWORD=abc123
← Gitignored, never committed ✅
plugins/{id}/config.yml ← Plugin defaults
← Non-secret settings
← Can also use ${ENV_VAR} ✅
-```
+```
🧰 Tools
🪛 markdownlint-cli2 (0.18.1)
110-110: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
🤖 Prompt for AI Agents
In `@config/README.md` around lines 110 - 122, The fenced code block that
documents config files (containing lines like "config/plugins.yml",
"backends/advanced/.env", and "plugins/{id}/config.yml") is missing a language
tag and triggers markdownlint MD040; update the opening fence from ``` to
include a language (e.g., ```text) so the block becomes ```text and keep the
closing ``` as-is to satisfy the linter.
BACKEND_HOST = os.getenv("BACKEND_HOST", "100.83.66.30:8000")
USE_HTTPS = os.getenv("USE_HTTPS", "false").lower() == "true"

# Use appropriate protocol based on USE_HTTPS setting
ws_protocol = "wss" if USE_HTTPS else "ws"
http_protocol = "https" if USE_HTTPS else "http"

websocket_uri = f"{ws_protocol}://{BACKEND_HOST}/ws?codec=pcm"
backend_url = f"{http_protocol}://{BACKEND_HOST}"
env_path = ".env"
load_dotenv(env_path)
load_dotenv() is called after environment variables are read.
BACKEND_HOST and USE_HTTPS are read on lines 14-15 before load_dotenv() is called on line 24. This means .env values for these variables won't be loaded unless they're already set in the shell environment.
🐛 Proposed fix
+# Load environment variables first
+env_path = ".env"
+load_dotenv(env_path)
+
# Configuration
BACKEND_HOST = os.getenv("BACKEND_HOST", "100.83.66.30:8000")
USE_HTTPS = os.getenv("USE_HTTPS", "false").lower() == "true"
# Use appropriate protocol based on USE_HTTPS setting
ws_protocol = "wss" if USE_HTTPS else "ws"
http_protocol = "https" if USE_HTTPS else "http"
websocket_uri = f"{ws_protocol}://{BACKEND_HOST}/ws?codec=pcm"
backend_url = f"{http_protocol}://{BACKEND_HOST}"
-env_path = ".env"
-load_dotenv(env_path)

🤖 Prompt for AI Agents
In `@extras/local-omi-bt/send_to_adv.py` around lines 14 - 24, The code reads
BACKEND_HOST and USE_HTTPS before calling load_dotenv(), so .env values are not
applied; move the load_dotenv(env_path) call to run before reading BACKEND_HOST
and USE_HTTPS (i.e., place it above where BACKEND_HOST and USE_HTTPS are
defined) so websocket_uri and backend_url are built using environment variables
loaded from the .env file.
logger.info(f"Authenticating with backend as: {username}")

-    async with httpx.AsyncClient(timeout=10.0) as client:
+    async with httpx.AsyncClient(timeout=10.0, verify=False) as client:
SSL verification disabled for HTTP client.
verify=False disables SSL certificate validation, which could expose the application to man-in-the-middle attacks in production. While this aligns with the SSL context approach for WebSocket (lines 146-149) to support self-signed certificates, consider making this configurable or adding a comment explaining when this is acceptable.
🔧 Suggested improvement
- async with httpx.AsyncClient(timeout=10.0, verify=False) as client:
+ # verify=False allows self-signed certs; mirrors SSL context config for WSS
+    async with httpx.AsyncClient(timeout=10.0, verify=not USE_HTTPS) as client:

Or more explicitly, consider a separate VERIFY_SSL environment variable to control this behavior independently.
🧰 Tools
🪛 Ruff (0.14.13)
47-47: Probable use of httpx call with verify=False disabling SSL certificate checks
(S501)
🤖 Prompt for AI Agents
In `@extras/local-omi-bt/send_to_adv.py` at line 47, The httpx.AsyncClient call in
send_to_adv.py currently disables SSL verification (verify=False); change this
to be configurable (e.g., read a VERIFY_SSL boolean env var or reuse existing
SSL context logic used for WebSocket) and pass the resulting boolean or ssl
context into httpx.AsyncClient, and add a short comment explaining why disabling
verification may be acceptable for self-signed certs in dev but must be enabled
in prod; update the AsyncClient instantiation in the async with block to use
that config (instead of verify=False) and ensure tests or startup logs reflect
the configured value.
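A `VERIFY_SSL` toggle along the lines proposed here might look like the sketch below. This is an illustration, not the project's code; it relies on the fact that httpx accepts `True`, `False`, or an `ssl.SSLContext` for its `verify` argument:

```python
import os
import ssl

def make_verify_setting():
    """Derive httpx's `verify` argument from a VERIFY_SSL env var (default: on)."""
    verify_ssl = os.getenv("VERIFY_SSL", "true").lower() == "true"
    if verify_ssl:
        return True  # normal certificate validation
    # Dev-only path for self-signed certs: skip hostname and cert checks.
    # This must never be enabled in production.
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    return ctx
```

The client would then be created as `httpx.AsyncClient(timeout=10.0, verify=make_verify_setting())`, keeping the insecure mode opt-in rather than hard-coded.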
if not value or len(value) <= show_chars * 2:
    return value

# Strip whitespace for masking
value_clean = value.strip()

return f"{value_clean[:show_chars]}{'*' * min(15, len(value_clean) - show_chars * 2)}{value_clean[-show_chars:]}"
Minor inconsistency: whitespace handling in mask_value.
The length check on line 92 uses the original value, but the masking on lines 96-98 uses value_clean (stripped). This could cause unexpected behavior if the value has significant leading/trailing whitespace.
🔧 Suggested fix
def mask_value(value: str, show_chars: int = 5) -> str:
- if not value or len(value) <= show_chars * 2:
+ if not value:
+ return value
+
+ # Strip whitespace for consistent handling
+ value_clean = value.strip()
+
+ if len(value_clean) <= show_chars * 2:
return value
- # Strip whitespace for masking
- value_clean = value.strip()
-
    return f"{value_clean[:show_chars]}{'*' * min(15, len(value_clean) - show_chars * 2)}{value_clean[-show_chars:]}"

🤖 Prompt for AI Agents
In `@setup_utils.py` around lines 92 - 98, In mask_value, the length check uses
the original value while the masking uses value_clean (stripped), causing
inconsistency for values with surrounding whitespace; change the code to compute
value_clean = value.strip() first and then use value_clean in the conditional
(and subsequent masking/substring operations) so the length check and masking
operate on the same trimmed string (refer to mask_value and its return
calculation).
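Assembled as a standalone function, the suggested fix behaves as follows — both the length check and the masking now operate on the stripped value:

```python
def mask_value(value: str, show_chars: int = 5) -> str:
    """Mask the middle of a secret, checking length on the stripped value."""
    if not value:
        return value

    # Strip whitespace so the length check and the masking agree
    value_clean = value.strip()

    if len(value_clean) <= show_chars * 2:
        return value  # too short to mask meaningfully

    return f"{value_clean[:show_chars]}{'*' * min(15, len(value_clean) - show_chars * 2)}{value_clean[-show_chars:]}"
```

With this version, a value whose stripped form is short enough is returned unmasked regardless of surrounding whitespace, eliminating the inconsistency the comment describes.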