Release v0.1.0 #234
Conversation
Update documentation and configuration to reflect the transition from 'friend-backend' to 'chronicle-backend' across various files, including setup instructions, Docker configurations, and service logs.
Bypassing rule as it's required upstream.
* refactor memory
* add config
* docstring
* more cleanup
* code quality
* code quality
* unused return
* DOTTED GET
* Refactor Docker and CI configurations
  - Removed the creation of `memory_config.yaml` from the CI workflow to streamline the process.
  - Updated Docker Compose files to mount `config.yml` for model registry and memory settings in both services.
  - Added new dependencies for Google API clients in `uv.lock` to support upcoming features.
* Update configuration files for model providers and Docker setup
  - Changed LLM, embedding, and STT providers in `config.yml` to OpenAI and Deepgram.
  - Removed read-only flag from `config.yml` in Docker Compose files to allow UI configuration saving.
  - Updated memory configuration endpoint to accept plain text for YAML input.
* Update transcription job handling to format speaker IDs
  - Changed variable name from `speaker_name` to `speaker_id` for clarity.
  - Added logic to convert integer speaker IDs from Deepgram to string format for consistent speaker labeling.
* Remove loading of backend .env file in test environment setup
  - Eliminated the code that loads the .env file from the backends/advanced directory, simplifying the environment configuration for tests.
* Enhance configuration management and setup wizard
  - Updated README to reflect the new setup wizard process.
  - Added functionality to load and save `config.yml` in the setup wizard, including default configurations for LLM and memory providers.
  - Improved user feedback during configuration updates, including success messages for configuration file updates.
  - Enabled backup of existing `config.yml` before saving changes.
* Enhance HTTPS configuration in setup wizard
  - Added functionality to check for existing SERVER_IP in the environment file and prompt the user to reuse or enter a new IP for SSL certificates.
  - Improved user prompts for server IP/domain input during HTTPS setup.
  - Updated default behavior to use existing IP or localhost based on user input.
  - Changed RECORD_ONLY_ENROLLED_SPEAKERS setting in the .env template to false for broader access.
* Add source parameter to audio file writing in websocket controller
  - Included a new `source` parameter with the value "websocket" in the `_process_batch_audio_complete` function to enhance audio file context tracking.

Co-authored-by: 0xrushi <[email protected]>
* Refactor error handling in system controller and update memory config routes
  - Replaced ValueError with HTTPException for better error handling in `save_diarization_settings` and `validate_memory_config` functions.
  - Introduced a new Pydantic model, `MemoryConfigRequest`, for validating memory configuration requests in the system routes.
  - Updated the `validate_memory_config` endpoint to accept the new request model, improving input handling and validation.

Co-authored-by: 0xrushi <[email protected]>
* obsidian support
* neo4j comment
* cleanup code
* unused line
* unused line
* Fix MemoryEntry object usage in chat service
* comment
* feat(obsidian): add obsidian memory search integration to chat
* unit test
* use rq
* neo4j service
* typefix
* test fix
* cleanup
* cleanup
* version changes
* profile
* remove unused imports
* Refactor memory configuration validation endpoints
  - Removed the deprecated `validate_memory_config_raw` endpoint and replaced it with a new endpoint that accepts plain text for validation.
  - Updated the existing `validate_memory_config` endpoint to clarify that it now accepts JSON input.
  - Adjusted the API call in the frontend to point to the new validation endpoint.
* Refactor health check model configuration loading
  - Updated the health check function to load model configuration from the models registry instead of the root config.
  - Improved error handling by logging warnings when model configuration loading fails.

Co-authored-by: 0xrushi <[email protected]>
Important: Review skipped. Auto reviews are disabled on this repository. Please check the settings in the CodeRabbit UI.

📝 Walkthrough

The PR refactors Chronicle's configuration system from distributed environment variables to a centralized `config.yml`.
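For orientation, here is a hedged sketch of what the centralized configuration might look like, based only on the keys mentioned in the commit messages (LLM/embedding/STT providers, a speaker recognition enable flag and service URL, and a memory provider). Field names and values are illustrative, not the actual schema of `config.yml.template`:

```yaml
# Hypothetical shape of the centralized config.yml; see config.yml.template
# in the repo for the real schema.
models:
  llm:
    provider: openai        # changed to OpenAI in this PR
  embedding:
    provider: openai
  stt:
    provider: deepgram      # changed to Deepgram in this PR
speaker_recognition:
  enabled: false            # disabled in CI configs for faster test runs
  service_url: http://speaker-service:8085   # illustrative URL
memory:
  provider: chronicle
```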
Sequence Diagram(s)

```mermaid
sequenceDiagram
participant User
participant Frontend
participant Backend API
participant Redis
participant RQ Worker
participant Neo4j DB
participant Embedding Service
User->>Frontend: Upload Obsidian Vault ZIP
Frontend->>Backend API: POST /api/obsidian/upload_zip
Backend API->>Backend API: Extract ZIP to /app/data/obsidian_jobs/{job_id}
Backend API->>Backend API: Count Markdown files
Backend API->>Redis: Store pending job metadata<br/>(status, total_files)
Backend API-->>Frontend: Return job_id, vault_path, total
User->>Frontend: Click "Start Ingestion"
Frontend->>Backend API: POST /api/obsidian/start
Backend API->>Redis: Retrieve pending job
Backend API->>RQ Worker: Enqueue ingest_obsidian_vault_job
Backend API-->>Frontend: Return RQ job_id
rect rgba(200, 220, 255, 0.3)
Note over RQ Worker,Neo4j DB: Async Ingestion Pipeline
loop For each Markdown file
RQ Worker->>Backend API: Parse note (frontmatter, content)
RQ Worker->>Backend API: Chunk content with overlap
RQ Worker->>Embedding Service: Generate embeddings for chunks
RQ Worker->>Neo4j DB: Ingest note, chunks, tags, links
RQ Worker->>Redis: Update job metadata (processed count)
end
RQ Worker->>Redis: Set job status = completed
end
Frontend->>Backend API: GET /api/obsidian/status?job_id=...
Backend API->>Redis: Fetch job status & metadata
Backend API-->>Frontend: Return status, progress%, error details
User->>Frontend: Query chat with Obsidian context
Frontend->>Backend API: POST /api/chat/send<br/>(include_obsidian_memory=true)
Backend API->>Backend API: Query Obsidian search<br/>(vector similarity)
Backend API->>Neo4j DB: Search notes by embedding
Neo4j DB-->>Backend API: Return relevant notes & chunks
Backend API->>Backend API: Format context with Obsidian notes
    Backend API-->>Frontend: Stream LLM response with Obsidian context
```
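The "Chunk content with overlap" step in the ingestion loop is a standard sliding-window split. A minimal sketch, assuming character-based chunks; the actual chunk size, overlap, and tokenization used by the Obsidian ingestion job are not shown in this PR:

```python
def chunk_with_overlap(text: str, chunk_size: int = 1000, overlap: int = 200) -> list[str]:
    """Split text into fixed-size chunks where consecutive chunks share
    `overlap` characters, so context spanning a chunk boundary is not lost."""
    if chunk_size <= overlap:
        raise ValueError("chunk_size must be larger than overlap")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap  # advance by the non-overlapping stride
    return chunks
```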
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~75 minutes
Pre-merge checks and finishing touches
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
🎉 Robot Framework Test Results
Status: ✅ All tests passed!
📊 View Reports: GitHub Pages (live reports) and downloadable artifacts.
| Metric | Count |
|---|---|
| ✅ Passed | 90 |
| ❌ Failed | 1 |
| 📊 Total | 91 |
📊 View Reports
GitHub Pages (Live Reports):
Download Artifacts:
- robot-test-reports-html - HTML reports
- robot-test-results-xml - XML output
Adds a `COPY src/ src/` step after dependency installation to ensure all source files are available in the Docker image. This preserves build caching for the dependency layers while ensuring the complete source code is present.
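A minimal sketch of the layer-caching pattern this describes, assuming a uv-based Python image (the base image, paths, and install commands are illustrative, not the project's actual Dockerfile):

```dockerfile
FROM python:3.12-slim

WORKDIR /app

# Copy only the dependency manifests first so this expensive layer stays
# cached until the dependencies themselves change.
COPY pyproject.toml uv.lock ./
RUN pip install uv && uv sync --frozen --no-install-project

# Copy the full source afterwards; editing code invalidates only this
# cheap layer, not the dependency install above.
COPY src/ src/
```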
* Enhance configuration management and add new setup scripts
  - Updated .gitignore to include config.yml and its template.
  - Added config.yml.template for default configuration settings.
  - Introduced restart.sh script for service management.
  - Enhanced services.py to load config.yml and check for Obsidian/Neo4j integration.
  - Updated wizard.py to prompt for Obsidian/Neo4j configuration during setup and create config.yml from template if it doesn't exist.
* Refactor transcription providers and enhance configuration management
  - Updated Docker Compose files to include the new Neo4j service configuration.
  - Added support for Obsidian/Neo4j integration in the setup process.
  - Refactored transcription providers to utilize a registry-driven approach for Deepgram and Parakeet.
  - Enhanced error handling and logging in transcription processes.
  - Improved environment variable management in test scripts to prioritize command-line overrides.
  - Removed deprecated Parakeet provider implementation and streamlined audio stream workers.
* Update configuration management and enhance file structure, add test-matrix (#237)
  - Refactored configuration file paths to use a dedicated `config/` directory, including updates to `config.yml` and its template.
  - Modified service scripts to load the new configuration path for `config.yml`.
  - Enhanced `.gitignore` to include the new configuration files and templates.
  - Updated documentation to reflect changes in configuration file locations and usage.
  - Improved setup scripts to ensure proper creation and management of configuration files.
  - Added new test configurations for various provider combinations to streamline testing processes.
* Add test requirements and clean up imports in wizard.py
  - Introduced a new `test-requirements.txt` file to manage testing dependencies.
  - Removed redundant import of `shutil` in `wizard.py` to improve code clarity.
* Add ConfigManager for unified configuration management
  - Introduced a new `config_manager.py` module to handle reading and writing configurations from `config.yml` and `.env` files, ensuring backward compatibility.
  - Refactored `ChronicleSetup` in `backends/advanced/init.py` to utilize `ConfigManager` for loading and updating configurations, simplifying the setup process.
  - Removed redundant methods for loading and saving `config.yml` directly in `ChronicleSetup`, as these are now managed by `ConfigManager`.
  - Enhanced user feedback during configuration updates, including success messages for changes made to configuration files.
* Refactor transcription provider configuration and enhance setup process
  - Updated `.env.template` to clarify speech-to-text configuration and removed deprecated options for Mistral.
  - Modified `docker-compose.yml` to streamline environment variable management by removing unused Mistral keys.
  - Enhanced `ChronicleSetup` in `init.py` to provide clearer user feedback and updated the transcription provider selection process to rely on `config.yml`.
  - Improved error handling in the websocket controller to determine the transcription provider from the model registry instead of environment variables.
  - Updated health check routes to reflect the new method of retrieving the transcription provider from `config.yml`.
  - Adjusted `config.yml.template` to include comments on transcription provider options for better user guidance.
* Enhance ConfigManager with deep merge functionality
  - Updated the `update_memory_config` method to perform a deep merge of updates into the memory configuration, ensuring nested dictionaries are merged correctly.
  - Added a new `_deep_merge` method to handle recursive merging of dictionaries, improving configuration management capabilities.
* Refactor run-test.sh and enhance memory extraction tests
  - Removed deprecated environment variable handling for TRANSCRIPTION_PROVIDER in `run-test.sh`, streamlining the configuration process.
  - Introduced a new `run-custom.sh` script for executing Robot tests with custom configurations, improving test flexibility.
  - Enhanced memory extraction tests in `audio_keywords.robot` and `memory_keywords.robot` to include detailed assertions and result handling.
  - Updated `queue_keywords.robot` to fail fast if a job is in a 'failed' state when expecting 'completed', improving error handling.
  - Refactored `test_env.py` to load environment variables with correct precedence, ensuring better configuration management.
* unify tests to robot test, add some more clean up
* Update health check configuration in docker-compose-test.yml (#241)
  - Increased the number of retries from 5 to 10 for improved resilience during service readiness checks.
  - Extended the start period from 30s to 60s to allow more time for services to initialize before health checks commence.
* Add step to create test configuration file in robot-tests.yml
  - Introduced a new step in the GitHub Actions workflow to copy the test configuration file from tests/configs/deepgram-openai.yml to a new config/config.yml.
  - Added logging to confirm the creation of the test config file, improving visibility during the test setup process.
* remove cache step since not required
* coderabbit comments
* Refactor ConfigManager error handling for configuration file loading
  - Updated the ConfigManager to raise RuntimeError exceptions when the configuration file is not found or is invalid, improving error visibility and user guidance.
  - Removed fallback behavior that previously returned the current directory, ensuring users are explicitly informed about missing or invalid configuration files.
* Refactor _find_repo_root method in ConfigManager
  - Updated the _find_repo_root method to locate the repository root using the __file__ location instead of searching for config/config.yml, simplifying the logic and improving reliability.
  - Removed the previous error handling that raised a RuntimeError if the configuration file was not found, as the new approach assumes config_manager.py is always at the repo root.
* Enhance speaker recognition service integration and error handling (#245)
  - Updated `docker-compose-test.yml` to enable speaker recognition in the test environment and added a new `speaker-service-test` service for testing purposes.
  - Refactored `run-test.sh` to improve the execution of Robot Framework tests from the repository root.
  - Enhanced error handling in `speaker_recognition_client.py` to return detailed error messages for connection issues.
  - Improved error logging in `speaker_jobs.py` to handle and report errors from the speaker recognition service more effectively.
  - Updated `Dockerfile` to copy the full source code after dependencies are cached, ensuring all necessary files are included in the image.
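The `_deep_merge` helper described in the first bullet above is a small recursive pattern. A sketch of its usual shape, assuming dict-into-dict merging where non-dict values in the update win; the actual signature in `config_manager.py` may differ:

```python
def _deep_merge(base: dict, updates: dict) -> dict:
    """Recursively merge `updates` into `base`: nested dicts are merged
    key-by-key, while any other value in `updates` replaces the base value."""
    for key, value in updates.items():
        if isinstance(value, dict) and isinstance(base.get(key), dict):
            _deep_merge(base[key], value)
        else:
            base[key] = value
    return base
```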
* Remove integration tests workflow and enhance robot tests with HF_TOKEN verification
  - Deleted the `integration-tests.yml` workflow file to streamline CI processes.
  - Updated `robot-tests.yml` to include verification for the new `HF_TOKEN` secret, ensuring all required secrets are checked before running tests.
* Fix key access in system admin tests to use string indexing for speakers data
* Refactor Robot Framework tests and enhance error handling in memory services
  - Removed the creation of the test environment file from the GitHub Actions workflow to streamline setup.
  - Updated the Robot Framework tests to utilize a unified test script for improved consistency.
  - Enhanced error messages in the MemoryService class to provide more context on connection failures for LLM and vector store providers.
  - Added critical checks for API key presence in the OpenAIProvider class to ensure valid credentials are provided before proceeding.
  - Adjusted various test setup scripts to use a centralized BACKEND_DIR variable for better maintainability and clarity.
* Refactor test container cleanup in run-robot-tests.sh
  - Updated the script to dynamically construct container names from docker-compose services, improving maintainability and reducing hardcoded values.
  - Enhanced the cleanup process for stuck test containers by utilizing the COMPOSE_PROJECT_NAME variable.
* Enhance run-robot-tests.sh for improved logging and cleanup
  - Set absolute paths for consistent directory references to simplify navigation.
  - Capture container logs, status, and resource usage for better debugging.
  - Refactor cleanup process to utilize dynamic backend directory references, improving maintainability.
  - Ensure proper navigation back to the tests directory after operations.
* Add speaker recognition configuration and update test script defaults
  - Introduced speaker recognition settings in config.yml.template, allowing for easy enable/disable and service URL configuration.
  - Updated run-robot-tests.sh to use a test-specific configuration file that disables speaker recognition for improved CI performance.
  - Modified deepgram-openai.yml to disable speaker recognition during CI tests to enhance execution speed.
* Refactor speaker recognition configuration management
  - Updated docker-compose-test.yml to clarify speaker recognition settings, now controlled via config.yml for improved CI performance.
  - Enhanced model_registry.py to include a dedicated speaker_recognition field for better configuration handling.
  - Modified speaker_recognition_client.py to load configuration from config.yml, allowing for dynamic enabling/disabling of the speaker recognition service based on the configuration.
* Add minimum worker count verification to infrastructure tests
  - Introduced a new keyword to verify that the minimum number of workers are registered, enhancing the robustness of health checks.
  - Updated the worker count validation test to include a wait mechanism for worker registration, improving test reliability.
  - Clarified comments regarding expected worker counts to reflect the distinction between RQ and audio stream workers.
* Update configuration management and enhance model handling
  - Added OBSIDIAN_ENABLED configuration to ChronicleSetup for improved feature toggling.
  - Introduced speaker_recognition configuration handling in model_registry.py to streamline model loading.
  - Refactored imports in deepgram.py to improve clarity and reduce redundancy.
@coderabbitai review
✅ Actions performed: Review triggered.
Actionable comments posted: 2
Note
Due to the large number of review comments, Critical severity comments were prioritized as inline comments.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (8)
backends/advanced/src/advanced_omi_backend/workers/conversation_jobs.py (2)
96-97: Move imports to top of file. Per coding guidelines: "ALL imports must be at the top of the file after the docstring. NEVER import modules in the middle of functions or files."
The `datetime` import is already at line 10, making this duplicate. Move the `Conversation` import to the top of the file.
🔎 Proposed refactor
At the top of the file, add:
```diff
 from advanced_omi_backend.utils.conversation_utils import (
     analyze_speech,
     extract_speakers_from_segments,
     is_meaningful_speech,
     mark_conversation_deleted,
     track_speech_activity,
     update_job_progress_metadata,
 )
+from advanced_omi_backend.models.conversation import Conversation, create_conversation
```

Then remove the imports from inside the function:
```diff
     # Update conversation in database with end reason and completion time
-    from advanced_omi_backend.models.conversation import Conversation
-    from datetime import datetime
```
575-575: Remove duplicate asyncio import.
`asyncio` is already imported at line 7. Per coding guidelines, "ALL imports must be at the top of the file after the docstring. NEVER import modules in the middle of functions or files."
🔎 Proposed fix
```diff
     # Generate all three summaries in parallel for efficiency
-    import asyncio
     title, short_summary, detailed_summary = await asyncio.gather(
```

extras/speaker-recognition/run-test.sh (1)
146-159: Critical: Cleanup not called when tests fail. When pytest fails with a non-signal error (e.g., test assertion failure, timeout), the script exits at line 158 without calling `cleanup()`. This leaves test containers running because:
- There's no EXIT trap (intentionally excluded at line 24)
- The explicit cleanup at line 165 is only reached on successful completion
- Signal-based interrupts (SIGINT/SIGTERM) are handled by the trap, but normal failures are not
This will orphan containers and can interfere with subsequent test runs.
🔎 Proposed fix: Call cleanup before exit
```diff
 } || {
     exit_code=$?
     if [ $exit_code -eq 124 ]; then
         print_error "Test timed out after 30 minutes"
     elif [ $exit_code -eq 130 ]; then
         print_warning "Test interrupted by user (Ctrl+C)"
     else
         print_error "Test failed with exit code $exit_code"
     fi
+    cleanup
     exit $exit_code
 }
```

backends/advanced/src/advanced_omi_backend/workers/memory_jobs.py (4)
104-104: Remove redundant user fetch (already retrieved on line 64). The user is fetched twice with the same `user_id` (lines 64 and 104). This is inefficient and unnecessary. Reuse the `user` variable from the first fetch instead of fetching again.
🔎 Suggested fix
```diff
     if len(full_conversation) < 10:
         logger.warning(f"Conversation too short for memory processing: {conversation_id}")
         return {"success": False, "error": "Conversation too short"}

     # Check primary speakers filter
-    user = await get_user_by_id(user_id)
     if user and user.primary_speakers:
```
The `user` variable from line 64 is already available in this scope.
152-158: Log the exception instead of silently ignoring it. The bare `except Exception: pass` violates the project's error handling guidelines and the retrieved learning. Even though falling back to the default provider is acceptable, the exception should be logged for debugging purposes.
As per coding guidelines: "Always raise errors, never silently ignore."
Based on learnings: "prefer logging.exception() inside except blocks to automatically log the full stack trace."
🔎 Suggested fix
```diff
     # Determine memory provider from memory service
     memory_provider = conversation_model.MemoryProvider.CHRONICLE  # Default
     try:
         memory_service_obj = get_memory_service()
         provider_name = memory_service_obj.__class__.__name__
         if "OpenMemory" in provider_name:
             memory_provider = conversation_model.MemoryProvider.OPENMEMORY_MCP
-    except Exception:
-        pass
+    except Exception as e:
+        logger.warning(
+            f"Failed to determine memory provider, using default: {e}"
+        )
```
45-47: Move imports to file top. These imports at lines 45-47 are inside the function body and violate the project's import guidelines. There are no circular dependencies justifying their placement; they should be moved to the top of the file after the docstring, grouped with other local imports.
Per coding guidelines: "ALL imports must be at the top of the file after the docstring. NEVER import modules in the middle of functions or files."
Apply this fix
Move these imports to the top-level imports section (around line 18):
```diff
 from advanced_omi_backend.models.job import BaseRQJob, JobPriority, async_job
 from advanced_omi_backend.services.memory.base import MemoryEntry
+from advanced_omi_backend.models.conversation import Conversation
+from advanced_omi_backend.services.memory import get_memory_service
+from advanced_omi_backend.users import get_user_by_id
```

Then remove the imports from inside the `process_memory_job` function (lines 45-47).
180-180: Move import to file top. This import is inside the function body, violating the project's import guidelines. There is no circular dependency preventing this import from being at the top of the file: `rq` is already used at the module level in `queue_controller.py` (which `memory_jobs.py` imports from), and `obsidian_jobs.py` demonstrates that `from rq import get_current_job` can safely be placed at the top level.
Add the import to the top-level imports section (around line 18 after other third-party imports) and remove it from line 180.
Apply this fix
```diff
 from advanced_omi_backend.controllers.queue_controller import (
     JOB_RESULT_TTL,
     memory_queue,
 )
 from advanced_omi_backend.models.job import BaseRQJob, JobPriority, async_job
 from advanced_omi_backend.services.memory.base import MemoryEntry
+from rq import get_current_job
```

Then remove the import from line 180.
status.py (1)
71-71: Remove redundant import inside function body.
`dotenv_values` is already imported at the module level (line 21). As per coding guidelines, imports must be at the top of the file, never in the middle of functions.
🔎 Proposed fix
```diff
     # Handle speaker-recognition profiles
     if service_name == 'speaker-recognition':
-        from dotenv import dotenv_values
         env_file = service_path / '.env'
         if env_file.exists():
             env_values = dotenv_values(env_file)
```
🟠 Major comments (22)
backends/advanced/src/advanced_omi_backend/workers/memory_jobs.py-92-97 (1)
92-97: Format line 95 to comply with 100-character limit. Line 95 exceeds the 100-character line length specified in the project's coding guidelines. Please reformat this condition for better readability.
As per coding guidelines: "Use Black formatter with 100-character line length."
🔎 Suggested formatting
```diff
-    # Fallback: if segments have no text content but transcript exists, use transcript
-    # This handles cases where speaker recognition fails/is disabled
-    if len(full_conversation) < 10 and conversation_model.transcript and isinstance(conversation_model.transcript, str):
-        logger.info(f"Segments empty or too short, falling back to transcript text for {conversation_id}")
+    # Fallback: if segments have no text content but transcript exists, use transcript
+    # This handles cases where speaker recognition fails/is disabled
+    transcript = conversation_model.transcript
+    if (
+        len(full_conversation) < 10
+        and transcript
+        and isinstance(transcript, str)
+    ):
+        logger.info(
+            f"Segments empty or too short, falling back to transcript text "
+            f"for {conversation_id}"
+        )
         full_conversation = conversation_model.transcript
```

backends/advanced/src/advanced_omi_backend/services/memory/providers/llm_providers.py-299-301 (1)
299-301: Use `logging.exception()` in except block. Per learnings, prefer `logging.exception()` to automatically capture the full stack trace. Static analysis (TRY400) also flags this.
🔎 Proposed fix
```diff
 except Exception as e:
-    memory_logger.error(f"OpenAI embedding generation failed: {e}")
+    memory_logger.exception("OpenAI embedding generation failed")
     raise
```

backends/advanced/src/advanced_omi_backend/services/memory/providers/llm_providers.py-316-318 (1)
316-318: Fix f-string and use `logging.exception()`. Line 317 has an f-string without placeholders (F541) and should use `logging.exception()` per learnings.
🔎 Proposed fix
```diff
 except asyncio.TimeoutError:
-    memory_logger.error(f"OpenAI connection test timed out after 10s - check network connectivity and API endpoint")
+    memory_logger.error("OpenAI connection test timed out after 10s - check network connectivity and API endpoint")
     return False
```

Note: `logging.exception()` is not needed here since `TimeoutError` doesn't have additional context, but the extraneous `f` prefix should be removed.
311-315: Use bare `raise` to preserve stack trace. Per learnings, when re-raising exceptions, use bare `raise` (or `raise ... from e` for chaining). The current `raise e` resets the traceback origin. Static analysis (TRY201) also flags this.
🔎 Proposed fix
```diff
 except Exception as e:
     memory_logger.exception(
         f"Failed to get embedding dimensions from registry for model '{embedding_model}'"
     )
-    raise e
+    raise
```

backends/advanced/Dockerfile.k8s-36-37 (1)
36-37: Remove or update stale comment. The comment references copying memory config, but the COPY instruction was removed. Either remove the comment or update it to reflect the new config.yml mounting approach.
🔎 Proposed fix
```diff
-# Copy memory config (created by init.sh from template)
-
+# Note: config.yml is mounted at runtime via docker-compose/k8s volumes
```

Or simply remove lines 36-37 entirely if no documentation is needed.
backends/advanced/src/advanced_omi_backend/controllers/websocket_controller.py-305-320 (1)
305-320: Move import to top of file. The import at line 306 violates the coding guideline: "ALL imports must be at the top of the file after the docstring. NEVER import modules in the middle of functions."
As per coding guidelines, move this import to the top of the file with other local imports.
🔎 Suggested fix
Add at the top of the file (around line 21-26 with other local imports):
```python
from advanced_omi_backend.model_registry import get_models_registry
```

Then simplify the function:
```diff
-    # Determine transcription provider from config.yml
-    from advanced_omi_backend.model_registry import get_models_registry
     registry = get_models_registry()
```

Committable suggestion skipped: line range outside the PR's diff.
backends/advanced/src/advanced_omi_backend/controllers/audio_controller.py-87-93 (1)
87-93: The fallback to a new UUID masks upstream issues in the gdrive flow. The `audio_uuid` attribute is reliably attached upstream in `gdrive_audio_utils.py` (line 106: `wrapped_file.audio_uuid = file_id`) before files reach this controller. However, the fallback behavior (lines 89-91) is problematic: instead of raising an error if `audio_uuid` is missing for gdrive files, it silently generates a new UUID and logs a warning. This defeats error detection if the attribute attachment fails upstream.
For the gdrive source, either assert the attribute exists or raise an explicit error:
```python
if source == "gdrive":
    audio_uuid = getattr(file, "audio_uuid", None)
    if not audio_uuid:
        raise ValueError(f"Missing audio_uuid for gdrive file: {file.filename}")
else:
    audio_uuid = str(uuid.uuid4())
```

This ensures failures in the gdrive flow surface immediately rather than creating records with incorrect IDs.
backends/advanced/src/advanced_omi_backend/utils/file_utils.py-60-81 (1)
60-81: Use `logging.exception()` in except blocks for automatic stack traces. The error handling uses `logging.error()` instead of `logging.exception()`, which misses the automatic stack trace logging. Additionally, the success return should be moved to an else block for clarity.
Based on learnings, prefer `logging.exception()` inside except blocks to automatically log the full stack trace, which improves debuggability.
🔎 Proposed fix
```diff
 # Extract zip file
 try:
     with zipfile.ZipFile(zip_path, 'r') as zf:
         zf.extractall(extract_dir)
+except zipfile.BadZipFile as e:
+    error_msg = f"Invalid zip file: {zip_path} - {e}"
+    logger.exception(error_msg)
+    raise zipfile.BadZipFile(error_msg) from e
+except zipfile.LargeZipFile as e:
+    error_msg = f"Zip file too large: {zip_path} - {e}"
+    logger.exception(error_msg)
+    raise ZipExtractionError(error_msg) from e
+except PermissionError as e:
+    error_msg = f"Permission denied extracting zip file: {zip_path} - {e}"
+    logger.exception(error_msg)
+    raise ZipExtractionError(error_msg) from e
+except Exception as e:
+    error_msg = f"Error extracting zip file {zip_path}: {e}"
+    logger.exception(error_msg)
+    raise ZipExtractionError(error_msg) from e
+else:
     logger.info(f"Successfully extracted {zip_path} to {extract_dir}")
     return extract_dir
-except zipfile.BadZipFile as e:
-    error_msg = f"Invalid zip file: {zip_path} - {e}"
-    logger.error(error_msg)
-    raise zipfile.BadZipFile(error_msg) from e
-except zipfile.LargeZipFile as e:
-    error_msg = f"Zip file too large: {zip_path} - {e}"
-    logger.error(error_msg)
-    raise ZipExtractionError(error_msg) from e
-except PermissionError as e:
-    error_msg = f"Permission denied extracting zip file: {zip_path} - {e}"
-    logger.error(error_msg)
-    raise ZipExtractionError(error_msg) from e
-except Exception as e:
-    error_msg = f"Error extracting zip file {zip_path}: {e}"
-    logger.error(error_msg)
-    raise ZipExtractionError(error_msg) from e
```

backends/advanced/src/advanced_omi_backend/llm_client.py-14-15 (1)
14-15: Remove unused imports.
`_load_root_config` and `_resolve_value` are imported but not used in this file. The factory now relies solely on `get_models_registry()`.
🔎 Proposed fix
```diff
-from advanced_omi_backend.services.memory.config import load_config_yml as _load_root_config
-from advanced_omi_backend.services.memory.config import resolve_value as _resolve_value
-
 from advanced_omi_backend.model_registry import get_models_registry
```

backends/advanced/docker-compose.yml-210-214 (1)
210-214: Pre-create the `chronicle-network` Docker network before running services. The `chronicle-network` is configured as external in multiple docker-compose files (backends/advanced, extras/speaker-recognition, extras/openmemory-mcp). The setup script and documentation don't mention this prerequisite, but users will encounter a "network not found" error without it. Add either:
- An automated step in the setup script to create this network, or
- Explicit documentation in the getting-started guide with:
  `docker network create chronicle-network --driver bridge`
336-345: Use `logging.exception()` and bare `raise` for better stack traces. Based on project learnings, use `logging.exception()` in except blocks to automatically log the full stack trace. Also, use bare `raise` instead of `raise e` to preserve the original traceback.
🔎 Proposed fix
```diff
 except ObsidianSearchError as exc:
-    logger.error(
+    logger.exception(
         "Failed to get Obsidian context (%s stage): %s",
         exc.stage,
         exc,
     )
     raise
 except Exception as e:
-    logger.error(f"Failed to get Obsidian context: {e}")
-    raise e
+    logger.exception(f"Failed to get Obsidian context: {e}")
+    raise
```

config_manager.py-107-114 (1)
107-114: Chain exceptions with `raise ... from e` to preserve context. Based on learnings, always chain exceptions when re-raising to preserve the original context for debugging.
🔎 Proposed fix
```diff
 except yaml.YAMLError as e:
     raise RuntimeError(
         f"Invalid YAML in configuration file {self.config_yml_path}: {e}"
-    )
+    ) from e
 except Exception as e:
     raise RuntimeError(
         f"Failed to load configuration file {self.config_yml_path}: {e}"
-    )
+    ) from e
```

config_manager.py-180-182 (1)
180-182: Use `logging.exception()` for .env update failures.
🔎 Proposed fix
```diff
 except Exception as e:
-    logger.error(f"Failed to update .env file: {e}")
+    logger.exception("Failed to update .env file")
     raise
```

backends/advanced/src/advanced_omi_backend/services/obsidian_service.py-449-451 (1)
449-451: Use `logging.exception()` instead of `logging.error()` for database errors. This ensures the full stack trace is captured for debugging database issues.
🔎 Proposed fix
```diff
 except Exception as e:
-    logger.error(f"Obsidian search failed: {e}")
+    logger.exception("Obsidian search failed")
     raise ObsidianSearchError("database", str(e)) from e
```

backends/advanced/src/advanced_omi_backend/routers/modules/obsidian_routes.py-151-153 (1)
151-153: Chain HTTPException with `from e`.
🔎 Proposed fix
```diff
 except Exception as e:
     logger.exception(f"Failed to start job {job_id}: {e}")
-    raise HTTPException(status_code=500, detail=f"Failed to start job: {e}")
+    raise HTTPException(status_code=500, detail=f"Failed to start job: {e}") from e
```

backends/advanced/src/advanced_omi_backend/services/transcription/__init__.py-129-148 (1)
129-148: Remove DEBUG logging statements before release. These debug logs expose internal response structure details and should be removed or converted to DEBUG level with a guard before the v0.1.0 release.
🔎 Proposed fix
```diff
-    # DEBUG: Log Deepgram response structure
-    if "results" in data and "channels" in data.get("results", {}):
-        channels = data["results"]["channels"]
-        if channels and "alternatives" in channels[0]:
-            alt = channels[0]["alternatives"][0]
-            logger.info(f"DEBUG Registry: Deepgram alternative keys: {list(alt.keys())}")
-
     # Extract normalized shape
     text, words, segments = "", [], []
     extract = (op.get("response", {}) or {}).get("extract") or {}
     if extract:
         text = _dotted_get(data, extract.get("text")) or ""
         words = _dotted_get(data, extract.get("words")) or []
         segments = _dotted_get(data, extract.get("segments")) or []
-
-    # DEBUG: Log what we extracted
-    logger.info(f"DEBUG Registry: Extracted {len(segments)} segments from response")
-    if segments and len(segments) > 0:
-        logger.info(f"DEBUG Registry: First segment keys: {list(segments[0].keys()) if isinstance(segments[0], dict) else 'not a dict'}")
-        logger.info(f"DEBUG Registry: First segment: {segments[0]}")
```
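For context, `_dotted_get` (named "DOTTED GET" in the commit list) reads a nested value from the provider response by a dotted path string. A plausible sketch of such a helper, assuming dict/list traversal; the real implementation in the transcription registry may differ:

```python
def dotted_get(data, path):
    """Resolve a dotted path like 'results.channels.0.alternatives.0.transcript'
    against nested dicts/lists; return None if any step is missing."""
    if not path:
        return None
    current = data
    for part in str(path).split("."):
        if isinstance(current, dict):
            current = current.get(part)
        elif isinstance(current, list) and part.isdigit() and int(part) < len(current):
            current = current[int(part)]
        else:
            return None
        if current is None:
            return None
    return current
```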
config_manager.py-132-134 (1)
132-134: Use `logging.exception()` instead of `logging.error()` before re-raising. This captures the full stack trace for debugging save failures.
🔎 Proposed fix
```diff
 except Exception as e:
-    logger.error(f"Failed to save config.yml: {e}")
+    logger.exception("Failed to save config.yml")
     raise
```

backends/advanced/src/advanced_omi_backend/services/obsidian_service.py-406-408 (1)
406-408: Use `logging.exception()` instead of `logging.error()` to include stack trace. Based on learnings, prefer `logging.exception()` inside except blocks to automatically log the full stack trace for better debuggability.
🔎 Proposed fix
```diff
 except Exception as exc:
-    logger.error("Obsidian search embedding failed: %s", exc)
+    logger.exception("Obsidian search embedding failed")
     raise ObsidianSearchError("embedding", str(exc)) from exc
```

backends/advanced/src/advanced_omi_backend/routers/modules/obsidian_routes.py-165-166 (1)
165-166: Chain HTTPException for NoSuchJobError.
🔎 Proposed fix
```diff
 except NoSuchJobError:
-    raise HTTPException(status_code=404, detail="Job not found")
+    raise HTTPException(status_code=404, detail="Job not found") from None
```

Using `from None` here explicitly suppresses the original exception since NoSuchJobError is expected and doesn't need chaining.
88-93: Chain HTTPExceptions with `from e` for proper exception context.
🔎 Proposed fix
```diff
 except zipfile.BadZipFile as e:
     logger.exception(f"Invalid zip file: {e}")
-    raise HTTPException(status_code=400, detail=f"Invalid zip file: {e}")
+    raise HTTPException(status_code=400, detail=f"Invalid zip file: {e}") from e
 except ZipExtractionError as e:
-    logger.error(f"Error extracting zip file: {e}")
-    raise HTTPException(status_code=500, detail=f"Failed to extract zip file: {e}")
+    logger.exception("Error extracting zip file")
+    raise HTTPException(status_code=500, detail=f"Failed to extract zip file: {e}") from e
```

backends/advanced/src/advanced_omi_backend/routers/modules/obsidian_routes.py-46-48 (1)
46-48: Use `logging.exception()` and chain the HTTPException. Based on learnings, use `logging.exception()` for automatic stack trace and chain exceptions properly.
🔎 Proposed fix
```diff
 except Exception as e:
-    logger.error(f"Ingestion failed: {e}")
-    raise HTTPException(status_code=500, detail=str(e))
+    logger.exception("Ingestion failed")
+    raise HTTPException(status_code=500, detail=str(e)) from e
```

backends/advanced/src/advanced_omi_backend/routers/modules/obsidian_routes.py-221-223 (1)
221-223: Replace bare `except` with specific exception type. Bare `except` catches everything including `SystemExit` and `KeyboardInterrupt`. This is flagged as E722 by static analysis.
🔎 Proposed fix
```diff
 try:
     job_data = json.loads(pending_data)
     return {
         "job_id": job_id,
         "status": "ready",
         "total": job_data.get("total", 0),
         "processed": 0,
         "percent": 0,
         "errors": [],
         "vault_path": job_data.get("vault_path")
     }
-except:
-    raise HTTPException(status_code=500, detail="Failed to get job status")
+except (json.JSONDecodeError, TypeError, KeyError) as e:
+    logger.exception("Failed to parse pending job data")
+    raise HTTPException(status_code=500, detail="Failed to get job status") from e
 raise HTTPException(status_code=404, detail="Job not found")
```
🟡 Minor comments (10)
backends/advanced/webui/src/components/MemorySettings.tsx-224-224 (1)
224-224: Disabling buttons on any error creates a UX deadlock. When an error occurs (either from loading or validation), the user cannot click Validate or Save to retry because `!!error` disables the buttons. Since `validateConfig` and `saveConfig` clear the error at the start of their execution, the user has no way to trigger that clearing.
- Config loads successfully
- User edits the YAML
- User clicks Validate → validation fails → error is set
- User fixes the YAML but cannot click Validate again (button disabled)
- User must click Reset/Reload just to clear the error
Consider clearing the error when the user edits the config, or removing the error-based disabling:
🔎 Option 1: Clear error on config change
```diff
 <textarea
   value={configYaml}
-  onChange={(e) => setConfigYaml(e.target.value)}
+  onChange={(e) => {
+    setConfigYaml(e.target.value)
+    if (error) setError('')
+  }}
   placeholder="Loading configuration..."
```

🔎 Option 2: Remove error from disabled condition
```diff
 <button
   onClick={validateConfig}
-  disabled={validating || !configYaml.trim() || !!error}
+  disabled={validating || !configYaml.trim()}
   ...
 >
 <button
   onClick={saveConfig}
-  disabled={saving || !configYaml.trim() || !!error}
+  disabled={saving || !configYaml.trim()}
   ...
 >
```

Also applies to: 242-242
backends/advanced/pyproject.toml-30-32 (1)
30-32: Upgrade google-auth-oauthlib to address transitive security vulnerability. The google-api-python-client and google-auth-httplib2 versions are valid. However, google-auth-oauthlib 1.0.0 has a transitive security risk through its dependency on oauthlib, which is affected by CVE-2022-36087 (DoS vulnerability). Upgrade google-auth-oauthlib to at least version 1.2.3. Additionally, consider adding upper bounds (e.g., `<3.0.0`) to all three packages to prevent unexpected breaking changes from major version updates.
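A hedged sketch of what bounded constraints might look like, assuming PEP 621-style dependency declarations; the exact table name depends on how this pyproject.toml is structured, and the version numbers are illustrative:

```toml
[project]
dependencies = [
    # Lower bounds pick up the security fix; upper bounds guard against
    # breaking major releases. Versions here are illustrative.
    "google-api-python-client>=2.0.0,<3.0.0",
    "google-auth-httplib2>=0.2.0,<1.0.0",
    "google-auth-oauthlib>=1.2.3,<2.0.0",
]
```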
59-59: Use a pinned PyPI version instead of tracking the `main` branch. The dependency currently tracks `git rev = "main"`, which lacks stability guarantees. Since mem0ai has stable releases available on PyPI (latest: 1.0.1), consider switching to a pinned version constraint like `mem0ai = "1.0.1"` or `mem0ai = ">=0.1.0,<2.0.0"` for better reproducibility and stability.

README.md-7-7 (1)
7-7: Wrap bare URL in angle brackets for markdown compliance. The bare URL `http://localhost:5173` should be formatted as `<http://localhost:5173>` to comply with markdown linting rules.
🔎 Proposed fix
```diff
-Run setup wizard, start services, access at http://localhost:5173
+Run setup wizard, start services, access at <http://localhost:5173>
```

tests/resources/audio_keywords.robot-98-99 (1)
98-99: Invalid Robot Framework syntax for dictionary method call. The `${result.get('error_message', 'Unknown error')}` syntax won't work as expected. Robot Framework's `${}` variable syntax doesn't support Python method calls directly. Use inline Python evaluation with `${{ }}` syntax instead.
🔎 Proposed fix
```diff
 # Verify memory extraction succeeded
 Should Be True    ${result}[success]
-...    Memory extraction failed: ${result.get('error_message', 'Unknown error')}
+...    Memory extraction failed: ${{ $result.get('error_message', 'Unknown error') }}
```

Committable suggestion skipped: line range outside the PR's diff.
backends/advanced/src/advanced_omi_backend/workers/audio_stream_parakeet_worker.py-30-36 (1)
30-36: Remove unused variable; clarify that missing provider configuration will cause failure. The `service_url` variable (line 32) is retrieved but never used. Additionally, the warning messages are misleading: they suggest graceful degradation, but `ParakeetStreamConsumer.__init__()` will actually raise a `RuntimeError` if the transcription provider cannot be loaded from config.yml. Either remove the warning messages entirely or clarify that missing configuration causes immediate failure.
Recommended fix
```diff
-    service_url = os.getenv("PARAKEET_ASR_URL")
-    if not service_url:
-        logger.warning("PARAKEET_ASR_URL environment variable not set")
-        logger.warning("Ensure config.yml has a default 'stt' model configured for Parakeet")
-        logger.warning("Audio transcription will use alternative providers if configured in config.yml")
+    # Provider configuration is validated when ParakeetStreamConsumer is created
```

wizard.py-451-462 (1)
451-462: Redundant re-check of obsidian_enabled overwrites user's selection. Lines 453-462 re-check for Obsidian configuration by reading from `.env` after `run_service_setup` completes. This overwrites the `obsidian_enabled` variable that was set by user input earlier (line 411). If the `.env` wasn't written correctly, this could incorrectly report Obsidian as disabled even when the user enabled it.
🔎 Proposed fix - preserve user selection or clarify intent
If the intent is to verify configuration was written correctly:
```diff
-    # Check for Obsidian/Neo4j configuration
-    obsidian_enabled = False
+    # Verify Obsidian/Neo4j configuration was written correctly
+    obsidian_configured = False
     if 'advanced' in selected_services and 'advanced' not in failed_services:
         backend_env_path = Path('backends/advanced/.env')
         if backend_env_path.exists():
             neo4j_host = read_env_value(str(backend_env_path), 'NEO4J_HOST')
             obsidian_enabled_flag = read_env_value(str(backend_env_path), 'OBSIDIAN_ENABLED')
             if neo4j_host and not is_placeholder(neo4j_host, 'your-neo4j-host-here', 'your_neo4j_host_here'):
-                obsidian_enabled = True
+                obsidian_configured = True
             elif obsidian_enabled_flag == 'true':
-                obsidian_enabled = True
+                obsidian_configured = True
+
+    # Use the verified configuration status for messaging
+    if obsidian_configured:
```

Committable suggestion skipped: line range outside the PR's diff.
backends/advanced/src/advanced_omi_backend/routers/modules/health_routes.py-51-52 (1)
51-52: Potential `AttributeError` when `_vs_def` is `None`. If `_vs_def` is `None`, accessing `_vs_def.model_params.get(...)` will raise an `AttributeError`. The current ternary only guards the outer access but still evaluates `_vs_def.model_params` when `_vs_def` exists but `model_params` might be `None`.
🔎 Proposed fix
```diff
-QDRANT_BASE_URL = (_vs_def.model_params.get("host") if _vs_def else "qdrant")
-QDRANT_PORT = str(_vs_def.model_params.get("port") if _vs_def else "6333")
+QDRANT_BASE_URL = (_vs_def.model_params.get("host") if _vs_def and _vs_def.model_params else "qdrant")
+QDRANT_PORT = str(_vs_def.model_params.get("port") if _vs_def and _vs_def.model_params else "6333")
```

backends/advanced/src/advanced_omi_backend/services/obsidian_service.py-283-285 (1)
283-285: Swallowing exceptions silently loses error details. When embedding generation fails, the exception is logged but an empty list is returned. This silently hides failures from callers who might want to know why no chunks were produced. Consider either re-raising or returning an error indicator.
Based on learnings, when logging exceptions inside except blocks, `logging.exception()` is appropriate (which is used here), but returning an empty list makes debugging harder for upstream callers.

config_manager.py-362-366 (1)
362-366: Singleton getter docstring is misleading about `service_path` behavior. The docstring states the function "uses cached instance or creates new one" when `service_path` is `None`, but does not document that passing a non-None `service_path` replaces the existing singleton. While the implementation correctly reflects this logic (line 364), the docstring should clarify that any non-None `service_path` will replace the cached instance. Note: this function is not currently called elsewhere in the codebase.
```python
@router.post("/upload_audio_from_gdrive")
async def upload_audio_from_drive_folder(
    gdrive_folder_id: str = Query(..., description="Google Drive Folder ID containing audio files (e.g., the string after /folders/ in the URL)"),
    current_user: User = Depends(current_superuser),
    device_name: str = Query(default="upload"),
    auto_generate_client: bool = Query(default=True),
):
    try:
        files = await download_audio_files_from_drive(gdrive_folder_id)
    except AudioValidationError as e:
        raise HTTPException(status_code=400, detail=str(e))

    return await audio_controller.upload_and_process_audio_files(
        current_user, files, device_name, auto_generate_client, source="gdrive"
    )
```
Preserve exception context when re-raising.
The new Google Drive upload endpoint is well-implemented, but the error handling should preserve the original exception context when re-raising.
Based on learnings, always chain exceptions with `raise ... from e` to preserve the original context for better debuggability.
🔎 Proposed fix
```diff
 try:
     files = await download_audio_files_from_drive(gdrive_folder_id)
 except AudioValidationError as e:
-    raise HTTPException(status_code=400, detail=str(e))
+    raise HTTPException(status_code=400, detail=str(e)) from e
```
+ raise HTTPException(status_code=400, detail=str(e)) from e🧰 Tools
🪛 Ruff (0.14.10)
23-23: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
30-30: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling
(B904)
🤖 Prompt for AI Agents
In backends/advanced/src/advanced_omi_backend/routers/modules/audio_routes.py
around lines 20 to 34, the except block that converts AudioValidationError into
an HTTPException currently discards the original exception context; change the
re-raise to chain the original exception by using "raise
HTTPException(status_code=400, detail=str(e)) from e" so the original traceback
is preserved for debugging and logging.
```python
# NOTE: Speech was already validated during streaming, so we skip this check
# to avoid false negatives from aggregated results lacking proper word-level data
logger.info("✅ Conversation has meaningful speech (validated during streaming), proceeding with post-processing")
```
🧩 Analysis chain
🏁 Script executed:
```bash
#!/bin/bash
# Find all Python files to understand the codebase structure
echo "=== File structure ==="
find . -name "*.py" -type f | grep -E "(speech|detection|conversation)" | head -20

# Search for streaming validation and speech activity tracking
echo "=== Searching for streaming speech validation logic ==="
rg -n "track_speech_activity|is_meaningful_speech" --type=py -A 3 -B 1

# Search for speech detection thresholds and configuration
echo "=== Searching for speech detection thresholds ==="
rg -n "SPEECH.*THRESHOLD|MIN.*WORDS|MIN.*DURATION|MEANINGFUL|NOISE" --type=py -B 2 -A 2

# Examine the conversation_jobs.py file around the modified lines
echo "=== Checking conversation_jobs.py context ==="
rg -n "final validation|streaming.*validation|meaningful_speech" backends/advanced/src/advanced_omi_backend/workers/conversation_jobs.py -B 3 -A 3
```

Repository: chronicler-ai/chronicle
Length of output: 12456
🏁 Script executed:
```bash
#!/bin/bash
# Read is_meaningful_speech implementation
echo "=== is_meaningful_speech function ==="
rg -n "def is_meaningful_speech" backends/advanced/src/advanced_omi_backend/utils/conversation_utils.py -A 25

# Read track_speech_activity implementation
echo "=== track_speech_activity function ==="
rg -n "async def track_speech_activity" backends/advanced/src/advanced_omi_backend/utils/conversation_utils.py -A 30

# Check what happens at end of conversation - check lines around 430
echo "=== Lines 420-450 in conversation_jobs.py ==="
sed -n '420,450p' backends/advanced/src/advanced_omi_backend/workers/conversation_jobs.py

# Check if there's any gating before post-processing
echo "=== Check post-processing gating logic ==="
rg -n "post.processing|transcription.*pipeline|process_conversation_async" backends/advanced/src/advanced_omi_backend/workers/conversation_jobs.py -B 3 -A 3
```

Repository: chronicler-ai/chronicle
Length of output: 4808
🏁 Script executed:
```bash
#!/bin/bash
# Check the main conversation loop for any meaningful speech gating
echo "=== Check for early exit conditions in conversation loop ==="
sed -n '300,380p' backends/advanced/src/advanced_omi_backend/workers/conversation_jobs.py

# Look for where combined results are checked
echo "=== Check if combined results are validated before post-processing ==="
rg -n "combined.*meaningful|if.*combined|check.*combined" backends/advanced/src/advanced_omi_backend/workers/conversation_jobs.py -B 2 -A 2

# Check full flow from end of loop to post-processing trigger
echo "=== Check what triggers post-processing after the loop ==="
sed -n '405,475p' backends/advanced/src/advanced_omi_backend/workers/conversation_jobs.py
```

Repository: chronicler-ai/chronicle
Length of output: 6767
Add meaningful speech validation before post-processing.
The final validation check was removed, and the streaming validation (`track_speech_activity`) is insufficient. It only detects new word-count increases, not whether the conversation meets meaningful speech thresholds.
For example, a 1-word utterance would trigger "new speech detected," but the conversation would still proceed to the expensive post-processing pipeline.
The `is_meaningful_speech()` function exists (imported but unused) with proper thresholds:
- SPEECH_DETECTION_MIN_WORDS: 10
- SPEECH_DETECTION_MIN_DURATION: 10.0
- SPEECH_DETECTION_MIN_CONFIDENCE: 0.7
Add a check before post-processing (around line 432) to validate that the accumulated speech meets these thresholds, or ensure `track_speech_activity()` enforces meaningful speech gates during the streaming loop; a sketch follows below.
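A minimal sketch of the suggested gate, assuming `is_meaningful_speech()` takes the aggregated segment data and returns a boolean honoring the thresholds above; its actual signature in conversation_utils.py may differ:

```python
# Hypothetical gate before the expensive post-processing pipeline.
# Assumes `segments` is the accumulated transcript data and that
# is_meaningful_speech() applies the MIN_WORDS / MIN_DURATION /
# MIN_CONFIDENCE thresholds internally.
if not is_meaningful_speech(segments):
    logger.info(
        "Conversation %s below meaningful-speech thresholds; skipping post-processing",
        conversation_id,
    )
    await mark_conversation_deleted(conversation_id)  # or simply return early
    return {"success": False, "error": "No meaningful speech"}

logger.info("✅ Conversation has meaningful speech, proceeding with post-processing")
```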
🎉 Robot Framework Test Results
Status: ✅ All tests passed!
📊 View Reports: GitHub Pages (live reports) and downloadable artifacts.
* Refactor configuration management in wizard and ChronicleSetup
  - Updated wizard.py to read Obsidian/Neo4j configuration from config.yml, enhancing flexibility and error handling.
  - Refactored ChronicleSetup to utilize ConfigManager for loading and verifying config.yml, ensuring a single source of truth.
  - Improved user feedback for missing configuration files and streamlined the setup process for memory and transcription providers.
* Fix string formatting for error message in ChronicleSetup
| Metric | Count |
|---|---|
| ✅ Passed | 90 |
| ❌ Failed | 2 |
| 📊 Total | 92 |
📊 View Reports
GitHub Pages (Live Reports):
Download Artifacts:
- robot-test-reports-html - HTML reports
- robot-test-results-xml - XML output
Summary by CodeRabbit
Release Notes
New Features
Improvements
Documentation