
Releases: smkrv/ha-text-ai

v2.3.0 Pre-release

30 Dec 14:23


What's New in 2.3.0

🚀 New Features

  • Structured JSON Output: New structured_output and json_schema parameters for the ask_question service
  • Google Gemini Support: Full integration with Google Gemini API via google-genai library
  • Enhanced Service Response: ask_question service now returns structured response data including tokens used, model info, and timestamps
  • Edit Integration Settings: You can now change provider, API key, endpoint, and model for existing integrations without recreating them
    • Two-step options flow: first select provider, then configure connection settings
    • Automatic default endpoint and model when switching providers
    • Integration auto-reloads after saving changes
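As an illustration, the new parameters above might be combined in a script step like the following sketch. The parameter names structured_output and json_schema come from this release; the instance field, the schema contents, and the response_variable usage are assumptions based on the integration's other service examples and Home Assistant's standard service-response mechanism.

```yaml
# Hypothetical ask_question call requesting structured JSON output.
# instance, the schema shape, and response_variable are illustrative.
service: ha_text_ai.ask_question
data:
  instance: sensor.ha_text_ai_my_assistant
  question: "List every room where a window is open."
  structured_output: true
  json_schema: >-
    {"type": "object",
     "properties": {"rooms": {"type": "array", "items": {"type": "string"}}},
     "required": ["rooms"]}
response_variable: ai_result
```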

🐛 Bug Fixes

  • Fixed config flow error: Resolved '500 Internal Server Error' when editing existing integrations (HA 2024.1+ compatibility)
  • Fixed OptionsFlowHandler initialization for modern Home Assistant versions

📦 Dependencies

  • Added google-genai>=1.16.0 for Gemini support

⚠️ Pre-release Notice

This is a pre-release version for testing. Please report any issues on GitHub.

v2.2.0

21 Dec 21:08


Release v2.2.0: Configurable API Timeout

🚀 New Feature

Configurable API Timeout (#8)

  • Custom Timeout Setting: Added new api_timeout configuration option allowing users to set API request timeout from 5 to 600 seconds (default: 30 seconds)
  • Local LLM Support: Enables use of slower local LLM instances like Ollama that require more than 30 seconds to generate responses
  • Full UI Integration: Timeout can be configured during initial setup and modified via Options Flow without reconfiguration

🔧 Technical Details

Configuration Options

Parameter     Type      Range            Default
api_timeout   integer   5–600 seconds    30 seconds

Files Modified

  • const.py - Added timeout constants (CONF_API_TIMEOUT, DEFAULT_API_TIMEOUT, MIN_API_TIMEOUT, MAX_API_TIMEOUT)
  • config_flow.py - Added timeout field to provider form and options flow
  • api_client.py - Parameterized timeout in API client
  • coordinator.py - Use configurable timeout in message processing
  • __init__.py - Read and pass timeout from configuration

Breaking Changes

  • None - this release maintains full backward compatibility
  • Existing configurations will use the default 30-second timeout

🌐 Internationalization

Updated Translations

  • 8 Language Support: Added translations for new timeout setting in English, Russian, German, Spanish, Italian, Hindi, Serbian, and Chinese

📋 Usage Example

# In Home Assistant configuration or via UI
api_timeout: 120  # 2 minutes for slower local models

🙏 Community

Thanks to @Skyview79 for the feature request (#8)!


Full Changelog: v2.1.9...v2.2.0

v2.1.9

02 Sep 20:32


Release v2.1.9: Response Variables Support and Production Audit

🚀 Major Features

Response Variables Support

  • Direct AI Response Access: Eliminates the 255-character sensor limitation by returning response data directly from service calls
  • Enhanced Service Architecture: Improved async processing with metadata-rich responses including tokens used, model information, and timestamps
  • Backward Compatibility: Existing sensor-based workflows continue to work seamlessly

Enhanced Service Framework

  • Metadata-Rich Responses: Services now return comprehensive data including response text, token usage, model used, processing time, and success indicators
  • Improved Error Handling: Better exception handling with detailed error messages and proper error propagation

🔒 Security & Performance

Security Enhancements

  • API Key Protection: Enhanced API key handling with improved sanitization in logs
  • Secure Logging: Removed sensitive data from debug logs while maintaining useful debugging information
  • Input Validation: Enhanced parameter validation with type checking

Performance Optimizations

  • Context Managers: Added proper resource management with async context managers
  • Atomic File Operations: Implemented atomic file writes with backup and corruption handling
  • Memory Management: Improved memory usage monitoring and cleanup
  • Async Improvements: Better async/await patterns with proper semaphore usage

🌐 Internationalization

Updated Translations

  • 8 Language Support: Enhanced localization for English, Russian, German, Spanish, Italian, Hindi, Serbian, and Chinese
  • Response Variables Documentation: Updated all language files with new functionality descriptions
  • Consistent Terminology: Standardized technical terms across all translations

✅ Compliance & Quality

Home Assistant Compliance

  • Hassfest Validation: Fixed all hassfest validation errors in services.yaml
  • Service Schema Compliance: Added required target configurations for all services
  • Standards Adherence: Ensured full compliance with Home Assistant integration standards

Production Readiness

  • Comprehensive Code Audit: Full security and performance review of all components
  • Quality Improvements: Enhanced code quality with better error handling, logging, and resource management
  • Testing: Improved reliability through better exception handling and edge case coverage

📋 Technical Details

Breaking Changes

  • None - this release maintains full backward compatibility

New Service Response Format

Services now return structured data:

response_text: "AI response content"
tokens_used: 150
model_used: "gpt-4"
processing_time: 2.3
success: true
timestamp: "2025-01-09T17:30:00Z"

Enhanced get_history service now supports advanced filtering and sorting options:

service: ha_text_ai.get_history
data:
  instance: sensor.ha_text_ai_my_assistant
  limit: 10
  filter_model: "gemini-2.0-flash"
  start_date: "2025-01-01T00:00:00Z"
  include_metadata: true
  sort_order: "newest"

Migration Guide

  • Existing Users: No action required - existing automations continue to work
  • New Features: Use response variables in automations to access AI responses directly
  • Documentation: Updated README with comprehensive examples and migration guide
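To make the migration concrete, an automation might consume the service response directly instead of reading a sensor, bypassing the 255-character limit. This is a sketch: the field names match the response format shown above, while the trigger, instance, and notify target are placeholders.

```yaml
# Illustrative automation using the service response directly.
# response_text comes from the documented response format;
# the notify target and instance name are examples.
automation:
  - alias: "Morning AI summary"
    trigger:
      - platform: time
        at: "07:00:00"
    action:
      - service: ha_text_ai.ask_question
        data:
          instance: sensor.ha_text_ai_my_assistant
          question: "Summarize today's weather in two sentences."
        response_variable: ai_reply
      - service: notify.mobile_app_phone
        data:
          message: "{{ ai_reply.response_text }}"
```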

🔧 Developer Notes

API Changes

  • Enhanced service return values with metadata
  • Improved error response structure
  • Better async resource management

Dependencies

  • Updated minimum requirements for better security and performance
  • All dependencies verified for compatibility

Full Changelog: v2.1.8...v2.1.9

v2.1.8

01 Sep 22:32


Release v2.1.8: Response Variables Support and Production Audit

Identical to the v2.1.9 release notes above.

Full Changelog: v2.1.7...v2.1.8

v2.1.7

01 Sep 20:47


v2.1.7 Release Notes:

This release was accidentally deleted and has now been restored. Apologies for any inconvenience.

New Features

  • Added Google Gemini support

Google Gemini Integration Update

  • Fixed issue #6 by upgrading to google-genai 1.16.0
  • Improved endpoint configuration with abstract API and custom endpoint option
  • Refactored internal Gemini integration classes for better stability
  • Thanks to DJAMIRSAM for testing and bug reports on the Gemini integration

Bug Fixes

  • Google Gemini Integration: Fixed compatibility issues with Google Gemini API (#6)

For more details on the integration, check out the discussion on the Home Assistant Community forum

🚀 Google Gemini Integration

  • Support for Google's Gemini
  • Enhanced natural language processing for Home Assistant automations
  • Optimized for smart home control and information queries

🔧 Technical Implementation

  • Seamless integration with existing HA Text AI conversation flows

Getting Started with Gemini

To configure the new Gemini provider:

  1. Update to v2.1.7 directly or through HACS
  2. Configure your Google Gemini API key in the integration settings
  3. Select Gemini as your preferred model in HA Text AI configuration
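Once the provider is configured, a question can be sent as usual. The snippet below is a sketch: the instance name is a placeholder, and the service parameters are assumed from the integration's other examples.

```yaml
# Hypothetical example: querying the configured Gemini provider.
service: ha_text_ai.ask_question
data:
  instance: sensor.ha_text_ai_gemini
  question: "Which lights are currently on?"
```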

What's Changed

  • Add Gemini API provider support to HA Text AI integration

Full Changelog: v2.1.6...v2.1.7

v2.1.6

19 May 22:52


v2.1.6 Release Notes:

New Features

  • Added Google Gemini support, thanks to @Azzedde
  • Enhanced AI capabilities through Google's latest language model

Bug Fixes

  • Google Gemini Integration: Fixed compatibility issues with Google Gemini API by correcting field name format and improving message handling to meet API requirements. This should resolve error responses when using Gemini models. (#6)

For more details on the integration, check out the discussion on the Home Assistant Community forum

🚀 Google Gemini Integration

  • Support for Google's Gemini
  • Enhanced natural language processing for Home Assistant automations
  • Optimized for smart home control and information queries

🔧 Technical Implementation

  • Seamless integration with existing HA Text AI conversation flows

Getting Started with Gemini

To configure the new Gemini provider:

  1. Update to v2.1.6 directly or through HACS
  2. Configure your Google Gemini API key in the integration settings
  3. Select Gemini as your preferred model in HA Text AI configuration

What's Changed

  • Add Gemini API provider support to HA Text AI integration

Full Changelog: v2.1.5...v2.1.6

v2.1.5

19 May 22:23


v2.1.5 Release Notes:

New Features

  • Added Google Gemini support, thanks to @Azzedde
  • Enhanced AI capabilities through Google's latest language model

Bug Fixes

  • Google Gemini Integration: Fixed compatibility issues with Google Gemini API by correcting field name format from camelCase to snake_case and improving message handling to meet API requirements. This resolves error responses when using Gemini models. (#6)

For more details on the integration, check out the discussion on the Home Assistant Community forum

🚀 Google Gemini Integration

  • Support for Google's Gemini
  • Enhanced natural language processing for Home Assistant automations
  • Optimized for smart home control and information queries

🔧 Technical Implementation

  • Seamless integration with existing HA Text AI conversation flows

Getting Started with Gemini

To configure the new Gemini provider:

  1. Update to v2.1.5 directly or through HACS
  2. Configure your Google Gemini API key in the integration settings
  3. Select Gemini as your preferred model in HA Text AI configuration

What's Changed

  • Add Gemini API provider support to HA Text AI integration

Full Changelog: v2.1.4...v2.1.5

v2.1.4

19 May 20:23


v2.1.4 Release Notes:

New Features

  • Added Google Gemini support, thanks to @Azzedde
  • Enhanced AI capabilities through Google's latest language model

Bug Fixes

  • Maintains full functional compatibility with v2.1.2

Previous Version Features (v2.1.2)

  • Resolved UI-level token limit calculation bug
  • Maintained full functional compatibility with v2.1.1

For more details on the integration, check out the discussion on the Home Assistant Community forum

🚀 Google Gemini Integration

  • Support for Google's Gemini
  • Enhanced natural language processing for Home Assistant automations
  • Optimized for smart home control and information queries

🔧 Technical Implementation

  • Seamless integration with existing HA Text AI conversation flows

Getting Started with Gemini

To configure the new Gemini provider:

  1. Update to v2.1.4 directly or through HACS
  2. Configure your Google Gemini API key in the integration settings
  3. Select Gemini as your preferred model in HA Text AI configuration

What's Changed

  • Add Gemini API provider support to HA Text AI integration by @Azzedde in #5

Full Changelog: v2.1.2...v2.1.4

v2.1.2

29 Jan 15:15


v2.1.2 Release Notes:

Bug Fixes

  • Resolved UI-level token limit calculation bug
  • Maintains full functional compatibility with v2.1.1

Previous Version Features (v2.1.1)

For more details on the integration, check out the discussion on the Home Assistant Community forum

🔄 Major Architectural Changes

  • Complete refactoring of token handling mechanism
  • Elimination of custom token calculation approach
  • Direct max_tokens parameter passing to LLM APIs

🎯 Key Technical Improvements

  • Enhanced cross-provider compatibility
  • Expanded support for large-context language models
  • Robust and predictable token limit management
  • Significant codebase simplification
  • Full DeepSeek provider integration

Provider Updates

DeepSeek — NEW Integration

DeepSeek is a cutting-edge AI provider specializing in advanced language models optimized for both conversational and reasoning tasks. This integration brings:

  • High-performance model inference
  • Cost-effective API endpoints
  • Enterprise-grade reliability
  • Flexible deployment options

Full Changelog: v2.1.1...v2.1.2

v2.1.1

28 Jan 13:20


v2.1.1 - Token Handling & DeepSeek Provider Integration

For more details on the integration, check out the discussion on the Home Assistant Community forum

🔄 Major Architectural Changes

  • Complete refactoring of token handling mechanism
  • Elimination of custom token calculation approach
  • Direct max_tokens parameter passing to LLM APIs

🎯 Key Technical Improvements

  • Enhanced cross-provider compatibility
  • Expanded support for large-context language models
  • Robust and predictable token limit management
  • Significant codebase simplification
  • Full DeepSeek provider integration
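As a sketch of what direct pass-through means in practice, a caller can now set max_tokens on a request and have it forwarded unchanged to the provider's API rather than recalculated by the integration. Service and parameter names below are assumed from the integration's other examples.

```yaml
# Illustrative call: max_tokens is passed directly to the LLM API.
service: ha_text_ai.ask_question
data:
  instance: sensor.ha_text_ai_my_assistant
  question: "Explain my heating schedule in detail."
  max_tokens: 2048  # forwarded as-is to the provider
```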

🙏 Community Acknowledgments

Heartfelt gratitude to @estiens for identifying token handling complexities and providing comprehensive feedback that drove these critical improvements (#1).

Provider Updates

DeepSeek — NEW Integration

DeepSeek is a cutting-edge AI provider specializing in advanced language models optimized for both conversational and reasoning tasks. This integration brings:

  • High-performance model inference
  • Cost-effective API endpoints
  • Enterprise-grade reliability
  • Flexible deployment options

New Model Support

DeepSeek

  • deepseek-chat (DeepSeek-V3) — NEW Model
    A state-of-the-art conversational AI model designed for natural, context-aware dialogues. Features include:

    • Enhanced context retention
    • Multi-turn conversation support
    • Emotion-aware responses
    • Multi-language capabilities
  • deepseek-reasoner (DeepSeek-R1) — NEW Model
    A specialized reasoning engine optimized for:

    • Complex problem-solving
    • Logical inference tasks
    • Structured data analysis
    • Multi-step reasoning workflows

Full Changelog: v2.1.0...v2.1.1