v1.3.4 #242
n4ze3m (Owner) commented on Nov 9, 2024 (edited)
- add Korean localization
- Add Generation Info (Ollama)
- feat: Add max tokens setting for model generations
- feat: Ability to send image without text
- fix: Chrome AI issue (can't get Gemini Nano to run with Page Assist on Chrome v130 on Win11, #227)
- feat: Use selected system prompt if temp system prompt is empty in current chat model settings
- feat: Temporary chat mode (Allow disabling chat history, #193)
- feat: Added Llamafile support (Add Llama File support, #44)
- feat: Multi Ollama API support (beta) ([Feature] Store multiple Ollama instances with individual settings, #184)
add Korean localization
This commit introduces a new feature that displays generation information for each message in the chat. The generation info is displayed in a popover and includes details about the model used, the prompt, and other relevant information. This helps users understand how their messages were generated and troubleshoot any issues that may arise. The generation info is retrieved from the LLM response and is stored in the database alongside other message details. This commit also includes translations for the generation info label in all supported languages.
Adds the `generationInfo` field to the message history output to provide more context about the message's origin. This will be helpful for debugging and understanding the provenance of messages.
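The exact shape of that field isn't specified here; as a rough sketch, a stored message record might look like the following, where every name other than `generationInfo` is an assumption for illustration:

```ts
// Hypothetical message shape; field names other than `generationInfo`
// are assumptions for illustration.
interface ChatMessage {
  id: string
  role: "user" | "assistant"
  content: string
  createdAt: number
  // Raw generation metadata returned by the LLM (e.g. Ollama's
  // eval_count, total_duration, model name), stored with the message.
  generationInfo?: Record<string, unknown>
}
```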
Adds a new setting to control the maximum number of tokens generated by the model. This provides more control over the length of responses and can be useful for limiting the amount of text generated in certain situations.
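For Ollama specifically, a max-tokens cap maps onto the `num_predict` option of its generate API; a minimal sketch of the wiring (the endpoint and option name are Ollama's public API, the surrounding function is assumed):

```ts
// Sketch: pass a user-configured token cap through to Ollama.
// The `maxTokens` setting and this helper are illustrative.
async function generate(prompt: string, maxTokens?: number) {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama3",
      prompt,
      stream: false,
      // Ollama caps the number of generated tokens via options.num_predict.
      options: maxTokens ? { num_predict: maxTokens } : {}
    })
  })
  return (await res.json()).response
}
```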
Adds support for the new AI capabilities in Chrome. This change includes updated logic for checking availability and creating text sessions.
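Chrome's built-in AI surface was experimental and changed shape across releases around v130, so the exact property names below are assumptions; the sketch shows the kind of defensive availability check this change involves:

```ts
// Sketch of defensive feature detection for Chrome's experimental
// built-in AI (Gemini Nano). The API surface varied across Chrome
// versions, so treat these property names as assumptions.
declare const ai: any

async function createNanoSession() {
  if (typeof ai === "undefined" || !ai?.languageModel) {
    throw new Error("Chrome built-in AI is not available in this browser")
  }
  const { available } = await ai.languageModel.capabilities()
  if (available === "no") {
    throw new Error("Gemini Nano is not available on this device")
  }
  // May trigger a model download when available === "after-download".
  return await ai.languageModel.create()
}
```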
… system fallback: Adds support for falling back to the currently selected system prompt in the current model settings when no temporary prompt is set. This allows users to fine-tune their chat experience based on their preferred prompt.
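The fallback itself is a small decision; a sketch with illustrative names:

```ts
// Sketch: prefer the per-chat temporary prompt; otherwise fall back
// to the system prompt selected in the model settings. Names are
// illustrative, not the extension's actual identifiers.
function resolveSystemPrompt(
  tempSystemPrompt: string | undefined,
  selectedSystemPrompt: string | undefined
): string {
  return tempSystemPrompt?.trim() ? tempSystemPrompt : selectedSystemPrompt ?? ""
}
```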
Adds a new "Temporary Chat" mode for quick, non-persistent conversations. The new mode is available in the header bar and will trigger a visually distinct chat experience with a temporary background color. Temporary chats do not save to the chat history and are meant for short, one-off interactions. This feature enhances flexibility and provides a more convenient option for users who need to quickly interact with the AI without committing the conversation to their history.
This was linked to issues on Nov 9, 2024.
…ntent: The code was relying on optional fields like `content` in chat history and chunk objects, which could throw if those fields were missing. This commit adds optional chaining (`?.`) for safer access, preventing crashes and letting the application handle missing fields gracefully.
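The pattern of the fix, sketched against an assumed OpenAI-style streaming chunk shape:

```ts
// Sketch of the defensive-access pattern described above; the chunk
// shape is an assumption modeled on OpenAI-style streaming deltas.
type Chunk = { choices?: Array<{ delta?: { content?: string } }> }

function chunkText(chunk: Chunk | undefined): string {
  // Optional chaining returns undefined instead of throwing when any
  // link in the path is missing; `??` then supplies a safe default.
  return chunk?.choices?.[0]?.delta?.content ?? ""
}
```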
Improves the formatting of image URLs within responses from custom models. This ensures that image URLs are correctly presented in the user interface.
Add support for LlamaFile, a new model provider that allows users to interact with models stored in LlamaFile format. This includes:
- Adding an icon for LlamaFile in the provider selection menu.
- Updating the model provider selection to include LlamaFile.
- Updating the model handling logic to properly identify and process LlamaFile models.
- Updating the API providers list to include LlamaFile.
This enables users to leverage the capabilities of LlamaFile models within the application.
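A llamafile binary serves an OpenAI-compatible HTTP API (on port 8080 by default), so talking to it looks like any OpenAI-style provider; a minimal sketch, with everything beyond the endpoint treated as illustrative:

```ts
// Sketch: chat against a locally running llamafile. The endpoint path
// and default port come from llamafile's OpenAI-compatible server;
// the function and model name are illustrative.
async function llamafileChat(prompt: string) {
  const res = await fetch("http://localhost:8080/v1/chat/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llamafile", // placeholder; the local server hosts one model
      messages: [{ role: "user", content: prompt }]
    })
  })
  const data = await res.json()
  return data.choices[0].message.content
}
```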
Adds support for using Ollama 2 as a model provider. This includes:
- Adding Ollama 2 to the list of supported providers in the UI.
- Updating the model identification logic to properly handle Ollama 2 models.
- Modifying the model loading and runtime configuration to work with Ollama 2.
- Implementing Ollama 2 specific functionality in the embedding and chat models.
This change allows users to leverage the capabilities of Ollama 2 for both embeddings and conversational AI tasks.
Expanded the list of providers for which models are fetched dynamically to include Ollama and Llamafile, removing the need for manual model addition in the user interface for these providers. This simplifies the user experience and ensures users always have access to the latest models without manual intervention.
This commit introduces a more efficient approach to fetching ollama2 models, with proper filtering and handling of providers. This makes the model loading process more robust and reliable.
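Dynamic listing for Ollama can use its public `/api/tags` endpoint; a sketch (the endpoint is real Ollama API, the helper and simplified return shape are assumptions):

```ts
// Sketch: fetch the locally installed Ollama models so the UI can
// list them without manual entry. /api/tags is Ollama's public
// model-listing endpoint; the response shape below is simplified.
async function fetchOllamaModels(baseUrl = "http://localhost:11434") {
  const res = await fetch(`${baseUrl}/api/tags`)
  if (!res.ok) throw new Error(`Ollama returned ${res.status}`)
  const data: { models: Array<{ name: string }> } = await res.json()
  return data.models.map((m) => m.name)
}
```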
This commit addresses an issue where temporary chat history was not being updated correctly when using voice input. The `setHistoryId` function is now called within the `saveMessageOnError` function to ensure that the history ID is set correctly when a message is saved.
Adds error handling to the `updateMessageByIndex` function to prevent the temporary chat from breaking when an error occurs during the update process. This ensures a more robust and reliable experience for users.
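The shape of that guard, sketched with assumed names:

```ts
// Sketch: wrap the in-place update so a failure degrades gracefully
// instead of breaking the temporary chat. Names are illustrative.
function updateMessageByIndex(
  messages: Array<{ content: string }>,
  index: number,
  content: string
) {
  try {
    if (!messages[index]) return // index out of range: nothing to update
    messages[index].content = content
  } catch (e) {
    // Log rather than crashing the chat UI mid-stream.
    console.error("Failed to update message at index", index, e)
  }
}
```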
Add Korean language support to the application. This includes translating the necessary UI elements and adding Korean to the supported language list.