Releases: logancyang/obsidian-copilot
3.0.2
Improvements
- #1775 Switch to the new file when creating files with composer tools. @wenzhengjiang
Bug Fixes
- #1776 Fix url processing with image false triggers @logancyang
- #1770 Fix chat input responsiveness @zeroliu
- #1773 Fix canvas parsing in writeToFile tool @wenzhengjiang
3.0.1
Quick Hotfixes
- Fix a critical bug that stopped `[[note]]` references from working in the free chat mode after introducing the context menu in v3.
- Optimize the replace writer tool.
- Add an MSeeP security badge.
3.0.0
Copilot for Obsidian v3.0.0!
We are thrilled to announce the official release of Copilot for Obsidian v3.0.0! After months of hard work, this major update brings a new era of intelligent assistance to your Obsidian vault, focusing on enhanced AI capabilities, a new search system, and significant user experience improvements.
🏞️ Image Support and Chat Context Menu
Image support and the chat context menu are now available to free users! As long as your model supports vision, you can check the vision box and send images to it.
🔥 Copilot Vault Search v3 - Index-Free & Optional Semantic Search
We've completely reimagined how Copilot finds notes in your vault, making the search feature significantly more intelligent, robust, and efficient.
- Smart Index-Free Search: Search now works out-of-the-box without requiring an index build, eliminating index corruption issues.
- Enhanced Relevance: Copilot leverages keywords from titles, headings, tags, note properties, Obsidian links, co-citations, and parent folders to find relevant notes.
- Optional Semantic Engine: For semantic understanding, you can enable Semantic Search under QA settings, which uses an embedding index as before.
- Memory Efficient: Uses minimal RAM; you can tune the limit under QA settings.
- Privacy First: The search infrastructure remains local; no data leaves your device unless you use an online model provider.
- New QA Settings:
  - The embedding model setting has moved here from the Basic tab.
  - Lexical Search RAM Limit: Control RAM usage for index-free search, allowing optimization for performance or memory constraints.
⌘ Introducing Inline Quick Command
Transform your inline editing workflow with the brand new "Copilot: trigger quick command." This feature replaces the legacy "apply adhoc custom prompt" and allows you to insert quick prompts to edit selected blocks inline, integrating seamlessly with your custom command workflow. Assigning it to a hotkey like `Cmd (Ctrl) + K` is highly recommended!
🚀 Autonomous Agent (Plus Feature)
Experience a new level of AI interaction with the Autonomous Agent. When enabled in Plus settings, your Copilot can now automatically trigger tool calls based on your queries, eliminating the need for explicit `@tool` commands.
- Intelligent Tool Calling: The agent can automatically use tools like vault search, web search, composer and YouTube processing to fulfill your requests.
- Tool Call Banner: See exactly which tools the agent used and their results with expandable banners.
- Configurable Tools: Gain fine-grained control by enabling or disabling specific tools that the agent can call (Local vault search, Web search, Composer operations, YouTube processing) in the Plus settings.
- Max Iterations Control: Adjust the agent's reasoning depth (4-8 iterations) for more complex queries.
- Supported Models: Optimized for `copilot-plus-flash` (Gemini 2.5 models), Claude 4, GPT-4.1, GPT-4.1-mini, and now GPT-5 models. (Note: agent mode performs best with Gemini models, followed by Claude and GPT; performance can vary a lot if you choose other models.)
- Control Remains Yours: For more control, turn the agent toggle off. Vault search and web search are conveniently available as toggle buttons below the chat input.
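To illustrate the difference, here is a hypothetical pair of requests (the query wording is made up, and `@vault` is the explicit tool syntax referenced in earlier releases):

```
# Agent on: a plain question is enough; the agent decides on its own to call vault search
What have I written about spaced repetition, and which notes are most relevant?

# Agent off: invoke the tool explicitly (or use the vault search toggle below the chat input)
@vault What have I written about spaced repetition?
```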
✨ Other Key Improvements
- Tool Execution Banner: Visual feedback when the agent uses tools.
- Better Tool Visibility: Tool toggle buttons in chat input when the agent is off (vault search, web search, composer).
- Improved Settings UI: Dedicated "Agent Accessible Tools" section with clear framing.
- ChatGPT-like Auto-Scroll: Chat messages now auto-scroll when a new user message is posted.
- Image Support: Improved embedded image reading, no longer requiring the "absolute path" setting for same-title disambiguation. Supports markdown-style embedded image links.
- AI Message Regeneration: Fixed issues with AI message regeneration.
- Tool Result Formatting: Enhanced formatting for tool results.
- UI Responsiveness: Better UI responsiveness during tool execution.
- Context Menu: Moved context menu items to a dedicated "Copilot" submenu.
- Model Parameters: Top P, frequency penalty, verbosity, and reasoning effort model parameters are now optional and can be toggled manually.
- Project Mode Context UI: A new progress bar indicates when project context is loading, with status visible via the context status icon.
- Embedding Models: Gemini embedding 001 is added as a built-in embedding model. The embedding model picker is now under the QA tab.
- OpenRouter: Now the top provider in settings.
🙏 Thanks
Huge thanks to all our contributors and users; Copilot for Obsidian is nothing without its community! Please provide feedback if you encounter any issues.
2.9.5
Adding GPT-5 series models as built-in models, fresh out of the oven! They support the new `reasoning_effort` and `verbosity` parameters. To see them, you may have to click "Refresh Builtin Models" under your chat model table in Copilot settings.


You can also add OpenRouter GPT-5 models such as `openai/gpt-5-chat` as a Custom Model with the OpenRouter provider.
This is an unscheduled release to add GPT-5. Copilot v3 is under construction and will be released officially very soon; stay tuned!
2.9.4
Yet another quick release fixing a few bugs: fix the composer canvas codeblock and update copilot-plus-small (it hasn't been stable recently; it should be stable now after a complete reindex).
PRs
- #1621 Exclude copilot folders from indexing by default @logancyang
- #1620 Disallow file types in context @logancyang
- #1619 Fix copilot-plus-small @logancyang
- #1617 Fix composer canvas codeblock @wenzhengjiang
Troubleshoot
- If you find models missing in any model table or dropdown, go to Copilot settings -> Models tab, find "Refresh Built-in Models" and click it. If it doesn't help, please report back!
- For @Believer and @poweruser who are on a preview version: you can now use BRAT to install official versions as well!
2.9.3
Copilot for Obsidian - Release v2.9.3
Another quick one fixing a default model reset issue introduced in v2.9.2.
Fixed a `/` command mistrigger issue; it now requires a preceding space to trigger.
Added a rate limit to our Projects mode file conversion due to heavy load (some users have been repeatedly passing 10k-100k pages of PDFs); right now the limit is set to 50 or 100 MB of non-markdown docs per 3 hours per license key.
PRs
- #1603 Add Projects rate limit UI change @logancyang
- #1602 Update file upload guidelines and rate limit information @logancyang
- #1600 Fix slash trigger @logancyang
- #1599 Fix default model reset @logancyang
Troubleshoot
- If you find models missing in any model table or dropdown, go to Copilot settings -> Models tab, find "Refresh Built-in Models" and click it. If it doesn't help, please report back!
2.9.2
Copilot for Obsidian - Release v2.9.2
A quick patch on top of v2.9.1. You no longer need to manually type `@youtube` to get a transcript; simply include the YouTube URL(s) in your chat message and their transcripts will be available in the context (`@youtube <url>` for the transcript still works; see the example below). Another critical fix is for free users: no more license key check popup if you happen to have autocomplete on.
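For example, either of the messages below now pulls the transcript into the context (the URL is just a placeholder):

```
Summarize the key points of https://www.youtube.com/watch?v=VIDEO_ID
@youtube https://www.youtube.com/watch?v=VIDEO_ID
```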
Small UX improvement from our community contributor: improved message editing. The current chat is also autosaved at every message to avoid data loss in case of an app crash.
Added `(free)` to free modes.
PRs
- #1594 Implement auto youtube tool @logancyang
- #1589 Improved message editing UX by adding Escape key cancellation and removing auto-save on blur @Mathieu2301
- #1593 Fix auto index trigger @logancyang
- #1592 Disable autocomplete by default and prevent license key popup for free user @logancyang
Troubleshoot
- If you find models missing in any model table or dropdown, go to Copilot settings -> Models tab, find "Refresh Built-in Models" and click it. If it doesn't help, please report back!
- For @Believer and @poweruser who are on a preview version: please back up your current `<vault>/.obsidian/plugins/copilot/data.json`, reinstall the plugin, and copy the data.json back to safely migrate to this update.
2.9.1
Copilot for Obsidian - Release v2.9.1
One big change in this release is the migration of Copilot custom commands: they are now saved as notes, the same as custom prompts. We are unifying both into one system. Now you can edit them in Copilot settings under the Commands tab, or directly in the note, to enable them in the right-click menu or via `/` slash commands in chat. Please let us know if you have any issues with this migration!
Other Significant Improvements
- OpenRouter Gemini 2.5 models added as builtin models, available in Projects mode as well! (Please click "Refresh Builtin Models" under the model table if you don't see them)
- Every model is configurable with its own parameters such as temperature, max tokens, top P, frequency penalty. Global params are removed to avoid confusion.
- Projects mode now has a new context UI! It's much easier to set and check the files under a project now!
- Introduced a new Copilot command "Add Selection to Chat Context" that adds the selected text to the chat context menu in Copilot Chat. It's also available in the right-click menu. (If you are familiar with Cursor, you can also assign this command the `cmd + shift + L` shortcut.)
- Files such as PDFs and EPUBs that are converted to markdown in Projects mode are now cached as markdown; find them under `<vault>/.copilot/file-content-cache/`. (Moving them out into the vault makes them indexable by Copilot, but keep in mind it may blow up your index size!)
- The slash command `/` can now be triggered anywhere in the chat input (it used to trigger only when the input was empty), even mid-text!
- Various bug fixes.
PRs
- #1584 Enable model params for copilot-plus-flash @logancyang
- #1580 Update max token default description in setting page @wenzhengjiang
- #1576 Add support for selected text context in chat component @logancyang
- #1575 Implement slash command detection and replacement in ChatInput @logancyang
- #1572 Update file cache to use markdown instead of json @logancyang
- #1571 Update ChatModels and add new OpenRouter models @logancyang
- #1570 Update dependencies and enhance project context modal @logancyang
- #1566 Enhance abort signal in chains @logancyang
- #1562 Support editing all parameters individually for each model @Emt-lin
- #1551 Support project context preview @Emt-lin
- #1549 Merge custom command with custom prompts @zeroliu
- #1581 Composer: fix compose block for empty note @wenzhengjiang
- #1568 Fix word completion triggers @logancyang
- #1560 Remove think tag for insert into note @logancyang
- #1552 Fix: Custom model verification, api key errors @Emt-lin
Troubleshoot
- v2.9.1 includes a custom commands migration. Custom commands that failed the migration can be found under an "unsupported" subfolder in your custom prompt folder. Please review the reason each one failed and update it accordingly to keep it supported.
- If you find models missing in any model table or dropdown, go to Copilot settings -> Models tab, find "Refresh Built-in Models" and click it. If it doesn't help, please report back!
- For @Believer and @poweruser who are on a preview version: please back up your current `<vault>/.obsidian/plugins/copilot/data.json`, reinstall the plugin, and copy the data.json back to safely migrate to this update.
2.9.0
Creating the AI Environment for Thinkers and Writers
Massive update to Copilot Plus!!🔥🔥🔥
Announcing our "3 milestones" (previously in believer-exclusive preview) in the brand new v2.9.0:
Projects mode (alpha)
A new Plus mode where you can define a combo of your custom instruction, model, parameters and context as individual workspaces, powered by models with a 1M-token context window and context caching.
This is different from `@vault`; you can ask much more abstract questions here, such as "find common patterns" or "what are the most important insights".
Supports 20+ file types including PDF, EPUB, PPTX, DOCX, CSV, and many more.
(Since it's still in alpha, the models still require your own API key, so keep an eye on your model provider's dashboard to avoid a surprise bill! The context processing is on us: our servers process those papers and books for you so they're ready for AI consumption.)
Composer
Edit or create notes by just chatting with Copilot. Trigger it by explicitly including `@composer` in your message. The AI will suggest an edit; you click Preview/Apply, and a diff view shows up for you to accept the edits by line or in bulk.
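A hypothetical example of a Composer request (the note names are made up):

```
@composer create a note "Reading List 2025" with a checklist of every book mentioned in [[Book Club Notes]]
```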
Composer supports canvas, too!
Autocomplete
Suggests the next words based on the content in your vault (toggle Allow Additional Context in Plus mode to pull in more relevant context from your vault); supports most languages:
- Sentence completion: suggests possible next words
- Word completion: completes partial words based on existing words in your vault
You can toggle them on or off separately, e.g. have only word completion if you find sentence completion distracting.
New Plus tab in Copilot settings
Others
- Implement a chat history picker button, and render Save Chat as Note only when Autosave is off
- Toggle to always include current file in the context by default (Plus setting tab)
- Autocomplete settings, customizable key binding
- A new Refresh Built-in Models button below the Models table
- Claude 4 and 3.7 sonnet thinking tokens support
- Add "Force rebuild index" to the 3-dots menu at the top right of the chat input
- "Save Chat as Note" does not open the saved note automatically anymore, as requested by users
- New Chat is now a copilot command assignable with a hotkey
- Quick add for models in the API key settings page: it now grabs the list of all available models from the provider for you to pick from.
- Custom Prompts Sort Strategy in Advanced settings
Troubleshoot
If you find models missing in any model table or dropdown, go to Copilot settings -> Models tab, find "Refresh Built-in Models" and click it. If it doesn't help, please report back!
Acknowledgements
This is a joint effort by the Copilot team: @wenzhengjiang @zeroliu @Emt-lin @logancyang. It would be impossible without the support and awesome feedback from our great community. We have a lot more upgrades coming in our pipeline, with some massive changes to the free features as well. Please stay tuned!
2.8.9
GPT-4.1 models and o4-mini are supported, and xAI is added as a provider! Another big update is canvas support! You can add a canvas to your context either by a direct `[[]]` reference or via the `+` button in your chat context menu! Copilot can even understand the group structure!
Improvements
- #1461 Implement canvas adaptor @logancyang
- #1459 Support gpt 4.1 series, o4-mini and grok 3 @logancyang
- #1463 Switch insert and copy buttons and add more spacing @logancyang
- #1460 Add a toggle to turn custom prompt templating off @logancyang
- #1421 Ollama ApiKey support @sargreal
- #1441 refactor: Optimize some user experiences. @Emt-lin
- #1446 Improve custom command (v3) @zeroliu
- #1436 Pass project state to broca call @wenzhengjiang
- #1415 Add update notification @zeroliu
- #1414 Update broca requests @zeroliu