
Releases: langgenius/dify

v1.10.1-fix.1

05 Dec 04:59
1.10.1-fix.1
57dc7e0


Important

If you upgraded to version 1.10.1-fix.1 before 2025-12-09 03:00:00 UTC, verify your Docker Compose configuration to ensure the web service uses the image langgenius/dify-web:1.10.1-fix.1. This check is critical for addressing the GHSA-fv66-9v8q-g76r security vulnerability.
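One way to check, assuming the standard community `docker/docker-compose.yaml` layout, is to grep the compose file for the pinned web image:

```shell
# Run from the docker/ directory of your Dify checkout.
# The file path is the community default; adjust it if you maintain a custom compose file.
grep -n 'langgenius/dify-web' docker-compose.yaml
# The matching line should pin the fixed tag:
#   image: langgenius/dify-web:1.10.1-fix.1
```

If the tag is older, update the image reference and run `docker compose up -d` to pull the fixed version.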

  • Security/deps: the backend bumps pyarrow to 17.0.0, werkzeug to 3.1.4, and urllib3 to 2.5.0 in api/uv.lock; the frontend bumps React to 19.2.1 (addressing CVE-2025-55182) and Next.js to 15.5.7 in web/package.json and web/pnpm-lock.yaml.

Full Changelog: 1.10.1...1.10.1-fix.1

v1.10.1 – Multi-Database Era Begins: MySQL Joins the Family

26 Nov 10:40
1.10.1
b353a12


🎉 Major new capabilities, critical stability fixes
🧩 And the long-awaited MySQL support finally arrives!

🚀 New Features

Infrastructure & DevOps

MySQL adaptation (PostgreSQL / MySQL / OceanBase now fully supported)
Thanks @longbingljw from the OceanBase team!
PR: #28188

  • Adds DB_TYPE configuration option
  • Supports MySQL JSON / LONGTEXT / UUID / index differences
  • Updates Alembic migrations for multi-DB compatibility
  • Introduces cross-DB SQL helpers for statistics and date handling
  • Rewrites dataset metadata filters with SQLAlchemy JSON operators
  • Adds CI workflows for MySQL migration testing
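A sketch of what selecting the database backend might look like in the API `.env` file. Only the `DB_TYPE` option itself comes from PR #28188; the accepted values and the surrounding `DB_*` key names below are assumptions, so check the `.env.example` shipped with your release for the authoritative settings.

```shell
# .env (API service) -- illustrative fragment, not a verified schema
DB_TYPE=mysql          # e.g. postgresql (previous default) or mysql
DB_HOST=db
DB_PORT=3306
DB_USERNAME=dify
DB_PASSWORD=your-password
DB_DATABASE=dify
```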

This is a significant backend upgrade in Dify’s history — multi-database support is now first-class.

Performance & Workflow Editor Optimization

  • Implemented a major performance upgrade for the Workflow Editor, eliminating costly per-node validation scans and reducing unnecessary re-renders; the editor, which previously became laggy at ~50 nodes, now remains smooth even near ~200 nodes — #28591, by @iamjoel.

Pipelines & Workflow Engine

  • Introduced a broad set of workflow-editor improvements, including UI refinement, stability fixes, and quality-of-life enhancements across variable inspection, media components, and node interactions — #27981, by @Xiu-Lan, @crazywoola, @johnny0120, @Woo0ood.

🛠 Fixes & Improvements

Runtime Stability & Workflow Execution

  • Fixed an issue where advanced-chat workflows could fail to stop, preventing stuck or lingering processes — #27803, by @Kevin9703.
  • Fixed a 500 error triggered when running “any node” in draft mode, improving workflow debugging reliability — #28636, by @hjlarry.
  • Corrected token overcounting during loop/iteration evaluation (not related to billing tokens) — #28406, by @anobaka.
  • Fixed workflow-as-tool returning an empty files field, ensuring tool integrations receive correct file metadata — #27925, by @CrabSAMA.
  • Resolved a session-scope error in FileService that could cause inconsistent file deletion behavior — #27911, by @ethanlee928.

Knowledge Base

  • Fixed a 500 error when using the weightedScore retrieval option, restoring stability for weighted ranking scenarios — #28586, by @Eric-Guo.

Developer Experience & SDKs

  • Fixed Node.js SDK route and multipart upload handling, ensuring robust file and data submission through JavaScript integrations — #28573, by @lyzno1.
  • Fixed OpenAPI/Swagger failing to load, restoring developer documentation access — #28509, by @changkeke, with contributions from @asukaminato0721.

Web UI & UX

  • Corrected dark-mode rendering for the ExternalDataToolModal, ensuring consistent appearance across themes — #28630, by @Nov1c444.
  • Fixed Marketplace search-trigger behavior and scroll position, improving discovery and navigation — #28645, by @lyzno1.
  • Fixed incorrect navigation when opening chatflow log details, providing more predictable UI behavior — #28626, by @hjlarry.
  • Fixed layout and rendering issues in the README display panel, ensuring cleaner content presentation — #28658, by @yangzheli.
  • Reduced unnecessary re-renders in the useNodes hook, improving overall front-end performance — #28682, by @iamjoel.

Plugins & Integrations

  • Updated plugin verification logic to use a unique identifier, improving correctness across plugin installations and updates — #28608, by @Mairuis.

System Robustness

  • Prevented nullable tags in TriggerProviderIdentity, avoiding potential runtime errors — #28646, by @Yeuoly.
  • Improved error messaging for invalid webhook requests, providing clearer diagnostics — #28671, by @hjlarry.

Feedback & Logging

  • Fixed like/dislike feedback not appearing in logs, ensuring end-user rating signals are correctly visualized — #28652, by @fatelei.

Internationalization (i18n)

  • Standardized terminology for trigger and billing events, improving translation consistency — #28543, by @NeatGuyCoding.
  • Fixed multiple issues in execution-related translations, correcting missing or malformed entries — #28610, by @NeatGuyCoding.
  • Removed incorrect “running” translation entries — #28571, by @NeatGuyCoding.
  • Refactored i18n scripts and removed obsolete translation keys — #28618, by @lyzno1.
  • Added missing translations across the UI, improving language coverage — #28631, by @lyzno1.

Maintenance & Developer Tooling

  • Added front-end automated testing rules to strengthen baseline reliability — #28679, by @CodingOnStar and contributors.
  • Upgraded system libraries and Python dependencies to maintain security and compatibility — #28624, by @laipz8200 and @GareArc.
  • Updated start-web development script to use pnpm dev, simplifying contributor workflows — #28684, by @laipz8200.

Upgrade Guide

Docker Compose Deployments

Important

Required Action Before Upgrading

Starting from 1.10.1, the Dify API image now runs as a non-root user (UID 1001) for improved security.
If you are using local filesystem storage (the default in community deployments), you must update the ownership of your mounted storage directories on the host machine, or the containers will fail to read/write files.

Affected services:

  • api
  • worker

Affected host directory:

  • ./volumes/app/storage → mounted to /app/api/storage

What you must do before restarting the new version:

# Stop existing containers
docker compose down

# Update directory ownership on the host
sudo chown -R 1001:1001 ./volumes/app/storage

# Restart normally
docker compose up -d

After this one-time migration, Dify will operate normally with the new non-root user model.
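To confirm the migration took effect, you can check the directory's numeric owner (this uses GNU `stat` as found on typical Linux Docker hosts; the flags differ on macOS):

```shell
# Print the owner UID:GID of the mounted storage directory
stat -c '%u:%g' ./volumes/app/storage
# After the chown above, this should print 1001:1001
```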

  1. Back up your customized docker-compose YAML file (optional)

    cd docker
    cp docker-compose.yaml docker-compose.yaml.$(date +%s).bak
  2. Get the latest code from the main branch

    git checkout main
    git pull origin main
  3. Stop the service. Please execute in the docker directory

    docker compose down
  4. Back up data

    tar -cvf volumes-$(date +%s).tgz volumes
  5. Upgrade services

    docker compose up -d

If you encounter errors like the ones below:

2025/11/26 11:37:57 /app/internal/db/pg/pg.go:30
[error] failed to initialize database, got error failed to connect to `host=db_postgres user=postgres database=dify_plugin`: hostname resolving error (lookup db_postgres on 127.0.0.11:53: server misbehaving)

2025/11/26 11:37:57 /app/internal/db/pg/pg.go:34
[error] failed to initialize database, got error failed to connect to `host=db_postgres user=postgres database=postgres`: hostname resolving error (lookup db_postgres on 127.0.0.11:53: server misbehaving)
2025/11/26 11:37:57 init.go:99: [PANIC]failed to init dify plugin db: failed to connect to `host=db_postgres user=postgres database=postgres`: hostname resolving error (lookup db_postgres on 127.0.0.11:53: server misbehaving)
panic: [PANIC]failed to init dify plugin db: failed to connect to `host=db_postgres user=postgres database=postgres`: hostname resolving error (lookup db_postgres on 127.0.0.11:53: server misbehaving)

Please use the following command instead. For details, see #28706.

docker compose --profile postgresql up -d

Source Code Deployments

  1. Stop the API server, Worker, and Web frontend Server.

  2. Get the latest code from the release branch:

    git checkout 1.10.1
  3. Update Python dependencies:

    cd api
    uv sync
  4. Then, let's run the migration script:

    uv run flask db upgrade
  5. Finally, run the API server, Worker, and Web frontend Server again.


What's Changed

Read more

v1.10.0 - Event-Driven Workflows

13 Nov 14:44
a47276a


Introduce Trigger Functionality

A trigger is a type of Start node that allows your workflow to run automatically—either on a schedule or in response to events from external systems (such as GitHub, Gmail, or your internal services)—without requiring a user action or API call.
Triggers are ideal for automating repetitive processes and integrating workflows with third-party applications to enable seamless data synchronization and processing.

⚡️ Trigger = When something happens → then do something

Triggers form the foundation of event-driven Workflow capabilities and currently support the following types:

  • Schedule — time-based triggers
  • SaaS Integration Event — events from external SaaS platforms (e.g., Slack, GitHub, Linear) integrated through Plugins
  • Webhook — HTTP callbacks from external systems

These trigger features are only available for Workflows. Chatflow, Agent, and BasicChat currently do not support triggers.
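As a sketch of how an external system might invoke a webhook trigger: the payload field names below are hypothetical (with webhook triggers you define the payload structure yourself), and Dify generates the actual endpoint URL per workflow in the trigger node's configuration panel.

```shell
# Build a JSON payload; the field names are illustrative, not a fixed Dify schema
payload=$(printf '{"event":"%s","title":"%s"}' "issue.created" "Demo ticket")
echo "$payload"

# Then POST it to the endpoint shown in the webhook trigger node's configuration:
#   curl -X POST "$WEBHOOK_URL" -H 'Content-Type: application/json' -d "$payload"
```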

🧩 Marketplace

We provide several popular trigger plugins, which you can explore in our Marketplace.


😎 Enjoy the Experience

Sit back, relax, and let your workflows run themselves.


A big thanks to our contributors!

Thanks so much to the contributors in #23981 who helped us develop this feature! It's a big deal, and it was made possible by you: @ACAne0320 @hjlarry @lyzno1 @CathyL0 @zhangxuhe1

Upgrade Guide

Docker Compose Deployments

  1. Back up your customized docker-compose YAML file (optional)

    cd docker
    cp docker-compose.yaml docker-compose.yaml.$(date +%s).bak
  2. Get the latest code from the main branch

    git checkout main
    git pull origin main
  3. Stop the service. Please execute in the docker directory

    docker compose down
  4. Back up data

    tar -cvf volumes-$(date +%s).tgz volumes
  5. Upgrade services

    docker compose up -d

Source Code Deployments

  1. Stop the API server, Worker, and Web frontend Server.

  2. Get the latest code from the release branch:

    git checkout 1.10.0
  3. Update Python dependencies:

    cd api
    uv sync
  4. Then, let's run the migration script:

    uv run flask db upgrade
  5. Finally, run the API server, Worker, and Web frontend Server again.


What's Changed

  • feat: add flatten_output configuration to iteration node by @Nov1c444 in #27502
  • feat: implement MCP specification 2025-06-18 by @Nov1c444 in #25766
  • feat: introduce RAG tool recommendations and refactor related components for improved plugin management by @WTW0313 in #27259
  • feat: enhance tencent trace integration with LLM core metrics by @minimAluminiumalism in #27126
  • feat: use id for webapp by @GareArc in #27576
  • feat: enhance pipeline template list with marketplace feature toggle by @WTW0313 in #27604
  • feat(api): Introduce workflow pause state management by @QuantumGhost in #27298
  • feat: localization for hi-IN by @SmartDever02 in #27783
  • feat: Add Audio Content Support for MCP Tools by @IthacaDream in #27979
  • feat(api): Introduce Broadcast Channel by @QuantumGhost in #27835
  • feat(api): Introduce WorkflowResumptionContext for pause state management by @QuantumGhost in #28122
  • feat: introduce trigger functionality by @Yeuoly in #27644
  • feat: add segments max number limit for SegmentApi.post by @ruanimal in #27745
  • feat: enhance annotation API to support optional message_id and content fields by @liugddx in #27460
  • feat: add validation to prevent saving empty opening statement in conversation opener modal by @Nov1c444 in #27843
  • feat: change feedback to forum by @crazywoola in #27862
  • feat: add draft trigger detection to app model and UI by @zhsama in #28163
  • feat: enhance start node metadata to be undeletable in chat mode by @zhsama in #28173
  • fix issues 27388, add missing env variable: ENFORCE_LANGGENIUS_PLUGIN… by @MaoJianwei in #27545
  • fix: resolve 500 error when updating document chunk settings (#27551) by @quicksandznzn in #27574
  • fix: iteration node cannot be viewed(#27759) by @redSun64 in #27786
  • fix agent putout the output of workflow-tool twice (#26835) by @Cursx in #27706
  • fix:knowledge base reference information is overwritten when using mu… by @zhengchangchun in #27799
  • fix: installation_id is missing when in tools page by @crazywoola in #27849
  • fix: avoid passing empty uniqueIdentifier to InstallFromMarketplace by @johnny0120 in #27802
  • fix: python package vulnerability by @kenwoodjw in #27645
  • fix(api): return timestamp as integer in document api by @invzhi in #27761
  • fix: File model add known extra fields, fix issue about the tool of… by @CrabSAMA in #27607
  • FIX Issue #27697: Add env variable in docker-compose(template) and make it take effect. by @Dave0126 in #27704
  • fix: datasets weight settings embedding model does not change by @lcedaw in #27694
  • fix: bump pyobvector to 0.2.17 by @kenwoodjw in #27791
  • fix: elasticsearch_vector version by @huangzhuo1949 in #28028
  • fix workflow default updated_at by @IthacaDream in #28047
  • fix(api): Trace Hierarchy, Span Status, and Broken Workflow for Arize & Phoenix Integration by @ialisaleh in #27937
  • fix Version 2.0.0-beta.2: Chat annotations Api Error #25506 by @Cursx in #27206
  • fix: prevent fetch version info in enterprise edition by @douxc in #27923
  • fix(api): fix VariablePool.get adding unexpected keys to variable_dictionary by @QuantumGhost in #26767
  • fix: bump brotli to 1.2.0 resloved CVE-2025-6176 by @kenwoodjw in #27950
  • refactor: update installed app component to handle missing params and improve type safety by @ZeroZ-lab in #27331
  • refactor:Decouple Domain Models from Direct Database Access by @hieheihei in #27316
  • refactor: update install status handling in plugin installation process by @WTW0313 in #27594
  • refactor(api): add SQLAlchemy 2.x Mapped type hints to Message model by @laipz8200 in #27709
  • refactor: replace hardcoded user plan strings with CloudPlan enum by @laipz8200 in #27675
  • refactor: Use Repository Pattern for Model Layer by @hieheihei in #27663
  • refactor(api): set default value for EasyUIBasedAppGenerateEntity.query by @laipz8200 in #27712
  • refactor(web): reuse the same edit-custom-collection-modal component, and fix the pop up error by @yangzheli in #28003
  • refactor(web): remove redundant add-tool-modal components and related code by @yangzheli in #27996
  • chore: translate i18n files and update type definitions by @github-actions[bot] in #27423
  • chore(deps): bump testcontainers from 4.10.0 to 4.13.2 in /api by @dependabot[bot] in #27469
  • chore(deps-dev): bump @happy-dom/jest-environment from 20.0.7 to 20.0.8 in /web by @dependabot[bot] in #27465
  • chore: add more stories by @hjlarry in #27403
  • chore: improve mcp server url validation by @Nov1c444 in #27558
  • Sync celery queue name list by @Eric-Guo in #27554
  • chore: add web type check step to GitHub Actions workflow by @ZeroZ-lab in #27498
  • chore: warning messages too long in model config caused ui issue by @iamjoel in #27542
  • chore: add type-check to pre-commit by @lyzno1 in #28005
  • chore(deps): bump tablestore from 6.2.0 to 6.3.7 in /api by @dependabot[bot] in #27736
  • chore(deps): bump dayjs from 1.11.18 to 1.11.19 in /web by @dependabot[bot] in https...
Read more

v1.10.0-rc1 - Event-Driven Workflows

30 Oct 15:25


Pre-release

Introduce Trigger Functionality

Trigger = When something happens → then do something

This is the foundation for event-driven Workflow capabilities, covering the following types:

  • Schedule (time-based triggers)
  • SaaS Integration Event (events from external SaaS platforms like Slack/GitHub/Linear, integrated via Plugins)
  • Webhook (external HTTP callbacks)
  • Additional types to be discussed

These features are designed only for Workflows; Chatflow, Agent, and BasicChat are not supported.


Design Premise

After careful consideration, we concluded that the start node design cannot fully embody the philosophy behind Triggers.
Therefore, we have redesigned the start node as a component bound to WebApp or Service API.
This means:

  • The workflow input parameters are equivalent to the form defined by the start node.
  • Trigger types can define their own input formats instead of following Dify’s start node format:
    • Webhook: users can freely define the HTTP payload structure they need.
    • Plugins: predefine workflow input parameters for specific third-party platforms.
    • Schedule: only needs a single $current_time parameter.

As a result, we introduced three kinds of start nodes, all of which are trigger nodes. The product design is complete, and the UI/UX design is also finished.
The implementation of Trigger is divided into:

  • WebHook — configuration of webhook-related information in Canvas
  • Schedule — time-based triggers
  • Plugins — plugin system (most third-party platform integrations will depend on this)
[Images: WebHook, Schedule, and Plugins trigger configuration]

Why

  1. Enable more scenarios
    Currently, if you want to build something like a Discord ticket bot → Linear in an enterprise setup, you need custom glue code, pre-published extensions, and manual token retrieval from the Discord developer console instead of simple one-click OAuth binding.
    Similar pain points exist for GitHub PR plugin review, Hello Dify email replies, etc., all requiring manual download/upload/trigger actions.
  2. Reduce fragmented experiences
    While it’s possible to achieve similar outcomes using external automation platforms with Dify API calls, the cross-platform experience is fragmented, and many external automation platforms are adding their own LLM orchestration capabilities.
  3. Centralize configuration and management
    Developers often have to host multiple services to poll for events. Endpoints can help, but they are not purpose-built for trigger scenarios, making the configuration flow unintuitive.
  4. Real user demand
    Multiple community members have requested this feature.

Upgrade Guide

Docker Compose Deployments

  1. Back up your customized docker-compose YAML file (optional)

    cd docker
    cp docker-compose.yaml docker-compose.yaml.$(date +%s).bak
  2. Get the latest code from the main branch

    git checkout main
    git pull origin main
  3. Stop the service. Please execute in the docker directory

    docker compose down
  4. Back up data

    tar -cvf volumes-$(date +%s).tgz volumes
  5. Upgrade services

    docker compose up -d

Source Code Deployments

  1. Stop the API server, Worker, and Web frontend Server.

  2. Get the latest code from the release branch:

    git checkout 1.10.0-rc1
  3. Update Python dependencies:

    cd api
    uv sync
  4. Then, let's run the migration script:

    uv run flask db upgrade
  5. Finally, run the API server, Worker, and Web frontend Server again.


What's Changed

  • Replace export button with more actions button in workflow control panel by @lyzno1 in #24033
  • feat: add scroll to selected node button in workflow header by @lyzno1 in #24030
  • feat: comprehensive trigger node system with Schedule Trigger implementation by @lyzno1 in #24039
  • feat: update workflow run button to Test Run with keyboard shortcut by @lyzno1 in #24071
  • feat: Test Run dropdown with dynamic trigger selection by @lyzno1 in #24113
  • fix: simplify trigger-schedule hourly mode calculation and improve UI consistency by @lyzno1 in #24082
  • Remove workflow features button by @lyzno1 in #24085
  • feat: implement Schedule Trigger validation with multi-start node topology support by @lyzno1 in #24134
  • fix: resolve merge conflict between Features removal and validation enhancement by @lyzno1 in #24150
  • Refactor Start node UI to User Input and optimize EntryNodeContainer by @lyzno1 in #24156
  • fix: remove duplicate weekdays keys in i18n workflow files by @lyzno1 in #24157
  • UI improvements: fix translation and custom icons for schedule trigger by @lyzno1 in #24167
  • fix: initialize recur fields when switching to hourly frequency by @lyzno1 in #24181
  • feat: implement multi-select monthly trigger schedule by @lyzno1 in #24247
  • feat(workflow): Plugin Trigger Node with Unified Entry Node System by @lyzno1 in #24205
  • feat: replace mock data with dynamic workflow options in test run dropdown by @lyzno1 in #24320
  • refactor: comprehensive schedule trigger component redesign by @lyzno1 in #24359
  • feat/trigger universal entry by @Yeuoly in #24358
  • feat/trigger: support specifying root node by @Yeuoly in #24388
  • feat: webhook trigger frontend by @CathyL0 in #24311
  • fix(trigger-webhook): remove redundant WebhookParam type and simplify parameter handling by @CathyL0 in #24390
  • feat(trigger-schedule): simplify timezone handling with user-centric approach by @lyzno1 in #24401
  • refactor: Use specific error types for workflow execution by @Yeuoly in #24475
  • refactor: rename RunAllTriggers icon to TriggerAll for semantic clarity by @lyzno1 in #24478
  • fix: when workflow only has trigger node can't save by @hjlarry in #24546
  • fix: when workflow not has start node can't open service api by @hjlarry in #24564
  • feat: implement workflow onboarding modal system by @lyzno1 in #24551
  • feat: webhook trigger backend api by @hjlarry in #24387
  • feat: fix i18n missing keys and merge upstream/main by @lyzno1 in #24615
  • refactor(sidebar): Restructure app operations with toggle func...
Read more

v1.9.2 - Sharper, Faster, and More Reliable

22 Oct 08:48
1.9.2


This release focuses on improving stability, async performance, and developer experience. Expect cleaner internals, better workflow control, and improved observability across the stack.


Warning

A recent change has modernized the Dify integration for Weaviate (see PR #25447 and related update in PR #26964). The upgrade switches the Weaviate Python client from v3 to v4 and raises the minimum required Weaviate server version to 1.24.0 or newer. With this update:

  • If you are running an older Weaviate server (e.g., v1.19.0), you must upgrade your server to at least v1.24.0 before updating Dify.
  • The code now uses the new client API and supports gRPC for faster operations, which may require opening port 50051 in your Docker Compose files.
  • Data migration between server versions may require re-indexing using Weaviate’s Cursor API or standard backup/restore procedures.
  • The Dify documentation will be updated to provide migration steps and compatibility guidance.

Action required:

  • Upgrade your Weaviate server to v1.24.0 or higher.
  • Follow the migration guide to update your data and Docker configuration as described in the latest official Dify documentation.
  • Ensure your environment meets the new version requirements before deploying Dify updates.
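A minimal sketch for checking the server version before deploying the Dify update. Weaviate reports its version at the `/v1/meta` REST endpoint; the host and port below are assumptions for a default local deployment.

```shell
# Compare a dotted version against a minimum using sort -V
weaviate_ok() {
  # succeeds when $1 (server version) >= $2 (required minimum)
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Fetch the live version (adjust host/port for your deployment):
#   ver=$(curl -s http://localhost:8080/v1/meta | sed -n 's/.*"version":"\([^"]*\)".*/\1/p')
ver="1.19.0"   # example value; substitute the curl result above
if weaviate_ok "$ver" "1.24.0"; then
  echo "Weaviate $ver meets the 1.24.0 minimum"
else
  echo "Upgrade required: found $ver, need >= 1.24.0"
fi
```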

✨ Highlights

Workflow & Agents

Integrations & SDK

Web & UI

  • Faster load times by splitting and lazy‑loading constant files (by @yangzheli in #26794)
  • Improved DataSources with marketplace plugin integration and filtering (by @WTW0313 in #26810)
  • Added tax tooltips to pricing footer (by @CodingOnStar in #26705)
  • Account creation now syncs interface language with display settings (by @feelshana in #27042)

⚙️ Core Improvements


🧩 Fixes


🧹 Cleanup & DevX


Upgrade Guide

Docker Compose Deployments

  1. Back up your customized docker-compose YAML file (optional)

    cd docker
    cp docker-compose.yaml docker-compose.yaml.$(date +%s).bak
  2. Get the latest code from the main branch

    git checkout main
    git pull origin main
  3. Stop the service. Please execute in the docker directory

    docker compose down
  4. Back up data

    tar -cvf volumes-$(date +%s).tgz volumes
  5. Upgrade services

    docker compose up -d

Source Code Deployments

  1. Stop the API server, Worker, and Web frontend Server.

  2. Get the latest code from the release branch:

    git checkout 1.9.2
  3. Update Python dependencies:

    cd api
    uv sync
  4. Then, let's run the migration script:

    uv run flask db upgrade
  5. Finally, run the API server, Worker, and Web frontend Server again.


What's Changed

  • [Chore/Refactor] Implement lazy initialization for useState calls to prevent re-computation by @Copilot in #26252
  • Refactor: Use @ns.route for tags API by @asukaminato0721 in #26357
  • chore: translate i18n files and update type definitions by @github-actions[bot] in #26440
  • Fix: Enable Pyright and Fix Typing Errors in Datasets Controller by @asukaminato0721 in #26425
  • minor fix: fix some translations: trunk should use native, and some translation typos by @NeatGuyCoding in #26469
  • Fix typing errors in core/model_runtime by @asukaminato0721 in #26462
  • fix single-step runs support user input as structured_output variable values by @goofy-z in #26430
  • Refactor: Enable type checking for core/ops and fix type errors by @asukaminato0721 in #26414
  • improve: Explicitly delete task Redis key on completion in AppQueueManager by @Blackoutta in #26406
  • chore: bump pnpm version by @lyzno1 in #26010
  • Fix a typo in prompt by @casio12r in #25583
  • fix: duplicate chunks by @kenwoo...
Read more

v1.9.1 – 1,000 Contributors, Infinite Gratitude

29 Sep 11:35
cd47a47


Congratulations on reaching our 1,000th contributor!


🚀 New Features

  • Infrastructure & DevOps:

    • Next.js upgraded to 15.5, now leveraging Turbopack in development for a faster, more modern build pipeline by @17hz in #24346.
    • Provided X-Dify-Version headers in marketplace API access for better traceability by @RockChinQ in #26210.
    • Security reporting improvements, with new sec report workflow added by @crazywoola in #26313.
  • Pipelines & Engines:

    • Built-in pipeline templates now support language configuration, unlocking multilingual deployments by @WTW0313 in #26124.
    • Graph engine now blocks response nodes during streaming to avoid unintended outputs by @laipz8200 in #26364 / #26377.
  • Community & Documentation:

🛠 Fixes & Improvements

  • Debugging & Logging:

    • Fixed NodeRunRetryEvent debug logging not working properly in Graph Engine by @quicksandznzn in #26085.
    • Fixed LLM node losing Flask context during parallel iterations, ensuring stable concurrent runs by @quicksandznzn in #26098.
    • Fixed agent-strategy prompt generator error by @quicksandznzn in #26278.
  • Search & Parsing:

  • Pipeline & Workflow:

    • Fixed workflow variable splitting logic (requires ≥2 parts) by @zhanluxianshen in #26355.
    • Fixed tool node attribute tool_node_version judgment error causing compatibility issues by @goofy-z in #26274.
    • Fixed iteration conversation variables not syncing correctly by @laipz8200 in #26368.
    • Fixed Knowledge Base node crash when retrieval_model is null by @quicksandznzn in #26397.
    • Fixed workflow node mutation issues, preventing props from being incorrectly altered by @hyongtao-code in #26266.
    • Removed restrictions on adding workflow nodes by @zxhlyh in #26218.
  • File Handling:

    • Fixed remote filename handling so Content-Disposition: inline becomes inline instead of incorrect parsing by @sorphwer in #25877.
    • Synced FileUploader context with props to fix inconsistent file parameters in cached variable view by @Woo0ood in #26199.
    • Fixed variable not found error (#26144) by @sqewad in #26155.
    • Fixed db connection error in embed_documents() by @AkisAya in #26196.
    • Fixed model list refresh when credentials change by @zxhlyh in #26421.
    • Fixed retrieval configuration handling and missing vector_setting in dataset components by @WTW0313 in #26361 / #26380.
    • Fixed ChatClient audio_to_text files keyword bug by @EchterTimo in #26317.
    • Added missing import IO in client.py by @EchterTimo in #26389.
    • Removed FILES_URL in default .yaml settings by @JoJohanse in #26410.
  • Performance & Networking:

    • Improved pooling of httpx clients for requests to code sandbox and SSRF protection by @Blackoutta in #26052.
    • Distributed plugin auto-upgrade tasks with concurrency control by @RockChinQ in #26282.
    • Switched plugin auto-upgrade cache to Redis for reliability by @RockChinQ in #26356.
    • Fixed plugin detail panel not showing when >100 plugins are installed by @JzoNgKVO in #26405.
    • Debounce reference fix for performance stability by @crazywoola in #26433.
  • UI/UX & Display:

    • Fixed lingering display-related issues (translations, UI consistency) by @hjlarry in #26335.
    • Fixed broken CSS animations under Turbopack by naming unnamed animations in CSS modules by @lyzno1 in #26408.
    • Fixed verification code input using wrong maxLength prop by @hyongtao-code in #26244.
    • Fixed array-only filtering in List Operator picker, removed file-children fallback, aligned child types by @Woo0ood in #26240.
    • Fixed translation inconsistencies in ja-JP: “ナレッジベース” vs. “ナレッジの名前とアイコン” by @mshr-h in #26243 and @NeatGuyCoding in #26270.
    • Improved “time from now” i18n support by @hjlarry in #26328.
    • Standardized dataset-pipeline i18n terminology by @lyzno1 in #26353.
  • Code & Components:

    • Refactored component exports for consistency by @ZeroZ-lab in #26033.
    • Refactored router to apply ns.route style by @laipz8200 in #26339.
    • Refactored lint scripts to remove duplication and simplify naming by @lyzno1 in #26259.
    • Applied @console_ns.route decorators to RAG pipeline controllers (internal refactor) by @Copilot in #26348.
    • Added missing type="button" attributes in components by @Copilot in #26249.

Upgrade Guide

Docker Compose Deployments

  1. Back up your customized docker-compose YAML file (optional)

    cd docker
    cp docker-compose.yaml docker-compose.yaml.$(date +%s).bak
  2. Get the latest code from the main branch

    git checkout main
    git pull origin main
  3. Stop the service. Please execute in the docker directory

    docker compose down
  4. Back up data

    tar -cvf volumes-$(date +%s).tgz volumes
  5. Upgrade services

    docker compose up -d

Source Code Deployments

  1. Stop the API server, Worker, and Web frontend Server.

  2. Get the latest code from the release branch:

    git checkout 1.9.1
  3. Update Python dependencies:

    cd api
    uv sync
  4. Then, let's run the migration script:

    uv run flask db upgrade
  5. Finally, run the API server, Worker, and Web frontend Server again.


What's Changed

  • fix(api): graph engine debug logging NodeRunRetryEvent not effective by @quicksandznzn in #26085
  • fix full_text_search name by @JohnJyong in #26104
  • bump nextjs to 15.5 and turbopack for development mode by @17hz in #24346
  • chore: refactor component exports for consistency by @ZeroZ-lab in #26033
  • fix:add some explanation for oceanbase parser selection by @longbingljw in #26071
  • feat(pipeline): add language support to built-in pipeline templates and update related components by @WTW0313 in #26124
  • ci: Add hotfix/** branches to build-push workflow triggers by @QuantumGhost in #26129
  • fix(api): Fix variable truncation for list[File] value in output mapping by @QuantumGhost in #26133
  • one example of Session by @asukaminato0721 in #24135
  • fix(api):LLM node losing Flask context during parallel iterations by @quicksandznzn in #26098
  • fix(search-input): ensure proper value extraction in composition end handler by @yangzheli in #26147
  • delete end_user check by @JohnJyong in #26187
  • improve: pooling httpx clients for requests to code sandbox and ssrf by @Blackoutta in #26052
  • fix: remote filename will be 'inline' if Content-Disposition: inline by @sorphwer in #25877
  • perf: provide X-Dify-Version for marketplace api access by @RockChinQ in #26210
  • Chore/remove add node restrict of workflow by @zxhlyh in #26218
  • Fix array-only filtering in List Operator picker; remove file children fallback and align child types. by @Woo0ood in #26240
  • fix: sync FileUploader context with props to fix inconsistent file parameter state in “View cached variables”. by @Woo0ood in #26199
  • fix: add echarts and zrender to transpilePackages for ESM compatibility by @lyzno1 in #26208
  • chore: fix inaccurate translation in ja-JP by @mshr-h in #26243
  • aliyun_trace: unify the span attribute & compatible CMS 2.0 endpoint by @hieheihei in #26194
  • fix(api): resolve error in agent‑strategy prompt generator by @quicksandznzn in #26278
  • minor: fix translation with the key value uses 「ナレッジの名前とアイコン」 while the rest of the file uses 「ナレッジベース」 by @NeatGuyCoding in #26270
  • refactor(web): simplify lint scripts, remove duplicates and standardize naming by @lyzno1 in #26259
  • fmt first by @asukaminato0721 in #26221
  • fix: resolve UUID parsing error for default user session lookup by @Cluas in #26109
  • Fix: avoid mutating node props by @hyongtao-code in #26266
  • update gen_ai semconv for aliyun trace by @hieheihei in #26288
  • chore: streamline AGENTS.md guidance by @laipz8200 in #26308
  • rm assigned but unused by @asukaminato0721 in #25639
  • Chore/add sec report by @crazywoola in #26313
  • Fix ChatClient.audio_to_text files keyword to make it work by @EchterTimo in #26317
  • perf: distribute concurrent pl...
Read more

1.9.0 – Orchestrating Knowledge, Powering Workflows

22 Sep 12:23
2e2c87c


[Screenshot: knowledge_pipeline]

🚀 Introduction

In Dify 1.9.0, we are introducing two major new capabilities: the Knowledge Pipeline and the Queue-based Graph Engine.

The Knowledge Pipeline provides a modularized and extensible workflow for knowledge ingestion and processing, while the Queue-based Graph Engine makes workflow execution more robust and controllable. We believe these will help you build and debug AI applications more smoothly, and we look forward to your feedback as we continue to improve.


📚 Knowledge Pipeline

✨ Introduction

The brand-new orchestration interface for knowledge pipelines introduces a fundamental architectural upgrade that reshapes how document processing is designed and executed. It provides a more modular and flexible workflow that lets users orchestrate every stage of the pipeline, and, enhanced by a wide range of powerful plugins available in the marketplace, it empowers users to flexibly integrate diverse data sources and processing tools. Ultimately, this architecture enables building highly customized, domain-specific RAG solutions that meet enterprises’ growing demands for scalability, adaptability, and precision.

❓ Why Do We Need It?

Previously, Dify's RAG users encountered persistent challenges in real-world adoption, from inaccurate knowledge retrieval and information loss to limited data integration and extensibility. Common pain points included:

  • 🔗 restricted integration of data sources
  • 🖼️ missing critical elements such as tables and images
  • ✂️ suboptimal chunking results

All of these lead to poor answer quality and hinder overall model performance.

In response, we reimagined RAG in Dify as an open and modular architecture, enabling developers, integrators, and domain experts to build document processing pipelines tailored to their specific requirements—from data ingestion to chunk storage and retrieval.

🛠️ Core Capabilities

🧩 Knowledge Pipeline Architecture

The Knowledge Pipeline is a visual, node-based orchestration system dedicated to document ingestion. It provides a customizable way to automate complex document processing, enabling fine-grained transformations and bridging raw content with structured, retrievable knowledge. Developers can build workflows step by step, like assembling puzzle pieces, making document handling easier to observe and adjust.

📑 Templates & Pipeline DSL

[Screenshot: template]

  • ⚡ Start quickly with official templates
  • 🔄 Customize and share pipelines by importing/exporting via DSL for easier reusability and collaboration

🔌 Customizable Data Sources & Tools

[Screenshots: tools, tools-2]

Each knowledge base can support multiple data sources. You can seamlessly integrate local files, online documents, cloud drives, and web crawlers through a plugin-based ingestion framework. Developers can extend the ecosystem with new data-source plugins, while marketplace processors handle specialized use cases like formulas, spreadsheets, and image parsing — ensuring accurate ingestion and structured representation.

🧾 New Chunking Strategies

In addition to General and Parent-Child modes, the new Q&A Processor plugin supports Q&A structures. This expands coverage for more use cases, balancing retrieval precision with contextual completeness.

🖼️ Image Extraction & Retrieval

[Screenshot: image_in_pdf]

Extract images from documents in multiple formats, store them as URLs in the knowledge base, and enable mixed text-image outputs to improve LLM-generated answers.

🧪 Test Run & Debugging Support

Before publishing a pipeline, you can:

  • ▶️ Execute a single step or node independently
  • 🔍 Inspect intermediate variables in detail
  • 👀 Preview string variables as Markdown in the variable inspector

This provides safe iteration and debugging at every stage.

🔄 One-Click Migration from Legacy Knowledge Bases

Seamlessly convert existing knowledge bases into the Knowledge Pipeline architecture with a single action, ensuring smooth transition and backward compatibility.

🌟 Why It Matters

The Knowledge Pipeline makes knowledge management more transparent, debuggable, and extensible. It is not the endpoint, but a foundation for future enhancements such as multimodal retrieval, human-in-the-loop collaboration, and enterprise-level data governance. We’re excited to see how you apply it and share your feedback.


⚙️ Queue-based Graph Engine

❓ Why Do We Need It?

Previously, designing workflows with parallel branches often led to:

  • 🌀 Difficulty managing branch states and reproducing errors
  • ❌ Insufficient debugging information
  • 🧱 Rigid execution logic lacking flexibility

These issues reduced the usability of complex workflows. To solve this, we redesigned the execution engine around queue scheduling, improving management of parallel tasks.

🛠️ Core Capabilities

📋 Queue Scheduling Model

All tasks enter a unified queue, where the scheduler manages dependencies and order. This reduces errors in parallel execution and makes topology more intuitive.

🎯 Flexible Execution Start Points

Execution can begin at any node, supporting partial runs, resumptions, and subgraph invocations.

🌊 Stream Processing Component

A new ResponseCoordinator handles streaming outputs from multiple nodes, such as token-by-token LLM generation or staged results from long-running tasks.
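As a purely illustrative sketch (not Dify's actual implementation; the class and method names below are invented), a coordinator of this kind can buffer chunks from parallel producers and release them in the declared node order:

```python
from collections import defaultdict, deque

class ResponseCoordinatorSketch:
    """Illustrative only: buffers streamed chunks from parallel nodes and
    releases them in the declared node order, so downstream consumers see
    a coherent stream even when producers interleave."""

    def __init__(self, node_order):
        self.node_order = list(node_order)  # nodes that reach the response, in order
        self.buffers = defaultdict(deque)   # node_id -> pending chunks
        self.active = 0                     # index of the node currently allowed to emit
        self.closed = set()                 # nodes that have finished producing

    def push(self, node_id, chunk):
        """Accept a chunk from any node, then return whatever is emittable."""
        self.buffers[node_id].append(chunk)
        return self._flush()

    def finish(self, node_id):
        """Mark a node as done so the coordinator can advance past it."""
        self.closed.add(node_id)
        return self._flush()

    def _flush(self):
        out = []
        while self.active < len(self.node_order):
            node = self.node_order[self.active]
            while self.buffers[node]:
                out.append(self.buffers[node].popleft())
            if node in self.closed:
                self.active += 1  # this node is drained and done; move on
            else:
                break             # wait for more chunks from the active node
        return out
```

Chunks pushed by a later node are held back until every earlier node has finished streaming, which is what keeps token-by-token output coherent when branches run in parallel.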

🕹️ Command Mechanism

With the CommandProcessor, workflows can be paused, resumed, or terminated during execution, enabling external control.

🧩 GraphEngineLayer

A new plugin layer that extends engine functionality without modifying core code. Layers can observe execution state, send commands, and implement custom monitoring.


Quickstart

  1. Prerequisites
    • Dify version: 1.9.0 or higher
  2. How to Enable
    • Enabled by default, no additional configuration required.
    • Debug mode: set DEBUG=true to enable DebugLoggingLayer.
    • Execution limits:
      • WORKFLOW_MAX_EXECUTION_STEPS=500
      • WORKFLOW_MAX_EXECUTION_TIME=1200
      • WORKFLOW_CALL_MAX_DEPTH=10
    • Worker configuration (optional):
      • WORKFLOW_MIN_WORKERS=1
      • WORKFLOW_MAX_WORKERS=10
      • WORKFLOW_SCALE_UP_THRESHOLD=3
      • WORKFLOW_SCALE_DOWN_IDLE_TIME=30
    • Applies to all workflows.
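For Docker Compose deployments, these settings can be supplied through an override file. This is an illustrative sketch: the variable names and values come from the list above, but which services read them (here assumed to be the api service; a worker service may need the same block) should be verified against your deployment.

```yaml
# docker-compose.override.yaml (illustrative)
services:
  api:
    environment:
      DEBUG: "true"                        # enables DebugLoggingLayer
      WORKFLOW_MAX_EXECUTION_STEPS: "500"
      WORKFLOW_MAX_EXECUTION_TIME: "1200"
      WORKFLOW_CALL_MAX_DEPTH: "10"
      WORKFLOW_MIN_WORKERS: "1"
      WORKFLOW_MAX_WORKERS: "10"
      WORKFLOW_SCALE_UP_THRESHOLD: "3"
      WORKFLOW_SCALE_DOWN_IDLE_TIME: "30"
```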

More Controllable Parallel Branches

Execution Flow:

Start ─→ Unified Task Queue ─→ WorkerPool Scheduling
                          ├─→ Branch-1 Execution
                          └─→ Branch-2 Execution
                                  ↓
                            Aggregator
                                  ↓
                                  End

Improvements:
1. All tasks enter a single queue, managed by the Dispatcher.
2. WorkerPool auto-scales based on load.
3. ResponseCoordinator manages streaming outputs, ensuring correct order.
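To make the dispatcher/worker-pool idea concrete, here is a minimal self-contained sketch. It is not Dify's code: the scaling policy (grow one worker whenever the backlog exceeds a threshold, mirroring WORKFLOW_MIN_WORKERS / WORKFLOW_MAX_WORKERS / WORKFLOW_SCALE_UP_THRESHOLD) is simplified, and idle scale-down is omitted.

```python
import queue
import threading

class WorkerPoolSketch:
    """Illustrative only: all tasks flow through one queue, and the pool
    scales up while the backlog is deeper than a threshold."""

    def __init__(self, min_workers=1, max_workers=10, scale_up_threshold=3):
        self.tasks = queue.Queue()
        self.max_workers = max_workers
        self.scale_up_threshold = scale_up_threshold
        self.workers = []
        for _ in range(min_workers):
            self._spawn()

    def _spawn(self):
        t = threading.Thread(target=self._run, daemon=True)
        self.workers.append(t)
        t.start()

    def _run(self):
        while True:
            fn = self.tasks.get()
            if fn is None:            # sentinel: shut this worker down
                self.tasks.task_done()
                return
            try:
                fn()
            finally:
                self.tasks.task_done()

    def submit(self, fn):
        self.tasks.put(fn)
        # Dispatcher policy: grow the pool while the backlog is deep.
        if (self.tasks.qsize() > self.scale_up_threshold
                and len(self.workers) < self.max_workers):
            self._spawn()

    def shutdown(self):
        self.tasks.join()             # wait for queued work to finish
        for _ in self.workers:
            self.tasks.put(None)      # one sentinel per worker
        for t in self.workers:
            t.join()
```

Because every branch's tasks pass through the same queue, ordering and backpressure are decided in one place rather than per branch.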

Example: Command Mechanism

from core.workflow.graph_engine.manager import GraphEngineManager

# Send stop command
GraphEngineManager.send_stop_command(
    task_id="workflow_task_123",
    reason="Emergency stop: resource limit exceeded"
)

Note: pause/resume functionality will be supported in future versions.


Example: GraphEngineLayer

GraphEngineLayer Example
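As a hypothetical sketch of the layer pattern (the class and method names below are invented for illustration and are not Dify's actual API; only the event names come from this release note), a layer that counts engine events for custom monitoring might look like:

```python
class GraphEngineLayerSketch:
    """Hypothetical base class standing in for Dify's GraphEngineLayer;
    the real interface lives in the Dify source tree."""

    def on_event(self, event):
        pass

class NodeEventCounter(GraphEngineLayerSketch):
    """Counts events by type, e.g. to feed custom monitoring."""

    def __init__(self):
        self.counts = {}

    def on_event(self, event):
        name = type(event).__name__
        self.counts[name] = self.counts.get(name, 0) + 1

# Minimal stand-in event types; the real engine emits classes with these
# names (see the FAQ for the full list).
class NodeRunStartedEvent: ...
class NodeRunSucceededEvent: ...

layer = NodeEventCounter()
layer.on_event(NodeRunStartedEvent())
layer.on_event(NodeRunSucceededEvent())
layer.on_event(NodeRunStartedEvent())
print(layer.counts)  # {'NodeRunStartedEvent': 2, 'NodeRunSucceededEvent': 1}
```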


FAQ

  1. Is this release focused on performance?
    No. The focus is on stability, clarity, and correctness of parallel branches. Performance improvements are a secondary benefit.

  2. What events can be subscribed to?

    • Graph-level: GraphRunStartedEvent, GraphRunSucceededEvent, GraphRunFailedEvent, GraphRunAbortedEvent
    • Node-level: NodeRunStartedEvent, NodeRunSucceededEvent, NodeRunFailedEvent, NodeRunRetryEvent
    • Container nodes: IterationRunStartedEvent, IterationRunNextEvent, IterationRunSucceededEvent, LoopRunStartedEvent, LoopRunNextEvent, LoopRunSucceededEvent
    • Streaming output: NodeRunStreamChunkEvent
  3. How can I debug workflow execution?

    • Enable DEBUG=true to view detailed logs.
    • Use DebugLoggingLayer to record events.
    • Add custom monitoring via GraphEngineLayer.

Future Plans

This release is just the beginning. Upcoming improvements include:

  • Debugging Tools: A visual interface to view execution states and variables in real time.
  • Intelligent Scheduling: Optimize scheduling strategies using historical data.
  • More Complete Command Support: Add Pause/Resume, breakpoint debugging.
  • Human in the Loop: Support human intervention during execution.
  • Subgraph Functionality: Enhance modularity and reusability.
  • Multimodal Embedding: Support richer content types beyond text.

We look forward to your feedback and experiences to make the engine more practical.


Upgrade Guide

Important

After upgrading, you must run the following migration to transform existing datasource credentials. This step is required to ensure compatibility with the new version:

uv run flask transform-datasource-credentials

Docker Compose Deployments

  1. Back up your customized docker-compose YAML file (optional)
cd docker
cp docker-compose.yaml docker-compose.yaml.$(date +%s...
Read more

v2.0.0-beta.2

08 Sep 07:20
2a84832


Pre-release

Fixes

  • Fixed an issue in Workflow / Chatflow where using an LLM node with Memory could cause errors.
  • Fixed a blocking issue in non-pipeline mode when adding new Notion pages to the document list.
  • Fixed dark mode styling issues.

Upgrade Guide

Important

If upgrading from 0.x or 1.x, you must run the following migration to transform existing datasource credentials. This step is required to ensure compatibility with the new version:

uv run flask transform-datasource-credentials

Docker Compose Deployments

  1. Back up your customized docker-compose YAML file (optional)

    cd docker
    cp docker-compose.yaml docker-compose.yaml.$(date +%s).bak
  2. Get the latest code from the release branch

    git checkout 2.0.0-beta.2
    git pull origin 2.0.0-beta.2
  3. Stop the service. Please execute in the docker directory

    docker compose down
  4. Back up data

    tar -cvf volumes-$(date +%s).tgz volumes
  5. Upgrade services

    docker compose up -d
  6. Migrate data after the container starts

    docker exec -it docker-api-1 uv run flask transform-datasource-credentials

Source Code Deployments

  1. Stop the API server, Worker, and Web frontend Server.

  2. Get the latest code from the release branch:

    git checkout 2.0.0-beta.2
  3. Update Python dependencies:

    cd api
    uv sync
  4. Then, let's run the migration scripts:

    uv run flask db upgrade
    uv run flask transform-datasource-credentials
  5. Finally, run the API server, Worker, and Web frontend Server again.

v2.0.0-beta.1 – Orchestrating Knowledge, Powering Workflows

04 Sep 13:38
fae6d4f


[Screenshot: knowledge_pipeline]

🚀 Introduction

In Dify 2.0, we are introducing two major new capabilities: the Knowledge Pipeline and the Queue-based Graph Engine.

This is a beta release, and we hope to explore these improvements together with you and gather your feedback. The Knowledge Pipeline provides a modularized and extensible workflow for knowledge ingestion and processing, while the Queue-based Graph Engine makes workflow execution more robust and controllable. We believe these will help you build and debug AI applications more smoothly, and we look forward to your feedback as we continue to improve.


📚 Knowledge Pipeline

✨ Introduction

The brand-new orchestration interface for knowledge pipelines introduces a fundamental architectural upgrade that reshapes how document processing is designed and executed. It provides a more modular and flexible workflow that lets users orchestrate every stage of the pipeline, and, enhanced by a wide range of powerful plugins available in the marketplace, it empowers users to flexibly integrate diverse data sources and processing tools. Ultimately, this architecture enables building highly customized, domain-specific RAG solutions that meet enterprises’ growing demands for scalability, adaptability, and precision.

❓ Why Do We Need It?

Previously, Dify's RAG users encountered persistent challenges in real-world adoption, from inaccurate knowledge retrieval and information loss to limited data integration and extensibility. Common pain points included:

  • 🔗 restricted integration of data sources
  • 🖼️ missing critical elements such as tables and images
  • ✂️ suboptimal chunking results

All of these lead to poor answer quality and hinder overall model performance.

In response, we reimagined RAG in Dify as an open and modular architecture, enabling developers, integrators, and domain experts to build document processing pipelines tailored to their specific requirements—from data ingestion to chunk storage and retrieval.

🛠️ Core Capabilities

🧩 Knowledge Pipeline Architecture

The Knowledge Pipeline is a visual, node-based orchestration system dedicated to document ingestion. It provides a customizable way to automate complex document processing, enabling fine-grained transformations and bridging raw content with structured, retrievable knowledge. Developers can build workflows step by step, like assembling puzzle pieces, making document handling easier to observe and adjust.

📑 Templates & Pipeline DSL

[Screenshot: template]

  • ⚡ Start quickly with official templates
  • 🔄 Customize and share pipelines by importing/exporting via DSL for easier reusability and collaboration

🔌 Customizable Data Sources & Tools

[Screenshots: tools, tools-2]

Each knowledge base can support multiple data sources. You can seamlessly integrate local files, online documents, cloud drives, and web crawlers through a plugin-based ingestion framework. Developers can extend the ecosystem with new data-source plugins, while marketplace processors handle specialized use cases like formulas, spreadsheets, and image parsing — ensuring accurate ingestion and structured representation.

🧾 New Chunking Strategies

In addition to General and Parent-Child modes, the new Q&A Processor plugin supports Q&A structures. This expands coverage for more use cases, balancing retrieval precision with contextual completeness.

🖼️ Image Extraction & Retrieval

[Screenshot: image_in_pdf]

Extract images from documents in multiple formats, store them as URLs in the knowledge base, and enable mixed text-image outputs to improve LLM-generated answers.

🧪 Test Run & Debugging Support

Before publishing a pipeline, you can:

  • ▶️ Execute a single step or node independently
  • 🔍 Inspect intermediate variables in detail
  • 👀 Preview string variables as Markdown in the variable inspector

This provides safe iteration and debugging at every stage.

🔄 One-Click Migration from Legacy Knowledge Bases

Seamlessly convert existing knowledge bases into the Knowledge Pipeline architecture with a single action, ensuring smooth transition and backward compatibility.

🌟 Why It Matters

The Knowledge Pipeline makes knowledge management more transparent, debuggable, and extensible. It is not the endpoint, but a foundation for future enhancements such as multimodal retrieval, human-in-the-loop collaboration, and enterprise-level data governance. We’re excited to see how you apply it and share your feedback.


⚙️ Queue-based Graph Engine

❓ Why Do We Need It?

Previously, designing workflows with parallel branches often led to:

  • 🌀 Difficulty managing branch states and reproducing errors
  • ❌ Insufficient debugging information
  • 🧱 Rigid execution logic lacking flexibility

These issues reduced the usability of complex workflows. To solve this, we redesigned the execution engine around queue scheduling, improving management of parallel tasks.

🛠️ Core Capabilities

📋 Queue Scheduling Model

All tasks enter a unified queue, where the scheduler manages dependencies and order. This reduces errors in parallel execution and makes topology more intuitive.

🎯 Flexible Execution Start Points

Execution can begin at any node, supporting partial runs, resumptions, and subgraph invocations.

🌊 Stream Processing Component

A new ResponseCoordinator handles streaming outputs from multiple nodes, such as token-by-token LLM generation or staged results from long-running tasks.

🕹️ Command Mechanism

With the CommandProcessor, workflows can be paused, resumed, or terminated during execution, enabling external control.

🧩 GraphEngineLayer

A new plugin layer that extends engine functionality without modifying core code. Layers can observe execution state, send commands, and implement custom monitoring.


Quickstart

  1. Prerequisites
    • Dify version: 2.0.0-beta.1 or higher
  2. How to Enable
    • Enabled by default, no additional configuration required.
    • Debug mode: set DEBUG=true to enable DebugLoggingLayer.
    • Execution limits:
      • WORKFLOW_MAX_EXECUTION_STEPS=500
      • WORKFLOW_MAX_EXECUTION_TIME=1200
      • WORKFLOW_CALL_MAX_DEPTH=10
    • Worker configuration (optional):
      • WORKFLOW_MIN_WORKERS=1
      • WORKFLOW_MAX_WORKERS=10
      • WORKFLOW_SCALE_UP_THRESHOLD=3
      • WORKFLOW_SCALE_DOWN_IDLE_TIME=30
    • Applies to all workflows.

More Controllable Parallel Branches

Execution Flow:

Start ─→ Unified Task Queue ─→ WorkerPool Scheduling
                          ├─→ Branch-1 Execution
                          └─→ Branch-2 Execution
                                  ↓
                            Aggregator
                                  ↓
                                  End

Improvements:
1. All tasks enter a single queue, managed by the Dispatcher.
2. WorkerPool auto-scales based on load.
3. ResponseCoordinator manages streaming outputs, ensuring correct order.

Example: Command Mechanism

from core.workflow.graph_engine.manager import GraphEngineManager

# Send stop command
GraphEngineManager.send_stop_command(
    task_id="workflow_task_123",
    reason="Emergency stop: resource limit exceeded"
)

Note: pause/resume functionality will be supported in future versions.


Example: GraphEngineLayer

GraphEngineLayer Example


FAQ

  1. Is this release focused on performance?
    No. The focus is on stability, clarity, and correctness of parallel branches. Performance improvements are a secondary benefit.

  2. What events can be subscribed to?

    • Graph-level: GraphRunStartedEvent, GraphRunSucceededEvent, GraphRunFailedEvent, GraphRunAbortedEvent
    • Node-level: NodeRunStartedEvent, NodeRunSucceededEvent, NodeRunFailedEvent, NodeRunRetryEvent
    • Container nodes: IterationRunStartedEvent, IterationRunNextEvent, IterationRunSucceededEvent, LoopRunStartedEvent, LoopRunNextEvent, LoopRunSucceededEvent
    • Streaming output: NodeRunStreamChunkEvent
  3. How can I debug workflow execution?

    • Enable DEBUG=true to view detailed logs.
    • Use DebugLoggingLayer to record events.
    • Add custom monitoring via GraphEngineLayer.

Future Plans

This beta release is just the beginning. Upcoming improvements include:

  • Debugging Tools: A visual interface to view execution states and variables in real time.
  • Intelligent Scheduling: Optimize scheduling strategies using historical data.
  • More Complete Command Support: Add Pause/Resume, breakpoint debugging.
  • Human in the Loop: Support human intervention during execution.
  • Subgraph Functionality: Enhance modularity and reusability.
  • Multimodal Embedding: Support richer content types beyond text.

We look forward to your feedback and experiences to make the engine more practical.


Upgrade Guide

Important

After upgrading, you must run the following migration to transform existing datasource credentials. This step is required to ensure compatibility with the new version:

uv run flask transform-datasource-credentials

Docker Compose Deployments

  1. Back up your cus...
Read more

v1.8.1

03 Sep 11:07
c7700ac


🌟 What's New in v1.8.1? 🌟

Welcome to version 1.8.1! 🎉🎉🎉 This release focuses on stability, performance improvements, and developer experience enhancements. We've shipped new features and resolved critical database issues based on community feedback.

🚀 Features

  • Export DSL from History: Able to export workflow DSL directly from version history panel. (See #24939, by GuanMu)
  • Downvote with Reason: Enhanced feedback system allowing users to provide specific reasons when downvoting responses. (See #24922, by jubinsoni)
  • Multi-modal/File: Added filename support to multi-modal prompt messages. (See #24777, by -LAN-)
  • Advanced Chat File Handling: Improved assistant content parts and file handling in advanced chat mode. (See #24663, by QIN2DIM)

⚡ Enhancements

  • DB Query: Optimized SQL queries that were performing partial full table scans. (See #24786, by Novice)
  • Type Checking: Migrated from MyPy to Basedpyright. (See #25047, by -LAN-)
  • Indonesian Language Support: Added Indonesian (id-ID) language support. (See #24951, by lyzno1)
  • Jinja2 Template: LLM prompt Jinja2 templates now support more variables. (See #24944, by 17hz)

🐛 Fixes

  • Security/XSS: Fixed XSS vulnerability in block-input and support-var-input components. (See #24835, by lyzno1)
  • Persistence Session Management: Resolved critical database session binding issues that were causing "not bound to a Session" errors. (See #25010, #24966, by Will)
  • Workflow & UI Issues: Fixed workflow publishing problems, resolved UUID v7 conflicts, and addressed various UI component issues including modal handling and input field improvements. (See #25030, #24643, #25034, #24864, by Will, -LAN-, 17hz & Atif)

Version 1.8.1 represents a significant step forward in platform stability and developer experience. The migration to modern type checking, combined with the database session fixes and comprehensive bug fixes, creates a more robust foundation for future features.

Huge thanks to all our contributors who made this release possible! We welcome your ongoing feedback to help us continue improving the platform together.


Upgrade Guide

Docker Compose Deployments

  1. Back up your customized docker-compose YAML file (optional)

    cd docker
    cp docker-compose.yaml docker-compose.yaml.$(date +%s).bak
  2. Get the latest code from the main branch

    git checkout main
    git pull origin main
  3. Stop the service. Please execute in the docker directory

    docker compose down
  4. Back up data

    tar -cvf volumes-$(date +%s).tgz volumes
  5. Upgrade services

    docker compose up -d

Source Code Deployments

  1. Stop the API server, Worker, and Web frontend Server.

  2. Get the latest code from the release branch:

    git checkout 1.8.1
  3. Update Python dependencies:

    cd api
    uv sync
  4. Then, let's run the migration script:

    uv run flask db upgrade
  5. Finally, run the API server, Worker, and Web frontend Server again.


What's Changed

Read more