readwrite-splitting data sources is incorrect when tableless #34387
Comments
What do you mean by tableless write operations? Can you give an example?
Sorry, is the following explanation OK?

Read operations:
Expected behavior:

Write operations that do not include transactions:
Expected behavior:

Write operations that include transactions:
Expected behavior:
@JoshuaChen Thank you for your feedback. To solve this problem, we may need to convert the physical data source in connectionContext.getUsedDataSourceNames() into a logical data source.
@strongduanmu I don't think so. If there are aggregatedDataSources with other rules in the configuration, the data in aggregatedDataSources should be taken first. If we convert the physical data source in connectionContext.getUsedDataSourceNames() into a logical data source, then when a data source is picked at random from that list, the logical data source will still be returned, and READWRITE_SPLITTING will still be skipped because aggregatedDataSources has a higher priority. Besides the READWRITE_SPLITTING rule, we have other rules; a more complex example would be the configuration sketched below.
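The commenter's original example is not preserved in this thread. As a rough, hypothetical illustration of such a multi-rule setup, the sketch below stacks a SHARDING rule on top of the readwrite-splitting logical data source; all names are illustrative, and the `dataSourceGroups` key is an assumption (some 5.x versions use `dataSources` instead):

```yaml
rules:
- !SHARDING
  tables:
    t_order:
      # the sharding rule refers to the readwrite-splitting logical data source
      actualDataNodes: joshua.t_order
- !READWRITE_SPLITTING
  dataSourceGroups:   # assumption: `dataSources` in some 5.x versions
    joshua:
      writeDataSourceName: write_ds
      readDataSourceNames:
        - read_ds_0
      loadBalancerName: random
  loadBalancers:
    random:
      type: RANDOM
```

In a layout like this, the aggregated (logical) data source joshua is what the other rules see, which is why routing decisions made against the physical names write_ds / read_ds_0 can bypass the readwrite-splitting rule.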
I updated my PR and added unit tests to try to explain my changes. I found that BroadcastRoute also does not support it.
If I'm wrong, please correct me.
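For context, broadcast tables are declared with their own rule in 5.x. A minimal hypothetical sketch (the table name is illustrative, and the `dataSourceGroups` key is an assumption) of a broadcast rule alongside readwrite splitting, presumably the kind of combination the BroadcastRoute remark above refers to:

```yaml
rules:
- !BROADCAST
  tables:
    - t_config   # illustrative broadcast table
- !READWRITE_SPLITTING
  dataSourceGroups:   # assumption: `dataSources` in some 5.x versions
    joshua:
      writeDataSourceName: write_ds
      readDataSourceNames:
        - read_ds_0
      loadBalancerName: random
  loadBalancers:
    random:
      type: RANDOM
```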
I will spend some time investigating this issue.
Thank you very much. I'd be happy to contribute if there's anything I can help with.
Bug Report
When using the following configuration, the value passed to rule.findDataSourceGroupRule(logicDataSourceName) is write_ds or read_ds_0, which fails to locate the correct dataSourceGroups entry (e.g., joshua in the configuration). This causes the readwrite-splitting logic to be skipped. This PR fixes the issue by ensuring that the logicDataSourceName parameter passed in is joshua.
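The exact configuration is not reproduced here; a minimal sketch matching the names mentioned in this report (joshua as the logical data source, write_ds and read_ds_0 as physical ones) might look roughly like the following. The connection details are placeholders, and the `dataSourceGroups` key is an assumption (older 5.x versions use `dataSources`):

```yaml
dataSources:
  write_ds:
    dataSourceClassName: com.zaxxer.hikari.HikariDataSource
    driverClassName: com.mysql.cj.jdbc.Driver
    jdbcUrl: jdbc:mysql://localhost:3306/write_ds   # placeholder
    username: root
    password: root
  read_ds_0:
    dataSourceClassName: com.zaxxer.hikari.HikariDataSource
    driverClassName: com.mysql.cj.jdbc.Driver
    jdbcUrl: jdbc:mysql://localhost:3306/read_ds_0  # placeholder
    username: root
    password: root
rules:
- !READWRITE_SPLITTING
  dataSourceGroups:   # assumption: `dataSources` in older 5.x versions
    joshua:           # the logical name that findDataSourceGroupRule should receive
      writeDataSourceName: write_ds
      readDataSourceNames:
        - read_ds_0
      loadBalancerName: random
  loadBalancers:
    random:
      type: RANDOM
```

With a configuration like this, the expectation is that tableless write operations resolve the logical data source joshua and are routed to write_ds, rather than being matched against the physical names directly.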
Which version of ShardingSphere did you use?
5.5.0 / 5.5.1 / 5.5.2-SNAPSHOT trigger the bug
5.4.1 works normally
Which project did you use? ShardingSphere-JDBC or ShardingSphere-Proxy?
ShardingSphere-JDBC
Expected behavior
Tableless write operations run inside transactions
Tableless write operations are routed to write_ds
Actual behavior
Tableless write operations run inside transactions
Tableless write operations are not always routed to write_ds; they are sometimes routed to read_ds_0
Reason analyze (If you can)
#34340