Search before asking

What happened

Source config:

```
FakeSource {
  result_table_name = "fake"
  row.num = 300
  int.min = 1
  int.max = 300
  schema = {
    fields {
      name = "string"
      id = "int"
      age = "int"
    }
  }
}
```

Paimon schema:

```json
{
  "version" : 3,
  "id" : 0,
  "fields" : [
    { "id" : 0, "name" : "name", "type" : "STRING" },
    { "id" : 1, "name" : "id", "type" : "INT NOT NULL" },
    { "id" : 2, "name" : "age", "type" : "INT NOT NULL" }
  ],
  "highestFieldId" : 2,
  "partitionKeys" : [ ],
  "primaryKeys" : [ "id" ],
  "options" : {
    "bucket" : "-1",
    "file.format" : "orc",
    "manifest.format" : "orc",
    "dynamic-bucket.target-row-num" : "30"
  },
  "timeMillis" : 1731551425602
}
```

After multiple writes from the source to the Paimon dynamic bucket table, rows with the same primary key were not merged, resulting in duplicate data. The issue appears to be caused by rows with the same primary key not being assigned to the same bucket.
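For context, here is a minimal sketch (hypothetical, not Paimon's actual implementation) of why deterministic key-to-bucket assignment matters for primary-key merging: each bucket deduplicates rows by primary key on write, so if two writes of the same key land in different buckets, each bucket keeps its own copy and the duplicate is never merged.

```python
# Hypothetical sketch of merge-on-write bucket semantics; not Paimon's code.
# Each bucket merges rows by primary key. If the key -> bucket mapping is
# deterministic, re-writing a key replaces the old row; if two writes map
# the same key to different buckets, both copies survive as duplicates.

def assign_bucket(key: int, num_buckets: int) -> int:
    # Deterministic assignment: the same key always lands in the same bucket.
    return hash(key) % num_buckets

def upsert(buckets: list[dict], key: int, row: dict, bucket_id: int) -> None:
    # Within one bucket, the primary key deduplicates (last write wins).
    buckets[bucket_id][key] = row

# Correct behavior: both writes of id=7 resolve to the same bucket -> one row.
buckets = [dict() for _ in range(4)]
upsert(buckets, 7, {"id": 7, "age": 30}, assign_bucket(7, 4))
upsert(buckets, 7, {"id": 7, "age": 31}, assign_bucket(7, 4))
total = sum(len(b) for b in buckets)
print(total)  # 1: the second write replaced the first

# Behavior matching this report: the second write is routed to a different
# bucket, so the same primary key now exists twice in the table.
buckets = [dict() for _ in range(4)]
upsert(buckets, 7, {"id": 7, "age": 30}, 0)
upsert(buckets, 7, {"id": 7, "age": 31}, 1)
duplicates = sum(len(b) for b in buckets)
print(duplicates)  # 2: duplicate primary key
```

The sketch only illustrates the failure mode; whether the misrouting comes from the writer or from Paimon's dynamic bucket index would need investigation.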
SeaTunnel Version
2.3.8
SeaTunnel Config
Running Command
Error Exception
Zeta or Flink or Spark Version
Flink 1.18
Java or Scala Version
No response
Screenshots
No response
Are you willing to submit PR?
Code of Conduct