I am reading data from a Kafka topic and getting this error:
Caused by: java.lang.IllegalArgumentException: Cannot add column, name already exists: A.b.c
at org.apache.iceberg.relocated.com.google.common.base.Preconditions.checkArgument(Preconditions.java:445)
at org.apache.iceberg.SchemaUpdate.internalAddColumn(SchemaUpdate.java:156)
at org.apache.iceberg.SchemaUpdate.addColumn(SchemaUpdate.java:107)
at org.apache.iceberg.UpdateSchema.addColumn(UpdateSchema.java:107)
at io.tabular.iceberg.connect.data.SchemaUtils.lambda$commitSchemaUpdates$4(SchemaUtils.java:127)
at java.base/java.util.ArrayList.forEach(ArrayList.java:1541)
at io.tabular.iceberg.connect.data.SchemaUtils.commitSchemaUpdates(SchemaUtils.java:126)
at io.tabular.iceberg.connect.data.SchemaUtils.lambda$applySchemaUpdates$0(SchemaUtils.java:93)
at org.apache.iceberg.util.Tasks$Builder.runTaskWithRetry(Tasks.java:413)
at org.apache.iceberg.util.Tasks$Builder.runSingleThreaded(Tasks.java:219)
at org.apache.iceberg.util.Tasks$Builder.run(Tasks.java:203)
at org.apache.iceberg.util.Tasks$Builder.run(Tasks.java:196)
at io.tabular.iceberg.connect.data.SchemaUtils.applySchemaUpdates(SchemaUtils.java:93)
at io.tabular.iceberg.connect.data.IcebergWriter.convertToRow(IcebergWriter.java:98)
at io.tabular.iceberg.connect.data.IcebergWriter.write(IcebergWriter.java:68)
... 20 more
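For context, here is a minimal sketch (an assumption about the mechanism, not the connector's actual code) of the schema-evolution path visible in the stack trace: the sink compares the record's fields against the table schema and calls `addColumn` for any field it considers missing, and Iceberg's `SchemaUpdate.internalAddColumn` then re-checks the name and throws exactly this `IllegalArgumentException` if the column is already present (for example because the add was computed from a stale view of the table schema). The class and helper names below are hypothetical:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class SchemaEvolutionSketch {

    // Mirrors the Preconditions.checkArgument guard in
    // org.apache.iceberg.SchemaUpdate.internalAddColumn: adding a column
    // whose name already exists fails with IllegalArgumentException.
    static void addColumn(Map<String, String> columns, String name, String type) {
        if (columns.containsKey(name)) {
            throw new IllegalArgumentException(
                "Cannot add column, name already exists: " + name);
        }
        columns.put(name, type);
    }

    public static void main(String[] args) {
        // Existing table columns, keyed by their exact names.
        Map<String, String> tableColumns = new LinkedHashMap<>();
        tableColumns.put("A.b.c", "string");

        // Field the connector decided to add, e.g. because the decision was
        // made against an out-of-date snapshot of the table schema.
        try {
            addColumn(tableColumns, "A.b.c", "string");
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
            // prints: Cannot add column, name already exists: A.b.c
        }
    }
}
```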
This is my sink properties file:
name=iceberg-sink-connector
topics=sample-topic
connector.class=io.tabular.iceberg.connect.IcebergSinkConnector
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
value.converter.schemas.enable=false
key.converter.schemas.enable=false
iceberg.tables.evolve-schema-enabled=true
iceberg.catalog=fake-bucket
iceberg.catalog.type=hadoop
iceberg.table.name=bronze.simple1
iceberg.catalog.warehouse=
iceberg.tables.auto-create-enabled=true
iceberg.tables.schema-evolution.enabled=true
iceberg.catalog.s3.path-style-access=false
iceberg.tables.evolve-schema-enabled=true
iceberg.tables.dynamic-enabled=false
iceberg.tables.schema-force-optional=false
iceberg.tables.schema-case-insensitive=false
iceberg.catalog.io-impl=org.apache.iceberg.aws.s3.S3FileIO
commit.timeout.ms=120000
iceberg.commit.timeout-ms=60000
iceberg.tables=bronze.simple1
log4j.logger.io.confluent.connect.s3=DEBUG
iceberg.control.commit.timeout-ms=200000
iceberg.control.commit.interval-ms=200
iceberg.connection.timeout.ms=300000
iceberg.read.timeout.ms=300000
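One thing worth noting about the file above: some keys appear more than once (e.g. `iceberg.tables.evolve-schema-enabled`), and Kafka Connect loads standalone `.properties` files via `java.util.Properties`, where the last occurrence of a key silently overrides any earlier one. A small sketch with hypothetical values to illustrate:

```java
import java.io.StringReader;
import java.util.Properties;

public class DuplicateKeySketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical fragment with the same key assigned twice.
        String props =
            "iceberg.tables.evolve-schema-enabled=true\n"
          + "iceberg.tables.evolve-schema-enabled=false\n";

        Properties p = new Properties();
        p.load(new StringReader(props));

        // Only the final assignment survives; no warning is emitted.
        System.out.println(p.getProperty("iceberg.tables.evolve-schema-enabled"));
        // prints: false
    }
}
```

This is harmless when the duplicate values agree, but it makes it easy to believe a setting is active when a later line has overridden it.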
I tried the below configuration as well:
iceberg.tables.write-props.write.operation=append
consumer.write.operation=append
write.operation=merge