Hi, we have 3-4 files of about 3 GB each, with over 500 million records in each, that need to be processed by the Kafka FilePulse connector. One file with 520 million records takes about 3 hours, while another file with 540 million records takes about 40 minutes.
So I am not sure why one file takes so much longer than the other when the file size and the number of records are almost the same.
My assumption is that the JSON converter and the drop filter are involved; maybe the JSON parsing is eating up the time. Do you think this could be causing the issue? Do I really need to parse the input JSON just to figure out whether a record should be filtered out, or is there something else I could use to make the process run more smoothly and faster?
Can you please take a look at the configuration below and see if there is anything we can do to speed up the reading process?
Hi @veeraraghukiranyerva, how many tasks did you use for your test? JVM settings can also have an impact on performance. I also recommend taking a look at this issue, which points to some configuration settings that may improve performance: #383
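For context, tasks.max is the standard Kafka Connect property that caps how many tasks a connector may start; the configuration shown below does not set it, so it defaults to 1 and a single task ends up reading every file. A minimal sketch of the addition (the value 4 is only an illustration, roughly one task per pending file):

{
  "tasks.max": "4"
}

FilePulse assigns whole files to tasks, so this helps only when several files are waiting to be processed; a single file is still read by one task. The JVM settings mentioned above are configured on the Connect worker itself (for example via the KAFKA_HEAP_OPTS environment variable), not in the connector configuration.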
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Here is the configuration we are currently using:
{
  "name" : "iHub-clog",
  "config" : {
    "batch.size" : "100000",
    "buffer.initial.bytes.size" : "16384",
    "connector.class" : "io.streamthoughts.kafka.connect.filepulse.source.FilePulseSourceConnector",
    "file.filter.minimum.age.ms" : "10000",
    "filters" : "ParseJson, ExcludeMessageField, Drop",
    "filters.Drop.if" : "{{ matches($.CLOG_Objects.File, '(CAPP|CUSTOMER|PREFERENCE.MGMT|IHUB.C.AR|COUNTY.CODE|MKTG.CODE|MKTG.CUSTOMER|PMNT)') }}",
    "filters.Drop.invert" : "true",
    "filters.Drop.type" : "io.streamthoughts.kafka.connect.filepulse.filter.DropFilter",
    "filters.ExcludeMessageField.fields" : "message",
    "filters.ExcludeMessageField.type" : "io.streamthoughts.kafka.connect.filepulse.filter.ExcludeFilter",
    "filters.ParseJson.merge" : "true",
    "filters.ParseJson.type" : "io.streamthoughts.kafka.connect.filepulse.filter.JSONFilter",
    "fs.cleanup.policy.class" : "io.streamthoughts.kafka.connect.filepulse.fs.clean.LogCleanupPolicy",
    "fs.listing.class" : "io.streamthoughts.kafka.connect.filepulse.fs.LocalFSDirectoryListing",
    "fs.listing.directory.path" : "/tlextfs",
    "fs.listing.filters" : "io.streamthoughts.kafka.connect.filepulse.fs.filter.LastModifiedFileListFilter",
    "fs.listing.interval.ms" : "30000",
    "key.converter" : "org.apache.kafka.connect.storage.StringConverter",
    "name" : "iHub-clog",
    "read.max.wait.ms" : "600000",
    "tasks.file.status.storage.bootstrap.servers" : "XXXXXXXX-servers",
    "tasks.reader.class" : "io.streamthoughts.kafka.connect.filepulse.fs.reader.LocalRowFileInputReader",
    "topic" : "connectit",
    "value.converter" : "org.apache.kafka.connect.json.JsonConverter"
  }
}
Can you suggest a similar configuration that would speed up the process for us?
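A sketch of changes that may be worth benchmarking, built only from the settings already shown plus two standard Connect properties (tasks.max and the JsonConverter option value.converter.schemas.enable); this is an untested suggestion, not a verified faster setup. The ideas: run one task per pending file, stop embedding a JSON schema in every produced record, and evaluate the drop condition on the raw line before the JSONFilter so that lines destined to be discarded are never parsed. The pre-parse drop assumes the row reader exposes each line in a field named "message" (as the existing ExcludeFilter suggests) and is only equivalent to the original filter if those keywords cannot appear anywhere else in a line. All other settings stay as they are:

{
  "tasks.max": "4",
  "value.converter.schemas.enable": "false",
  "filters": "DropRaw, ParseJson, ExcludeMessageField",
  "filters.DropRaw.type": "io.streamthoughts.kafka.connect.filepulse.filter.DropFilter",
  "filters.DropRaw.if": "{{ matches($.message, '(CAPP|CUSTOMER|PREFERENCE.MGMT|IHUB.C.AR|COUNTY.CODE|MKTG.CODE|MKTG.CUSTOMER|PMNT)') }}",
  "filters.DropRaw.invert": "true",
  "filters.ParseJson.type": "io.streamthoughts.kafka.connect.filepulse.filter.JSONFilter",
  "filters.ParseJson.merge": "true",
  "filters.ExcludeMessageField.type": "io.streamthoughts.kafka.connect.filepulse.filter.ExcludeFilter",
  "filters.ExcludeMessageField.fields": "message"
}

DropRaw is just a new filter alias; the regular expression is copied verbatim from the original Drop filter. Note that the unescaped dots in tokens such as PREFERENCE.MGMT match any character, so escape them if literal dots are intended. If the two files still differ wildly in processing time after these changes, it is also worth checking what share of each file's records survives the drop condition, since every record that is kept pays for a JSON parse plus a produce to Kafka.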