Attempted to add support for camel case fields via storage api. #307
The Storage API streaming implementation (at least in Java: https://github.com/googleapis/java-bigquerystorage/blob/8efb8131ff89b57509b4b122c75f765c62514b1c/google-cloud-bigquerystorage/src/main/java/com/google/cloud/bigquery/storage/v1/BQTableSchemaToProtoDescriptor.java#L145) maps the fields in the protobuf message to lower case. When a stream request is processed and the protobuf message is unmarshalled, the field names are (naturally) lower case, but when the table data is built, fields that are camelCase in the database are ignored because they arrive lower-cased in the request.
Since I cannot evaluate the eventual outcome of replacing the mapping check with a lower-case comparison outright, I added a fallback lookup using lower-cased field names. According to Google, column names should be case-insensitive (https://cloud.google.com/bigquery/docs/reference/standard-sql/lexical#case_sensitivity).
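For illustration, a minimal sketch of the fallback approach described above (all class and method names here are hypothetical, not the actual code in this PR): try an exact-case match first, and only fall back to a lower-cased comparison when that misses, so existing behavior is unchanged for columns that already match.

```java
import java.util.HashMap;
import java.util.Locale;
import java.util.Map;

public class FieldLookup {
    /**
     * Resolve a column by the field name that arrived in the proto message.
     * The Storage API proto descriptor lower-cases field names, so an
     * exact-case lookup misses camelCase columns; the fallback compares
     * both sides lower-cased, matching BigQuery's case-insensitive columns.
     */
    public static String resolveColumn(Map<String, String> columnsByName,
                                       String protoFieldName) {
        // Exact-case match first (covers columns already in lower case).
        String exact = columnsByName.get(protoFieldName);
        if (exact != null) {
            return exact;
        }
        // Fallback: case-insensitive comparison of column names.
        String wanted = protoFieldName.toLowerCase(Locale.ROOT);
        for (Map.Entry<String, String> e : columnsByName.entrySet()) {
            if (e.getKey().toLowerCase(Locale.ROOT).equals(wanted)) {
                return e.getValue();
            }
        }
        return null; // no matching column
    }

    public static void main(String[] args) {
        Map<String, String> columns = new HashMap<>();
        columns.put("userId", "INT64"); // camelCase column in the schema
        columns.put("name", "STRING");
        // The proto descriptor lower-cases "userId" to "userid",
        // which only the fallback path can resolve.
        System.out.println(resolveColumn(columns, "userid"));
        System.out.println(resolveColumn(columns, "name"));
    }
}
```

If the extra scan over columns matters, the lower-cased keys could be precomputed into a second map when the schema is loaded, at the cost of keeping the two maps in sync.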
Please advise if a better solution exists.