[MINOR] fix(docs): Fix several document errors (#6251)
### What changes were proposed in this pull request?

Fix several errors in the documentation for the Hadoop catalog and the Hive catalog.

### Why are the changes needed?

Improves the user experience.

### Does this PR introduce _any_ user-facing change?

N/A.

### How was this patch tested?

N/A.
yuqi1129 authored Jan 15, 2025
1 parent 704292e commit a13d435
Showing 3 changed files with 11 additions and 7 deletions.
2 changes: 1 addition & 1 deletion docs/hadoop-catalog-with-gcs.md
@@ -47,7 +47,7 @@ Refer to [Fileset configurations](./hadoop-catalog.md#fileset-properties) for mo

This section will show you how to use the Hadoop catalog with GCS in Gravitino, including detailed examples.

-### Create a Hadoop catalog with GCS
+### Step1: Create a Hadoop catalog with GCS

First, you need to create a Hadoop catalog with GCS. The following example shows how to create a Hadoop catalog with GCS:

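For context around this change, a minimal sketch of what creating such a catalog can look like with the Gravitino Python client, following the `create_catalog` pattern visible elsewhere in this diff; the server URI, metalake name, and GCS property keys are illustrative assumptions, and import paths may differ by client version:

```python
from gravitino import GravitinoClient, Catalog

# Connect to a Gravitino server and metalake (URI and names are assumed placeholders).
client = GravitinoClient(uri="http://localhost:8090", metalake_name="example_metalake")

# Create a Hadoop (fileset) catalog whose root location points at a GCS bucket.
gcs_catalog = client.create_catalog(
    name="test_catalog",
    catalog_type=Catalog.Type.FILESET,
    provider="hadoop",
    comment="Hadoop catalog backed by GCS",
    properties={
        "location": "gs://example-bucket/catalog",           # assumed bucket path
        "filesystem-providers": "gcs",                       # assumed property key
        "gcs-service-account-file": "/path/to/sa-key.json",  # assumed property key
    },
)
```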
5 changes: 2 additions & 3 deletions docs/hadoop-catalog-with-oss.md
@@ -123,7 +123,7 @@ oss_catalog = gravitino_client.create_catalog(name="test_catalog",
</TabItem>
</Tabs>

-Step 2: Create a Schema
+### Step 2: Create a Schema

Once the Hadoop catalog with OSS is created, you can create a schema inside that catalog. Below are examples of how to do this:
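
As a sketch of the collapsed example for this step, schema creation with the Python client continues from the `oss_catalog` object referenced in the hunk header above; the comment and `location` property shown here are assumptions, not lines from the documentation:

```python
# Create a schema inside the OSS-backed Hadoop catalog created in Step 1.
# The storage location below is an assumed example path.
oss_catalog.as_schemas().create_schema(
    name="test_schema",
    comment="schema for OSS-backed filesets",
    properties={"location": "oss://example-bucket/test_catalog/test_schema"},
)
```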

@@ -174,11 +174,10 @@ catalog.as_schemas().create_schema(name="test_schema",
</Tabs>


-### Create a fileset
+### Step3: Create a fileset

Now that the schema is created, you can create a fileset inside it. Here’s how:


<Tabs groupId="language" queryString>
<TabItem value="shell" label="Shell">

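The fileset example itself is collapsed in this view; as a rough sketch, creation with the Python client might look like the following, where the identifiers, parameter names, and storage location are all assumptions rather than content from this commit:

```python
from gravitino import NameIdentifier, Fileset

# Create a managed fileset under the schema from Step 2.
# Identifier layout, keyword names, and the OSS path are illustrative placeholders.
catalog.as_fileset_catalog().create_fileset(
    ident=NameIdentifier.of("test_schema", "example_fileset"),
    type=Fileset.Type.MANAGED,
    comment="example fileset",
    storage_location="oss://example-bucket/test_catalog/test_schema/example_fileset",
    properties={},
)
```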
11 changes: 8 additions & 3 deletions docs/hive-catalog-with-cloud-storage.md
@@ -1,8 +1,8 @@
---
-title: "Hive catalog with s3 and adls"
+title: "Hive catalog with S3, ADLS and GCS"
slug: /hive-catalog
date: 2024-9-24
-keyword: Hive catalog cloud storage S3 ADLS
+keyword: Hive catalog cloud storage S3 ADLS GCS
license: "This software is licensed under the Apache License version 2."
---

@@ -84,8 +84,13 @@ cp ${HADOOP_HOME}/share/hadoop/tools/lib/*aws* ${HIVE_HOME}/lib

# For Azure Blob Storage(ADLS)
cp ${HADOOP_HOME}/share/hadoop/tools/lib/*azure* ${HIVE_HOME}/lib

+# For Google Cloud Storage(GCS)
+cp gcs-connector-hadoop3-2.2.22-shaded.jar ${HIVE_HOME}/lib
```

+[`gcs-connector-hadoop3-2.2.22-shaded.jar`](https://github.com/GoogleCloudDataproc/hadoop-connectors/releases/download/v2.2.22/gcs-connector-hadoop2-2.2.22-shaded.jar) is the bundled JAR that contains the Hadoop GCS connector; choose the connector JAR that matches the Hadoop version you are using.

Alternatively, you can download the required JARs from the Maven repository and place them in the Hive classpath. It is crucial to verify that the JARs are compatible with the version of Hadoop you are using to avoid any compatibility issue.

### Restart Hive metastore
@@ -265,7 +270,7 @@ To access S3-stored tables using Spark, you need to configure the SparkSession a
sparkSession.sql("...");
```
-:::Note
+:::note
Please download the [Hadoop AWS jar](https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-aws) and the [AWS Java SDK jar](https://mvnrepository.com/artifact/com.amazonaws/aws-java-sdk-bundle) and place them in the classpath of Spark. If these JARs are missing, Spark will not be able to access S3 storage.
Azure Blob Storage (ADLS) requires the [Hadoop Azure jar](https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-azure) and the [Azure cloud SDK jar](https://mvnrepository.com/artifact/com.azure/azure-storage-blob) to be placed in the classpath of Spark.
For Google Cloud Storage (GCS), you need to download the [Hadoop GCS jar](https://github.com/GoogleCloudDataproc/hadoop-connectors/releases) and place it in the classpath of Spark.
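
Since the Spark example above is collapsed, here is a minimal PySpark sketch of the kind of S3A configuration the section describes; the credentials, endpoint, metastore URI, and table name are placeholder assumptions, not values from this commit:

```python
from pyspark.sql import SparkSession

# Build a SparkSession with S3A settings so Hive tables stored on S3 are readable.
# All values below are placeholders; supply your own keys, endpoint, and metastore URI.
spark = (
    SparkSession.builder
    .appName("hive-s3-example")
    .config("spark.hadoop.fs.s3a.access.key", "<access-key>")
    .config("spark.hadoop.fs.s3a.secret.key", "<secret-key>")
    .config("spark.hadoop.fs.s3a.endpoint", "s3.amazonaws.com")
    .config("hive.metastore.uris", "thrift://hive-metastore:9083")
    .enableHiveSupport()
    .getOrCreate()
)

# Query a Hive table whose data lives on S3 (database and table names are assumptions).
spark.sql("SELECT * FROM example_db.example_table LIMIT 10").show()
```

The same pattern applies to ADLS and GCS; only the `fs.*` configuration keys and the connector JARs listed in the note above differ.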
