---
title: Mount S3 for HDFS tiering
titleSuffix: SQL Server big data clusters
description: This article explains how to configure HDFS tiering to mount an external S3 file system into HDFS on a [!INCLUDE[big-data-clusters-2019](../includes/ssbigdataclusters-ver15.md)].
author: nelgson
ms.author: negust
ms.reviewer: mikeray
ms.date: 08/21/2019
ms.topic: conceptual
ms.prod: sql
ms.technology: big-data-cluster
---
The following sections provide an example of how to configure HDFS tiering with an S3 Storage data source.

## Prerequisites

- Deployed big data cluster
- Big data tools
  - **azdata**
  - **kubectl**

### Create and upload data to an S3 bucket

- Upload CSV or Parquet files to your S3 bucket. This is the external HDFS data that will be mounted to HDFS in the big data cluster.
## Set the credentials environment variable

Open a command prompt on a client machine that can access your big data cluster, and set an environment variable in the following format. The credentials must be in a comma-separated list. The `set` command is used on Windows; if you are using Linux, use `export` instead.

```text
set MOUNT_CREDENTIALS=fs.s3a.access.key=<Access Key ID of the key>,
fs.s3a.secret.key=<Secret Access Key of the key>
```
> [!TIP]
> For more information on how to create S3 access keys, see S3 access keys.
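On Linux, the equivalent of the `set` command above is `export`. The following is a minimal sketch; the key values shown are fake placeholders, not real credentials:

```shell
# Sketch: build the credential string on Linux (use `set` on Windows).
# AKIAEXAMPLEKEY and exampleSecretKey are fake placeholder values.
export MOUNT_CREDENTIALS="fs.s3a.access.key=AKIAEXAMPLEKEY,fs.s3a.secret.key=exampleSecretKey"

# Verify the variable is set as expected.
echo "$MOUNT_CREDENTIALS"
```

Note that the access key and secret key are joined into a single comma-separated string, with no spaces around the comma.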
## Mount the remote HDFS storage

Now that you have set the `MOUNT_CREDENTIALS` environment variable with your access keys, you can start mounting. The following steps mount the remote HDFS storage in S3 to the local HDFS storage of your big data cluster.
1. Use `kubectl` to find the IP address of the `controller-svc-external` service in your big data cluster. Look for the **External-IP**.

   ```bash
   kubectl get svc controller-svc-external -n <your-big-data-cluster-name>
   ```

1. Log in with `azdata` using the external IP address of the controller endpoint with your cluster username and password:

   ```bash
   azdata login -e https://<IP-of-controller-svc-external>:30080/
   ```

1. Set the environment variable `MOUNT_CREDENTIALS` following the instructions above.

1. Mount the remote HDFS storage in S3 using `azdata bdc hdfs mount create`. Replace the placeholder values before running the following command:

   ```bash
   azdata bdc hdfs mount create --remote-uri s3a://<S3 bucket name> --mount-path /mounts/<mount-name>
   ```
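As a concrete illustration of how the placeholders are filled in, the following sketch composes the mount command for a hypothetical bucket named `sales-data` and a mount named `s3-sales` (both names are examples, not values from this article):

```shell
# Sketch: compose the mount command for hypothetical placeholder values.
# "sales-data" and "s3-sales" are made-up example names.
BUCKET="sales-data"
MOUNT_NAME="s3-sales"

CMD="azdata bdc hdfs mount create --remote-uri s3a://${BUCKET} --mount-path /mounts/${MOUNT_NAME}"

# Print the command that would be run against the cluster.
echo "${CMD}"
```

Note that the remote URI uses the `s3a://` scheme, and the mount path is a location under `/mounts` in the cluster's HDFS.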
> [!NOTE]
> The mount create command is asynchronous. At this time, there is no message indicating whether the mount succeeded. See the status section to check the status of your mounts.
If the mount is created successfully, you can query the HDFS data and run Spark jobs against it. The data appears in the HDFS for your big data cluster in the location specified by `--mount-path`.
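For example, once the mount is available you could browse it with a standard HDFS client. The sketch below only builds and prints such a command, assuming the hypothetical mount name `s3-sales` used above:

```shell
# Sketch: the mounted data appears under the --mount-path location in HDFS.
# "s3-sales" is a made-up example mount name.
MOUNT_NAME="s3-sales"
MOUNT_PATH="/mounts/${MOUNT_NAME}"

# From a machine with HDFS client access to the cluster, you could list the
# mounted files with a command like this one:
echo "hdfs dfs -ls ${MOUNT_PATH}"
```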
## Status

To list the status of all mounts in your big data cluster, use the following command:

```bash
azdata bdc hdfs mount status
```

To list the status of a mount at a specific path in HDFS, use the following command:

```bash
azdata bdc hdfs mount status --mount-path <mount-path-in-hdfs>
```

## Refresh

The following example refreshes the mount:

```bash
azdata bdc hdfs mount refresh --mount-path <mount-path-in-hdfs>
```

## Delete

To delete the mount, use the `azdata bdc hdfs mount delete` command, and specify the mount path in HDFS:

```bash
azdata bdc hdfs mount delete --mount-path <mount-path-in-hdfs>
```

## Next steps

For more information about [!INCLUDE[big-data-clusters-2019](../includes/ssbigdataclusters-ver15.md)], see [What are [!INCLUDE[big-data-clusters-2019](../includes/ssbigdataclusters-ver15.md)]?](big-data-cluster-overview.md).