Commit 7c007a8

Merge pull request #10549 from rothja/bdcmssqlclustername
Updating notes about default mssql-cluster name
2 parents: 08309ac + e70be2e

12 files changed

Lines changed: 35 additions & 37 deletions

docs/big-data-cluster/big-data-cluster-create-apps.md

Lines changed: 1 addition & 1 deletion
@@ -75,7 +75,7 @@ If you are using AKS, you need to run the following command to get the IP address
 
 
 ```bash
-kubectl get svc mgmtproxy-svc-external -n <name of your cluster>
+kubectl get svc mgmtproxy-svc-external -n <name of your big data cluster>
 ```
 
 ## Kubeadm or Minikube
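Every command this commit touches passes the big data cluster's namespace to kubectl with `-n` and then reads the **EXTERNAL-IP** column by eye. As a minimal sketch (the service listing below is invented for illustration), the same column can be pulled out with awk:

```bash
# Hypothetical output of `kubectl get svc mgmtproxy-svc-external -n mssql-cluster`;
# against a live cluster you would pipe kubectl itself into awk instead.
sample_output='NAME                     TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)
mgmtproxy-svc-external   LoadBalancer   10.0.45.12   52.179.1.10   30777:31277/TCP'

# EXTERNAL-IP is the 4th whitespace-separated column; NR > 1 skips the header row.
external_ip=$(printf '%s\n' "$sample_output" | awk 'NR > 1 { print $4 }')
echo "$external_ip"
```

The same extraction works for any of the `*-svc-external` services renamed in this commit's examples.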

docs/big-data-cluster/connect-to-big-data-cluster.md

Lines changed: 4 additions & 1 deletion
@@ -32,9 +32,12 @@ To connect to a big data cluster with Azure Data Studio, make a new connection to
 1. From the command line, find the IP of your master instance with the following command:
 
    ```
-   kubectl get svc master-svc-external -n <your-cluster-name>
+   kubectl get svc master-svc-external -n <your-big-data-cluster-name>
    ```
 
+   > [!TIP]
+   > The big data cluster name defaults to **mssql-cluster** unless you customized the name in a deployment configuration file. For more information, see [Configure deployment settings for big data clusters](deployment-custom-configuration.md#clustername).
+
 1. In Azure Data Studio, press **F1** > **New Connection**.
 
 1. In **Connection type**, select **Microsoft SQL Server**.

docs/big-data-cluster/data-ingestion-curl.md

Lines changed: 3 additions & 3 deletions
@@ -20,14 +20,14 @@ This article explains how to use **curl** to load data into HDFS on SQL Server 2019
 
 ## Obtain the service external IP
 
-WebHDFS is started when deployment is completed, and its access goes through Knox. The Knox endpoint is exposed through a Kubernetes service called **gateway-svc-external**. To create the necessary WebHDFS URL to upload/download files, you need the **gateway-svc-external** service external IP address and the name of your cluster. You can get the **gateway-svc-external** service external IP address by running the following command:
+WebHDFS is started when deployment is completed, and its access goes through Knox. The Knox endpoint is exposed through a Kubernetes service called **gateway-svc-external**. To create the necessary WebHDFS URL to upload/download files, you need the **gateway-svc-external** service external IP address and the name of your big data cluster. You can get the **gateway-svc-external** service external IP address by running the following command:
 
 ```bash
-kubectl get service gateway-svc-external -n <cluster name> -o json | jq -r .status.loadBalancer.ingress[0].ip
+kubectl get service gateway-svc-external -n <big data cluster name> -o json | jq -r .status.loadBalancer.ingress[0].ip
 ```
 
 > [!NOTE]
-> The `<cluster name>` here is the name of the cluster that you specified in the deployment configuration file. The default name is `mssql-cluster`.
+> The `<big data cluster name>` here is the name of the cluster that you specified in the deployment configuration file. The default name is `mssql-cluster`.
 
 ## Construct the URL to access WebHDFS
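The changed paragraph says the WebHDFS URL is built from the **gateway-svc-external** external IP and the cluster name. A hedged sketch of that assembly, assuming the Knox gateway URL shape `https://<ip>:<port>/gateway/default/webhdfs/v1/` (verify against the article's "Construct the URL" section; the IP and port below are placeholders, not values from this commit):

```bash
# Placeholder values; in practice the IP comes from the kubectl/jq command above.
gateway_ip="52.179.1.10"
gateway_port="30443"   # assumed Knox gateway port, not taken from this commit

# Assumed WebHDFS-over-Knox URL shape.
webhdfs_url="https://${gateway_ip}:${gateway_port}/gateway/default/webhdfs/v1/"
echo "$webhdfs_url"
```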

docs/big-data-cluster/data-ingestion-restore-database.md

Lines changed: 2 additions & 2 deletions
@@ -32,7 +32,7 @@ This article shows how to restore the AdventureWorks database, but you can use a
 Copy the backup file to the SQL Server container in the master instance pod of the Kubernetes cluster.
 
 ```bash
-kubectl cp <path to .bak file> mssql-master-pool-0:/tmp -c mssql-server -n <name of your cluster>
+kubectl cp <path to .bak file> mssql-master-pool-0:/tmp -c mssql-server -n <name of your big data cluster>
 ```
 
 Example:
@@ -44,7 +44,7 @@ kubectl cp ~/Downloads/AdventureWorks2016CTP3.bak mssql-master-pool-0:/tmp -c ms
 Then, verify that the backup file was copied to the pod container.
 
 ```bash
-kubectl exec -it mssql-master-pool-0 -n <name of your cluster> -c mssql-server -- bin/bash
+kubectl exec -it mssql-master-pool-0 -n <name of your big data cluster> -c mssql-server -- bin/bash
 cd /var/
 ls /tmp
 exit

docs/big-data-cluster/deploy-get-started.md

Lines changed: 1 addition & 1 deletion
@@ -46,7 +46,7 @@ After configuring Kubernetes, you deploy a big data cluster with the `mssqlctl cluster create` command.
 
 - If you are deploying to a dev-test environment, you can choose to use one of the [default configurations](deployment-guidance.md#deploy) provided by **mssqlctl**.
 
-- To customize your deployment, you can create and use your own [deployment configuration files](deployment-guidance.md#configfile).
+- To customize your deployment, you can create and use your own [deployment configuration files](deployment-guidance.md#configfile).
 
 - For a completely unattended installation, you can pass all other settings in environment variables. For more information, see [unattended deployments](deployment-guidance.md#unattended).

docs/big-data-cluster/deployment-custom-configuration.md

Lines changed: 1 addition & 1 deletion
@@ -45,7 +45,7 @@ mssqlctl cluster config section set -c custom.json -j ".metadata.name=test-cluster"
 ```
 
 > [!IMPORTANT]
-> The name of your cluster must be only lower case alpha-numeric characters, no spaces. All Kubernetes artifacts (containers, pods, statefull sets, services) for the cluster will be created in a namespace with same name as the cluster name specified.
+> The name of your big data cluster must be only lower case alpha-numeric characters, no spaces. All Kubernetes artifacts (containers, pods, statefull sets, services) for the cluster will be created in a namespace with same name as the cluster name specified.
 
 ## <a id="ports"></a> Update endpoint ports
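The IMPORTANT note above constrains the cluster name, and the Kubernetes namespace takes the same name. A minimal validation sketch (`is_valid_cluster_name` is a hypothetical helper; it also accepts hyphens, an assumption based on the default name `mssql-cluster` containing one, even though the note says only alpha-numeric characters):

```bash
# Hypothetical helper: succeeds only for DNS-1123-style labels -- lower case
# alphanumerics and hyphens, starting and ending with an alphanumeric.
# (Accepting hyphens is an assumption; the note itself says only alpha-numeric.)
is_valid_cluster_name() {
  printf '%s' "$1" | grep -Eq '^[a-z0-9]([a-z0-9-]*[a-z0-9])?$'
}

is_valid_cluster_name "mssql-cluster" && echo "mssql-cluster: ok"
is_valid_cluster_name "My Cluster" || echo "My Cluster: rejected"
```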

docs/big-data-cluster/deployment-guidance.md

Lines changed: 9 additions & 4 deletions
@@ -87,8 +87,10 @@ You can deploy a big data cluster by running **mssqlctl cluster create**. This p
 mssqlctl cluster create
 ```
 
-> [!TIP]
-> In this example, you are prompted for any settings that are not part of the default configuration, such as passwords. Note that the Docker information is provided to you by Microsoft as part of the SQL Server 2019 [Early Adoption Program](https://aka.ms/eapsignup).
+In this scenario, you are prompted for any settings that are not part of the default configuration, such as passwords. Note that the Docker information is provided to you by Microsoft as part of the SQL Server 2019 [Early Adoption Program](https://aka.ms/eapsignup).
+
+> [!IMPORTANT]
+> The default name of the big data cluster is **mssql-cluster**. This is important to know in order to run any of the **kubectl** commands that specify the Kubernetes namespace with the `-n` parameter.
 
 ## <a id="customconfig"></a> Custom configurations
@@ -215,9 +217,12 @@ After the deployment script has completed successfully, you can obtain the IP address
 1. After the deployment, find the IP address of the controller endpoint by looking at the EXTERNAL-IP output of the following **kubectl** command:
 
    ```bash
-   kubectl get svc controller-svc-external -n <your-cluster-name>
+   kubectl get svc controller-svc-external -n <your-big-data-cluster-name>
    ```
 
+   > [!TIP]
+   > If you did not change the default name during deployment, use `-n mssql-cluster` in the previous command. **mssql-cluster** is the default name for the big data cluster.
+
 1. Log in to the big data cluster with **mssqlctl login**. Set the **--controller-endpoint** parameter to the external IP address of the controller endpoint.
 
    ```bash
@@ -262,7 +267,7 @@ minikube ip
 Irrespective of the platform you are running your Kubernetes cluster on, to get all the service endpoints deployed for the cluster, run following command:
 
 ```bash
-kubectl get svc -n <your-cluster-name>
+kubectl get svc -n <your-big-data-cluster-name>
 ```
 
 ## <a id="connect"></a> Connect to the cluster
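The TIP added above tells readers to fall back to `-n mssql-cluster` when no custom name was chosen. In a script, that fallback is a single parameter expansion (the `BDC_CLUSTER_NAME` variable name is invented for this sketch):

```bash
# Use the caller's cluster name if set, otherwise the documented default.
namespace="${BDC_CLUSTER_NAME:-mssql-cluster}"

# Echo rather than execute, since this sketch has no live cluster to query.
echo "kubectl get svc controller-svc-external -n ${namespace}"
```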

docs/big-data-cluster/hdfs-tiering-mount-adlsgen2.md

Lines changed: 1 addition & 1 deletion
@@ -98,7 +98,7 @@ Now that you have prepared a credential file with either access keys or using OAuth
 1. Use **kubectl** to find the IP Address for the endpoint **controller-svc-external** service in your big data cluster. Look for the **External-IP**.
 
    ```bash
-   kubectl get svc controller-svc-external -n <your-cluster-name>
+   kubectl get svc controller-svc-external -n <your-big-data-cluster-name>
    ```
 
 1. Log in with **mssqlctl** using the external IP address of the controller endpoint with your cluster username and password:

docs/big-data-cluster/hdfs-tiering-mount-s3.md

Lines changed: 1 addition & 1 deletion
@@ -46,7 +46,7 @@ Now that you have prepared a credential file with access keys, you can start mounting
 1. Use **kubectl** to find the IP Address for the endpoint **controller-svc-external** service in your big data cluster. Look for the **External-IP**.
 
    ```bash
-   kubectl get svc controller-svc-external -n <your-cluster-name>
+   kubectl get svc controller-svc-external -n <your-big-data-cluster-name>
    ```
 
 1. Log in with **mssqlctl** using the external IP address of the controller endpoint with your cluster username and password:

docs/big-data-cluster/quickstart-big-data-cluster-deploy.md

Lines changed: 9 additions & 19 deletions
@@ -77,7 +77,7 @@ Use the following steps to run the deployment script. This script will create an
 | **Azure region** | The Azure region for the new AKS cluster (default **westus**). |
 | **Machine size** | The [machine size](https://docs.microsoft.com/azure/virtual-machines/windows/sizes) to use for nodes in the AKS cluster (default **Standard_L8s**). |
 | **Worker nodes** | The number of worker nodes in the AKS cluster (default **1**). |
-| **Cluster name** | The name of both the AKS cluster and the big data cluster. The name of your cluster must be only lower case alpha-numeric characters, and no spaces. (default **sqlbigdata**). |
+| **Cluster name** | The name of both the AKS cluster and the big data cluster. The name of your big data cluster must be only lower case alpha-numeric characters, and no spaces. (default **sqlbigdata**). |
 | **Password** | Password for the controller, HDFS/Spark gateway, and master instance (default **MySQLBigData2019**). |
 | **Controller user** | Username for the controller user (default: **admin**). |
@@ -113,7 +113,7 @@ After 10 to 20 minutes, you should be notified that the controller pod is running
 
 ## Inspect the cluster
 
-At any time during deployment, you can use kubectl or the Cluster Administration Portal to inspect the status and details about the running big data cluster.
+At any time during deployment, you can use **kubectl** or **mssqlctl** to inspect the status and details about the running big data cluster.
 
 ### Use kubectl
 
@@ -122,43 +122,33 @@ Open a new command window to use **kubectl** during the deployment process.
 1. Run the following command to get a summary of the status of the whole cluster:
 
    ```
-   kubectl get all -n <your-cluster-name>
+   kubectl get all -n <your-big-data-cluster-name>
    ```
 
+   > [!TIP]
+   > If you did not change the big data cluster name, the script defaults to **sqlbigdata**.
+
 1. Inspect the kubernetes services and their internal and external endpoints with the following **kubectl** command:
 
    ```
-   kubectl get svc -n <your-cluster-name>
+   kubectl get svc -n <your-big-data-cluster-name>
    ```
 
 1. You can also inspect the status of the kubernetes pods with the following command:
 
   ```
-   kubectl get pods -n <your-cluster-name>
+   kubectl get pods -n <your-big-data-cluster-name>
   ```
 
 1. Find out more information about a specific pod with the following command:
 
   ```
-   kubectl describe pod <pod name> -n <your-cluster-name>
+   kubectl describe pod <pod name> -n <your-big-data-cluster-name>
   ```
 
 > [!TIP]
 > For more details about how to monitor and troubleshoot a deployment, see [Monitoring and troubleshoot SQL Server big data clusters](cluster-troubleshooting-commands.md).
 
-### Use the Cluster Administration Portal
-
-Once the Controller pod is running, you can also use the Cluster Administration Portal to monitor the deployment. You can access the portal using the external IP address and port number for the `mgmtproxy-svc-external` (for example: **https://\<ip-address\>:30777/portal**). The credentials used to log into the portal match the values for **Controller user** and **Password** that you specified in the deployment script.
-
-You can get the IP address of the **mgmtproxy-svc-external** service by running this command in a bash or cmd window:
-
-```bash
-kubectl get svc mgmtproxy-svc-external -n <your-cluster-name>
-```
-
-> [!NOTE]
-> In CTP 3.0, you will see a security warning when accessing the web page, because big data clusters is currently using auto-generated SSL certificates.
-
 ## Connect to the cluster
 
 When the deployment script finishes, the output notifies you of success:
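The quickstart's inspection steps boil down to watching `kubectl get pods -n <your-big-data-cluster-name>` until everything reports Running. A small sketch of that check against canned output (the pod listing below is invented for illustration):

```bash
# Invented sample of `kubectl get pods -n sqlbigdata` during a deployment.
sample_pods='NAME                  READY   STATUS    RESTARTS   AGE
mssql-controller-0    1/1     Running   0          5m
mssql-master-pool-0   0/1     Pending   0          5m'

# Count pods whose STATUS column (3rd field) is not yet Running.
not_running=$(printf '%s\n' "$sample_pods" | awk 'NR > 1 && $3 != "Running"' | wc -l)
echo "pods not running: ${not_running}"
```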
