`docs/big-data-cluster/connect-to-big-data-cluster.md` (4 additions, 1 deletion)
@@ -32,9 +32,12 @@ To connect to a big data cluster with Azure Data Studio, make a new connection t
 1. From the command line, find the IP of your master instance with the following command:

    ```
-   kubectl get svc master-svc-external -n <your-cluster-name>
+   kubectl get svc master-svc-external -n <your-big-data-cluster-name>
    ```

+   > [!TIP]
+   > The big data cluster name defaults to **mssql-cluster** unless you customized the name in a deployment configuration file. For more information, see [Configure deployment settings for big data clusters](deployment-custom-configuration.md#clustername).
+
 1. In Azure Data Studio, press **F1** > **New Connection**.

 1. In **Connection type**, select **Microsoft SQL Server**.
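In Azure Data Studio, the master instance is addressed as `<external-ip>,<port>` in the **Server** field. A minimal sketch of assembling that server string; the IP is a placeholder, and the 31433 port is my assumption for the default external master endpoint, not a value stated in this diff:

```shell
# Placeholder: substitute the EXTERNAL-IP reported by
#   kubectl get svc master-svc-external -n <your-big-data-cluster-name>
MASTER_IP="203.0.113.10"
MASTER_PORT="31433"   # assumed default external port for the master instance

# Azure Data Studio's Server field takes the form "ip,port"
SERVER="${MASTER_IP},${MASTER_PORT}"
echo "$SERVER"
```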
`docs/big-data-cluster/data-ingestion-curl.md` (3 additions, 3 deletions)
@@ -20,14 +20,14 @@ This article explains how to use **curl** to load data into HDFS on SQL Server 2
 ## Obtain the service external IP

-WebHDFS is started when deployment is completed, and its access goes through Knox. The Knox endpoint is exposed through a Kubernetes service called **gateway-svc-external**. To create the necessary WebHDFS URL to upload/download files, you need the **gateway-svc-external** service external IP address and the name of your cluster. You can get the **gateway-svc-external** service external IP address by running the following command:
+WebHDFS is started when deployment is completed, and its access goes through Knox. The Knox endpoint is exposed through a Kubernetes service called **gateway-svc-external**. To create the necessary WebHDFS URL to upload/download files, you need the **gateway-svc-external** service external IP address and the name of your big data cluster. You can get the **gateway-svc-external** service external IP address by running the following command:

 ```bash
-kubectl get service gateway-svc-external -n <cluster name> -o json | jq -r .status.loadBalancer.ingress[0].ip
+kubectl get service gateway-svc-external -n <big data cluster name> -o json | jq -r .status.loadBalancer.ingress[0].ip
 ```

 > [!NOTE]
-> The `<cluster name>` here is the name of the cluster that you specified in the deployment configuration file. The default name is `mssql-cluster`.
+> The `<big data cluster name>` here is the name of the cluster that you specified in the deployment configuration file. The default name is `mssql-cluster`.
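Putting the pieces together, the WebHDFS URL is built from the gateway IP. A sketch under my assumption that the Knox gateway listens on port 30443 and uses the `gateway/default/webhdfs/v1` path; verify both against your deployment:

```shell
# Placeholder: substitute the IP printed by the kubectl command above
GATEWAY_IP="203.0.113.20"

# Assumed Knox gateway port and WebHDFS path for big data clusters
WEBHDFS_URL="https://${GATEWAY_IP}:30443/gateway/default/webhdfs/v1/"
echo "$WEBHDFS_URL"

# Example usage (needs a live cluster, so left commented): list the HDFS root
# curl -i -k -u root:<password> "${WEBHDFS_URL}?op=LISTSTATUS"
```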
`docs/big-data-cluster/deploy-get-started.md` (1 addition, 1 deletion)
@@ -46,7 +46,7 @@ After configuring Kubernetes, you deploy a big data cluster with the `mssqlctl c
 - If you are deploying to a dev-test environment, you can choose to use one of the [default configurations](deployment-guidance.md#deploy) provided by **mssqlctl**.

-- To customize your deployment, you can create and use your own [deployment configuration files](deployment-guidance.md#configfile).
+- To customize your deployment, you can create and use your own [deployment configuration files](deployment-guidance.md#configfile).

 - For a completely unattended installation, you can pass all other settings in environment variables. For more information, see [unattended deployments](deployment-guidance.md#unattended).

-> The name of your cluster must be only lower case alpha-numeric characters, no spaces. All Kubernetes artifacts (containers, pods, statefull sets, services) for the cluster will be created in a namespace with same name as the cluster name specified.
+> The name of your big data cluster must be only lowercase alphanumeric characters, with no spaces. All Kubernetes artifacts (containers, pods, stateful sets, and services) for the cluster are created in a namespace with the same name as the specified cluster name.
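Because the cluster name becomes a Kubernetes namespace, it must also satisfy the Kubernetes DNS-1123 label rules, which permit hyphens (the default **mssql-cluster** itself contains one). A quick pre-deployment check; this is my own sketch, not part of **mssqlctl**:

```shell
NAME="mssql-cluster"   # candidate big data cluster name

# DNS-1123 label: lowercase alphanumerics and '-', starting and ending alphanumeric
if printf '%s' "$NAME" | grep -Eq '^[a-z0-9]([a-z0-9-]*[a-z0-9])?$'; then
  VALID="yes"
else
  VALID="no"
fi
echo "$VALID"
```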
`docs/big-data-cluster/deployment-guidance.md` (9 additions, 4 deletions)
@@ -87,8 +87,10 @@ You can deploy a big data cluster by running **mssqlctl cluster create**. This p
    mssqlctl cluster create
    ```

-   > [!TIP]
-   > In this example, you are prompted for any settings that are not part of the default configuration, such as passwords. Note that the Docker information is provided to you by Microsoft as part of the SQL Server 2019 [Early Adoption Program](https://aka.ms/eapsignup).
+   In this scenario, you are prompted for any settings that are not part of the default configuration, such as passwords. Note that the Docker information is provided to you by Microsoft as part of the SQL Server 2019 [Early Adoption Program](https://aka.ms/eapsignup).
+
+   > [!IMPORTANT]
+   > The default name of the big data cluster is **mssql-cluster**. This is important to know in order to run any of the **kubectl** commands that specify the Kubernetes namespace with the `-n` parameter.

 ## <a id="customconfig"></a> Custom configurations
@@ -215,9 +217,12 @@ After the deployment script has completed successfully, you can obtain the IP ad
 1. After the deployment, find the IP address of the controller endpoint by looking at the EXTERNAL-IP output of the following **kubectl** command:

    ```bash
-   kubectl get svc controller-svc-external -n <your-cluster-name>
+   kubectl get svc controller-svc-external -n <your-big-data-cluster-name>
    ```

+   > [!TIP]
+   > If you did not change the default name during deployment, use `-n mssql-cluster` in the previous command. **mssql-cluster** is the default name for the big data cluster.
+
 1. Log in to the big data cluster with **mssqlctl login**. Set the **--controller-endpoint** parameter to the external IP address of the controller endpoint.
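To capture only the controller's EXTERNAL-IP for use with **mssqlctl login**, kubectl's `-o jsonpath` output option can trim the output down. A sketch assuming the default **mssql-cluster** name and a LoadBalancer-type service; the command is only echoed here, since executing it requires a deployed cluster:

```shell
# Build the one-liner; run it only against a live deployment
IP_CMD="kubectl get svc controller-svc-external -n mssql-cluster -o jsonpath='{.status.loadBalancer.ingress[0].ip}'"
echo "$IP_CMD"
```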
@@ -262,7 +267,7 @@ minikube ip
 Irrespective of the platform you are running your Kubernetes cluster on, to get all the service endpoints deployed for the cluster, run the following command:
`docs/big-data-cluster/quickstart-big-data-cluster-deploy.md` (9 additions, 19 deletions)
@@ -77,7 +77,7 @@ Use the following steps to run the deployment script. This script will create an
 |**Azure region**| The Azure region for the new AKS cluster (default **westus**). |
 |**Machine size**| The [machine size](https://docs.microsoft.com/azure/virtual-machines/windows/sizes) to use for nodes in the AKS cluster (default **Standard_L8s**). |
 |**Worker nodes**| The number of worker nodes in the AKS cluster (default **1**). |
-|**Cluster name**| The name of both the AKS cluster and the big data cluster. The name of your cluster must be only lower case alpha-numeric characters, and no spaces. (default **sqlbigdata**). |
+|**Cluster name**| The name of both the AKS cluster and the big data cluster. The name of your big data cluster must be only lowercase alphanumeric characters, with no spaces (default **sqlbigdata**). |
 |**Password**| Password for the controller, HDFS/Spark gateway, and master instance (default **MySQLBigData2019**). |
 |**Controller user**| Username for the controller user (default: **admin**). |
@@ -113,7 +113,7 @@ After 10 to 20 minutes, you should be notified that the controller pod is runnin
 ## Inspect the cluster

-At any time during deployment, you can use kubectl or the Cluster Administration Portal to inspect the status and details about the running big data cluster.
+At any time during deployment, you can use **kubectl** or **mssqlctl** to inspect the status and details of the running big data cluster.

 ### Use kubectl
@@ -122,43 +122,33 @@ Open a new command window to use **kubectl** during the deployment process.
 1. Run the following command to get a summary of the status of the whole cluster:

    ```
-   kubectl get all -n <your-cluster-name>
+   kubectl get all -n <your-big-data-cluster-name>
    ```

+   > [!TIP]
+   > If you did not change the big data cluster name, the script defaults to **sqlbigdata**.
+
 1. Inspect the Kubernetes services and their internal and external endpoints with the following **kubectl** command:

    ```
-   kubectl get svc -n <your-cluster-name>
+   kubectl get svc -n <your-big-data-cluster-name>
    ```

 1. You can also inspect the status of the Kubernetes pods with the following command:

    ```
-   kubectl get pods -n <your-cluster-name>
+   kubectl get pods -n <your-big-data-cluster-name>
    ```

 1. Find out more information about a specific pod with the following command:

    ```
-   kubectl describe pod <pod name> -n <your-cluster-name>
+   kubectl describe pod <pod name> -n <your-big-data-cluster-name>
    ```

 > [!TIP]
 > For more details about how to monitor and troubleshoot a deployment, see [Monitor and troubleshoot SQL Server big data clusters](cluster-troubleshooting-commands.md).
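The inspection commands above differ only in the resource they query, so parameterizing the namespace once avoids retyping it. A sketch that assumes the quickstart default name **sqlbigdata**; the commands are only echoed, since running them needs a live cluster:

```shell
NS="sqlbigdata"   # default name from the quickstart deployment script

# Resource-level views, from broadest to most specific
SUMMARY_CMD="kubectl get all -n $NS"
SVC_CMD="kubectl get svc -n $NS"
PODS_CMD="kubectl get pods -n $NS"
echo "$SUMMARY_CMD"
echo "$SVC_CMD"
echo "$PODS_CMD"
```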
-### Use the Cluster Administration Portal
-
-Once the Controller pod is running, you can also use the Cluster Administration Portal to monitor the deployment. You can access the portal using the external IP address and port number for the `mgmtproxy-svc-external` service (for example: **https://\<ip-address\>:30777/portal**). The credentials used to log into the portal match the values for **Controller user** and **Password** that you specified in the deployment script.
-
-You can get the IP address of the **mgmtproxy-svc-external** service by running this command in a bash or cmd window:
-
-```bash
-kubectl get svc mgmtproxy-svc-external -n <your-cluster-name>
-```
-
-> [!NOTE]
-> In CTP 3.0, you will see a security warning when accessing the web page, because big data clusters currently use auto-generated SSL certificates.
-
 ## Connect to the cluster

 When the deployment script finishes, the output notifies you of success: