
Commit c81c97b

20210702 1812
1 parent 509b803 commit c81c97b

3 files changed

Lines changed: 33 additions & 32 deletions

docs/big-data-cluster/deploy-on-aks.md

Lines changed: 5 additions & 5 deletions
@@ -2,11 +2,11 @@
 title: Configure Azure Kubernetes Service
 titleSuffix: SQL Server Big Data Clusters
 description: Learn how to configure Azure Kubernetes Service (AKS) for SQL Server 2019 big data cluster deployments.
-author: MikeRayMSFT
-ms.author: mikeray
-ms.reviewer: mihaelab
+author: WilliamDAssafMSFT
+ms.author: wiassaf
+ms.reviewer:
 ms.metadata: seo-lt-2019
-ms.date: 12/13/2019
+ms.date: 07/02/2021
 ms.topic: conceptual
 ms.prod: sql
 ms.technology: big-data-cluster
@@ -98,7 +98,7 @@ Before you run the command, update the script. Replace `<Azure data center>` wit
 az aks get-versions `
    --location <Azure data center> `
    --query orchestrators `
-   --o table
+   -o table
 ```
 
 Choose the latest available version for your cluster. Record the version number. You will use it in the next step.
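The hunk above corrects `--o table` to `-o table`: the Azure CLI's short output flag takes a single dash. Collapsed onto one line with the placeholder kept, the corrected command reads as follows (a sketch, not runnable as-is: it requires the Azure CLI and a signed-in account, and `<Azure data center>` stays a placeholder for a region name):

```
# Requires: az CLI, az login; <Azure data center> is a placeholder (e.g. a region name)
az aks get-versions --location <Azure data center> --query orchestrators -o table
```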

docs/big-data-cluster/deployment-custom-configuration.md

Lines changed: 4 additions & 4 deletions
@@ -2,8 +2,8 @@
 title: Configure deployments
 titleSuffix: SQL Server big data clusters
 description: Learn how to customize a big data cluster deployment with configuration files that are built into the azdata management tool.
-author: MikeRayMSFT
-ms.author: mikeray
+author: WilliamDAssafMSFT
+ms.author: wiassaf
 ms.reviewer: rajmera3
 ms.date: 02/11/2021
 ms.topic: conceptual
@@ -12,7 +12,7 @@ ms.technology: big-data-cluster
 ---
 
 
-# Configure deployment settings for cluster resources and services
+# Configure deployment settings for Big Data Cluster resources and services
 
 [!INCLUDE[SQL Server 2019](../includes/applies-to-version/sqlserver2019.md)]
 > [!Note]
@@ -334,7 +334,7 @@ First create a patch.json file as below that adjust the *storage* settings
         }
       }
     },
-        {
+    {
       "op": "add",
       "path": "spec.resources.master.spec.storage",
      "value": {
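The hunk above only re-indents one brace inside a patch.json example, so the surrounding file is easy to lose sight of. A minimal sketch of such a patch file follows; the storage sizes and class names are illustrative, and the commented `azdata bdc config patch` invocation assumes a profile directory named `custom`:

```shell
# Write a JSON-patch file that adds a storage spec for the master resource
# (values below are illustrative, not recommendations).
cat > patch.json <<'EOF'
{
  "patch": [
    {
      "op": "add",
      "path": "spec.resources.master.spec.storage",
      "value": {
        "data": { "className": "default", "accessMode": "ReadWriteOnce", "size": "15Gi" },
        "logs": { "className": "default", "accessMode": "ReadWriteOnce", "size": "10Gi" }
      }
    }
  ]
}
EOF
# azdata bdc config patch --config-file custom/bdc.json --patch-file patch.json
python3 -m json.tool patch.json > /dev/null && echo "patch.json parses as valid JSON"
```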

docs/big-data-cluster/deployment-guidance.md

Lines changed: 24 additions & 23 deletions
@@ -2,9 +2,9 @@
 title: Deployment guidance
 titleSuffix: SQL Server Big Data Clusters
 description: Learn how to deploy SQL Server Big Data Clusters on Kubernetes.
-author: MikeRayMSFT
-ms.author: mikeray
-ms.reviewer: mihaelab
+author: WilliamDAssafMSFT
+ms.author: wiassaf
+ms.reviewer:
 ms.date: 06/22/2020
 ms.topic: conceptual
 ms.prod: sql
@@ -15,7 +15,7 @@ ms.technology: big-data-cluster
 
 [!INCLUDE[SQL Server 2019](../includes/applies-to-version/sqlserver2019.md)]
 
-A SQL Server big data cluster is deployed as docker containers on a Kubernetes cluster. This is an overview of the setup and configuration steps:
+SQL Server Big Data Cluster is deployed as docker containers on a Kubernetes cluster. This is an overview of the setup and configuration steps:
 
 - Set up a Kubernetes cluster on a single VM, cluster of VMs, in Azure Kubernetes Service (AKS), Red Hat OpenShift or in Azure Red Hat OpenShift (ARO).
 - Install the cluster configuration tool [!INCLUDE [azure-data-cli-azdata](../includes/azure-data-cli-azdata.md)] on your client machine.
@@ -25,7 +25,7 @@ A SQL Server big data cluster is deployed as docker containers on a Kubernetes c
 
 See [Supported platforms](release-notes-big-data-cluster.md#supported-platforms) for a complete list of the various Kubernetes platforms validated for deploying SQL Server Big Data Clusters.
 
-### SQL Server Editions
+### SQL Server editions
 
 |Edition|Notes|
 |---------|---------|
@@ -58,15 +58,16 @@ kubectl config view
 ```
 
 > [!Important]
-> If you are deploying on a multi node Kuberntes cluster that you bootstrapped using `kubeadm`, before starting the big data cluster deployment, ensure the clocks are synchronized across all the Kubernetes nodes the deployment is targeting. The big data cluster has built-in health properties for various services that are time sensitive and clock skews can result in incorrect status.
+> If you are deploying on a multi node Kubernetes cluster that you bootstrapped using `kubeadm`, before starting the big data cluster deployment, ensure the clocks are synchronized across all the Kubernetes nodes the deployment is targeting. The big data cluster has built-in health properties for various services that are time sensitive and clock skews can result in incorrect status.
 
 After you have configured your Kubernetes cluster, you can proceed with the deployment of a new SQL Server big data cluster. If you are upgrading from a previous release, please see [How to upgrade [!INCLUDE[big-data-clusters-2019](../includes/ssbigdataclusters-ss-nover.md)]](deployment-upgrade.md).
 
 ## Ensure you have storage configured
 
-Most big data cluster deployments should have persistent storage. At this time, you need to make sure you have a plan for how you're going to provide persistent storage on the Kubernetes cluster before you deploy the BDC.
+Most big data cluster deployments should have persistent storage. At this time, you need to make sure you have a plan for how you're going to provide persistent storage on the Kubernetes cluster before you deploy.
 
-If you deploy in AKS, no storage setup is necessary. AKS provides built-in storage classes with dynamic provisioning. You can customize the storage class (`default` or `managed-premium`) in the deployment configuration file. The built-in profiles use a `default` storage class. If you are deploying on a Kubernetes cluster you deployed using `kubeadm`, you'll need to ensure you have sufficient storage for a cluster of your desired scale available and configured for use. If you wish to customize how your storage is used, you should do this before proceeding. See [Data persistence with SQL Server big data cluster on Kubernetes](concept-data-persistence.md).
+- If you deploy in AKS, no storage setup is necessary. AKS provides built-in storage classes with dynamic provisioning. You can customize the storage class (`default` or `managed-premium`) in the deployment configuration file. The built-in profiles use a `default` storage class.
+- If you are deploying on a Kubernetes cluster you deployed using `kubeadm`, you'll need to ensure you have sufficient storage for a cluster of your desired scale available and configured for use. If you wish to customize how your storage is used, you should do this before proceeding. See [Data persistence with SQL Server big data cluster on Kubernetes](concept-data-persistence.md).
 
 ## Install SQL Server 2019 Big Data tools
 
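The `[!Important]` note in the hunk above warns that clock skew across `kubeadm` nodes can produce incorrect health status. A rough sketch of the per-node check you might script is below; the remote reading is stubbed out here, and in practice it would come from something like `ssh <node> date +%s` (hypothetical access):

```shell
# Compare this machine's epoch time against a node's reported epoch time.
local_epoch=$(date +%s)
node_epoch=$local_epoch          # stand-in for the remote node's `date +%s`
skew=$((local_epoch - node_epoch))
if [ "${skew#-}" -le 2 ]; then   # ${skew#-} strips a leading minus sign
  echo "clock skew within tolerance (${skew}s)"
else
  echo "clock skew too large (${skew}s); synchronize NTP before deploying"
fi
```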
@@ -92,24 +93,24 @@ Big data cluster deployment options are defined in JSON configuration files. You
 > [!NOTE]
 > The container images required for the big data cluster deployment are hosted on Microsoft Container Registry (`mcr.microsoft.com`), in the `mssql/bdc` repository. By default, these settings are already included in the `control.json` configuration file in each of the deployment profiles included with [!INCLUDE [azure-data-cli-azdata](../includes/azure-data-cli-azdata.md)]. In addition, the container image tag for each release is also pre-populated in the same configuration file. If you need to pull the container images into your own private container registry and or modify the container registry/repository settings, follow the instructions in the [Offline installation article](deploy-offline.md)
 
-Run this command to find what are the templates available:
+Run this command to find the templates available:
 
-```
+```bash
 azdata bdc config list -o table
 ```
 
 The following templates are available as of SQL Server 2019 CU5:
 
 | Deployment profile | Kubernetes environment |
 |---|---|
-| `aks-dev-test` | Deploy SQL Server big data cluster on Azure Kubernetes Service (AKS)|
-| `aks-dev-test-ha` | Deploy SQL Server big data cluster on Azure Kubernetes Service (AKS). Mission critical services like SQL Server master and HDFS name node are configured for high availability.|
-| `aro-dev-test`|Deploy SQL Server big data cluster on Azure Red Hat OpenShift for development and testing. <br/><br/>Introduced in SQL Server 2019 CU 5.|
-| `aro-dev-test-ha`|Deploy SQL Server big data cluster with high availability on a Red Hat OpenShift cluster for development and testing. <br/><br/>Introduced in SQL Server 2019 CU 5.|
-| `kubeadm-dev-test` | Deploy SQL Server big data cluster on a Kubernetes cluster created with kubeadm using a single or multiple physical or virtual machines.|
-| `kubeadm-prod`| Deploy SQL Server big data cluster on a Kubernetes cluster created with kubeadm using a single or multiple physical or virtual machines. Use this template to enable big data cluster services to integrate with Active Directory. Mission critical services like SQL Server master instance and HDFS name node are deployed in a highly available configuration. |
-| `openshift-dev-test`|Deploy SQL Server big data cluster on a Red Hat OpenShift cluster for development and testing. <br/><br/>Introduced in SQL Server 2019 CU 5.|
-| `openshift-prod`|Deploy SQL Server big data cluster with high availability on a Red Hat OpenShift cluster. <br/><br/>Introduced in SQL Server 2019 CU 5.|
+| `aks-dev-test` | Deploy SQL Server Big Data Cluster on Azure Kubernetes Service (AKS)|
+| `aks-dev-test-ha` | Deploy SQL Server Big Data Cluster on Azure Kubernetes Service (AKS). Mission critical services like SQL Server master and HDFS name node are configured for high availability.|
+| `aro-dev-test`|Deploy SQL Server Big Data Cluster on Azure Red Hat OpenShift for development and testing. <br/><br/>Introduced in SQL Server 2019 CU 5.|
+| `aro-dev-test-ha`|Deploy SQL Server Big Data Cluster with high availability on a Red Hat OpenShift cluster for development and testing. <br/><br/>Introduced in SQL Server 2019 CU 5.|
+| `kubeadm-dev-test` | Deploy SQL Server Big Data Cluster on a Kubernetes cluster created with kubeadm using a single or multiple physical or virtual machines.|
+| `kubeadm-prod`| Deploy SQL Server Big Data Cluster on a Kubernetes cluster created with kubeadm using a single or multiple physical or virtual machines. Use this template to enable big data cluster services to integrate with Active Directory. Mission critical services like SQL Server master instance and HDFS name node are deployed in a highly available configuration. |
+| `openshift-dev-test`|Deploy SQL Server Big Data Cluster on a Red Hat OpenShift cluster for development and testing. <br/><br/>Introduced in SQL Server 2019 CU 5.|
+| `openshift-prod`|Deploy SQL Server Big Data Cluster with high availability on a Red Hat OpenShift cluster. <br/><br/>Introduced in SQL Server 2019 CU 5.|
 
 You can deploy a big data cluster by running `azdata bdc create`. This prompts you to choose one of the default configurations and then guides you through the deployment.
 
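Tying the profile table above together, a typical flow is to list the built-in profiles, copy one into a local directory for editing, and then deploy from it. A sketch (requires `azdata` and a reachable Kubernetes cluster; `custom` is an arbitrary directory name):

```
azdata bdc config list -o table                                # show built-in profiles
azdata bdc config init --source aks-dev-test --target custom   # copy one locally for editing
azdata bdc create --config-profile custom --accept-eula yes    # deploy from the edited profile
```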
@@ -150,27 +151,27 @@ It is also possible to customize your deployment to accommodate the workloads yo
 ```
 
 > [!TIP]
-> You can also pass in the cluster name at deployment time using the *--name* parameter for *azdata create bdc* command. The parameters in the command have precedence over the values in the configuration files.
+> You can also pass in the cluster name at deployment time using the *--name* parameter for `azdata create bdc` command. The parameters in the command have precedence over the values in the configuration files.
 >
 > A useful tool for finding JSON paths is the [JSONPath Online Evaluator](https://jsonpath.com/).
 >
-In addition to passing key-value pairs, you can also provide inline JSON values or pass JSON patch files. For more information, see [Configure deployment settings for big data clusters](deployment-custom-configuration.md).
+In addition to passing key-value pairs, you can also provide inline JSON values or pass JSON patch files. For more information, see [Configure deployment settings for Big Data Cluster resources and services](deployment-custom-configuration.md).
 
 1. Pass the custom configuration file to `azdata bdc create`. Note that you must set the required [environment variables](#env), otherwise the terminal prompts for the values:
 
 ```bash
 azdata bdc create --config-profile custom --accept-eula yes
 ```
 
-> For more information on the structure of a deployment configuration file, see the [Deployment configuration file reference](reference-deployment-config.md). For more configuration examples, see [Configure deployment settings for big data clusters](deployment-custom-configuration.md).
+> For more information on the structure of a deployment configuration file, see the [Deployment configuration file reference](reference-deployment-config.md). For more configuration examples, see [Configure deployment settings for Big Data Clusters](deployment-custom-configuration.md).
 
 ## <a id="env"></a> Environment variables
 
 The following environment variables are used for security settings that are not stored in a deployment configuration file. Note that Docker settings except credentials can be set in the configuration file.
 
 | Environment variable | Requirement |Description |
 |---|---|---|
-| `AZDATA_USERNAME` | Required |The username for SQL Server big data cluster administrator. A sysadmin login with the same name is created in SQL Server master instance. As a security best practice, `sa` account is disabled. <br/><br/>[!INCLUDE [big-data-cluster-root-user](../includes/big-data-cluster-root-user.md)]|
+| `AZDATA_USERNAME` | Required |The username for SQL Server Big Data Cluster administrator. A sysadmin login with the same name is created in SQL Server master instance. As a security best practice, `sa` account is disabled. <br/><br/>[!INCLUDE [big-data-cluster-root-user](../includes/big-data-cluster-root-user.md)]|
 | `AZDATA_PASSWORD` | Required |The password for the user accounts created above. On clusters deployed prior to SQL Server 2019 CU5, the same password is used for the `root` user, to secure Knox gateway and HDFS. |
 | `ACCEPT_EULA`| Required for first use of [!INCLUDE [azure-data-cli-azdata](../includes/azure-data-cli-azdata.md)]| Set to "yes". When set as an environment variable, it applies EULA to both SQL Server and [!INCLUDE [azure-data-cli-azdata](../includes/azure-data-cli-azdata.md)]. If not set as environment variable, you can include `--accept-eula=yes` in the first use of [!INCLUDE [azure-data-cli-azdata](../includes/azure-data-cli-azdata.md)] command.|
 | `DOCKER_USERNAME` | Optional | The username to access the container images in case they are stored in a private repository. See the [Offline deployments](deploy-offline.md) topic for more details on how to use a private Docker repository for big data cluster deployment.|
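The variables in the table above are read from the environment at deploy time. A minimal sketch of setting the required three before running `azdata bdc create` (the values are placeholders, not recommendations):

```shell
# Required settings for a non-interactive deployment; azdata reads these at create time.
export AZDATA_USERNAME=admin                 # becomes a sysadmin login on the master instance
export AZDATA_PASSWORD='MyStr0ng!Passw0rd'   # placeholder; choose your own strong password
export ACCEPT_EULA=yes                       # accepts the SQL Server and azdata EULAs
echo "deploying as $AZDATA_USERNAME"
# azdata bdc create --config-profile custom  # would now run without prompting
```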
@@ -428,7 +429,7 @@ For more information on how to connect to the big data cluster, see [Connect to
 
 ## Next steps
 
-To learn more about big data cluster deployment, see the following resources:
+To learn more about SQL Server Big Data Cluster deployment, see the following resources:
 
 - [Configure deployment settings for big data clusters](deployment-custom-configuration.md)
 - [Perform an offline deployment of a SQL Server big data cluster](deploy-offline.md)
