Commit fe06085

Merge branch 'release-sqlseattle' of https://github.com/MicrosoftDocs/sql-docs-pr into 20180830-Kubernetes
2 parents bb308a2 + 827135d commit fe06085

97 files changed: 1454 additions & 430 deletions

File tree


docs/aris/index.md

Lines changed: 0 additions & 3 deletions
This file was deleted.

docs/aris/sql-server-aris-jupyter-notebook-guidance.md

Lines changed: 0 additions & 19 deletions
This file was deleted.

docs/aris/sql-server-aris-notebook-samples.md

Lines changed: 0 additions & 25 deletions
This file was deleted.

docs/aris/sql-server-aris-release-notes.md

Lines changed: 0 additions & 18 deletions
This file was deleted.
Lines changed: 18 additions & 0 deletions
@@ -0,0 +1,18 @@
+---
+title: Frequently asked questions about SQL Server Big Data Cluster | Microsoft Docs
+description:
+author: rothja
+ms.author: jroth
+manager: craigg
+ms.date: 08/06/2018
+ms.topic: conceptual
+ms.prod: sql
+---
+
+# Frequently asked questions about SQL Server Big Data Cluster
+
+TBD
+
+## Next steps
+
+TBD
Lines changed: 8 additions & 8 deletions
@@ -1,5 +1,5 @@
 ---
-title: What is SQL Server Aris? | Microsoft Docs
+title: What is SQL Server Big Data Cluster? | Microsoft Docs
 description:
 author: rothja
 ms.author: jroth
@@ -9,24 +9,24 @@ ms.topic: overview
 ms.prod: sql
 ---

-# What is SQL Server Aris?
+# What is SQL Server Big Data Cluster?

-[!INCLUDE[SQL Server vNext](../includes/sssqlv15-md.md)] CTP 2.0 enables you to integrate your "high-value" relational data in SQL Server with your "high-volume" data in big data environments, such as Hadoop.
+[!INCLUDE[SQL Server 2019 CTP 2.0](../includes/sssqlv15-md.md)] CTP 2.0 enables you to integrate your "high-value" relational data in SQL Server with your "high-volume" data in big data environments, such as Hadoop.

 ## Architecture

 CTP 2.0 allows you to create and deploy a *data pool* that consists of many SQL Server *data pool instances* in your cluster. You can then ingest your high-volume data from HDFS via Spark streaming jobs into the SQL Server data pool instances by partitioning the data and spreading the partitions across the SQL Server data pool instances in the data pool.

 Once the high-volume data is stored in partitions in the SQL Server data pool instances on the cluster, you can create an *external table* in the SQL Server *master instance* that represents the high-volume data that resides in the partitions stored in the SQL Server data pool instances in your cluster. This external table can be queried in the master instance just like any other table, but in this case a fan-out query is simultaneously executed against each of the SQL Server data pool instances to query the partitioned data. This fan-out query runs the filter part of the query and local aggregations in parallel across all of the data pool instances. The results of these queries are brought back to the master instance, and you can optionally choose to join the results of the high-volume data fan-out query with the results of a high-value data query in the SQL Server master instance.

-The following diagram shows the eventual state of the Project Aris architecture:
+The following diagram shows the eventual state of the Big Data Cluster architecture:

-![Architecture diagram](./media/sql-server-aris-overview/architecture-diagram.png)
+![Architecture diagram](./media/big-data-cluster-overview/architecture-diagram.png)

 ## Next steps

 To get started, see the following quickstarts:

-- [Deploy SQL Server Aris on Kubernetes](quickstart-sql-server-aris-deploy.md)
-- [Get started with SQL Server Aris on SQL Server vNext](quickstart-sql-server-aris-get-started.md)
-- [Run Jupypter Notebooks on SQL Server vNext](quickstart-sql-server-aris-jupyter-notebook.md)
+- [Deploy SQL Server Big Data Cluster on Kubernetes](quickstart-big-data-cluster-deploy.md)
+- [Get started with SQL Server Big Data Cluster on SQL Server 2019 CTP 2.0](quickstart-big-data-cluster-get-started.md)
+- [Run Jupyter Notebooks on SQL Server 2019 CTP 2.0](quickstart-big-data-cluster-notebooks.md)

docs/aris/concept-storage-pool.md renamed to docs/big-data-cluster/big-data-cluster-release-notes.md

Lines changed: 2 additions & 3 deletions
@@ -1,5 +1,5 @@
 ---
-title: Overview of the SQL Server Aris storage pool | Microsoft Docs
+title: Release notes for SQL Server Big Data Cluster | Microsoft Docs
 description:
 author: rothja
 ms.author: jroth
@@ -9,11 +9,10 @@ ms.topic: conceptual
 ms.prod: sql
 ---

-# Overview of the SQL Server Aris storage pool
+# Release notes for SQL Server Big Data Cluster

 TBD

 ## Next steps

 TBD
-
docs/aris/concept-compute-pool.md renamed to docs/big-data-cluster/concept-compute-pool.md

Lines changed: 2 additions & 2 deletions
@@ -1,5 +1,5 @@
 ---
-title: Overview of SQL Server Aris compute pools | Microsoft Docs
+title: Overview of SQL Server Big Data Cluster compute pools | Microsoft Docs
 description:
 author: rothja
 ms.author: jroth
@@ -9,7 +9,7 @@ ms.topic: conceptual
 ms.prod: sql
 ---

-# Overview of SQL Server Aris compute pools
+# Overview of SQL Server Big Data Cluster compute pools

 TBD
docs/aris/concept-controller.md renamed to docs/big-data-cluster/concept-controller.md

Lines changed: 9 additions & 9 deletions
@@ -1,5 +1,5 @@
 ---
-title: Overview of the SQL Server Aris controller | Microsoft Docs
+title: Overview of the SQL Server Big Data Cluster controller | Microsoft Docs
 description:
 author: rothja
 ms.author: jroth
@@ -9,20 +9,20 @@ ms.topic: conceptual
 ms.prod: sql
 ---

-# Overview of the SQL Server Aris controller
+# Overview of the SQL Server Big Data Cluster controller

 ## What is the cluster Controller?
 The Controller hosts the core logic for building and managing the cluster. It takes care of all interactions with Kubernetes, the SQL Server instances that are part of the cluster, as well as Hadoop components like HDFS.

 The Controller service provides the following core functionality:
-- Manage cluster lifecycle: cluster bootstrap & delete, update configurations and upgrade (upgrade not available in CTP2.0)
+- Manage cluster lifecycle: cluster bootstrap & delete, update configurations and upgrade (upgrade not available in CTP 2.0)
 - Manage master SQL Server instances
 - Manage compute, data and storage pools
 - Expose monitoring tools to observe the state of the cluster
 - Expose troubleshooting tools to detect and repair unexpected issues
 - Manage cluster security: ensure secure cluster endpoints, manage users and roles, configure credentials for intra-cluster communication
-- Manage the workflow of upgrades so that they are implemented safely (not available in CTP2.0)
-- Manage high availability and DR for statefull services in the cluster (not in CTP2.0)
+- Manage the workflow of upgrades so that they are implemented safely (not available in CTP 2.0)
+- Manage high availability and DR for stateful services in the cluster (not in CTP 2.0)

 ## Deploying the Controller service
 The Controller is hosted as a daemon set in the same Kubernetes environment where the customer wants to build out the cluster. This service is installed by a Kubernetes administrator during cluster bootstrap, using the mssqlctl command-line utility:
@@ -31,14 +31,14 @@ The Controller is hosted as a daemon set in the same Kubernetes environment wher
 python mssqlctl.py create cluster <name of your cluster>
 ```

-The buildout workflow will layout on top of Kubernetes a fully functional Aris cluster that includes all the components described in the Overview (TBD add link) section. This workflow creates first the Controller, and once this is deployed, it will coordinate the installation and configuration of rest of the services part of Master, Compute, Data and Storage pools.
+The buildout workflow lays out on top of Kubernetes a fully functional Big Data Cluster that includes all the components described in the Overview (TBD add link) section. This workflow first creates the Controller and, once it is deployed, coordinates the installation and configuration of the rest of the services that are part of the Master, Compute, Data and Storage pools.

 ## Managing the cluster through the Controller
 Customers are expected to manage the cluster purely through the Controller, using either the `mssqlctl` APIs or the administration portal that is hosted within the cluster. If customers deploy additional Kubernetes objects (e.g. pods) into the same namespace, those objects will not be managed or monitored by the Controller.
 The Controller and the Kubernetes objects (stateful sets, pods, secrets, etc.) created for the cluster reside in a dedicated Kubernetes namespace. The Controller service is granted permission by the Kubernetes cluster administrator to manage all resources within that namespace. The RBAC policy for this scenario is configured automatically as part of initial cluster deployment.

 ### mssqlctl
-`mssqlctl` is a command line utility written in Python that enables cluster administrators to bootstrap and manage the Aris cluster via REST APIs.
+`mssqlctl` is a command-line utility written in Python that enables cluster administrators to bootstrap and manage the Big Data Cluster via REST APIs.
 TBD: Acquisition experience, i.e. from any client machine -> pip install....

 ### Cluster Admin Portal
@@ -48,7 +48,7 @@ Once the Controller service is up and running, cluster administrator can leverag
 TBD (see service status, where are the logs)

 ## Controller service security
-All communication to the Controller is conducted via a REST API over HTTPS. Certificates for Controller endpoint will be configured at bootstrap time. A self-signed certificate will be automatically generated for the CTP2.0. In future releases, we will provide a mechanism for customers to provide their certificates from their own certificate authority for production deployments.
+All communication to the Controller is conducted via a REST API over HTTPS. Certificates for the Controller endpoint are configured at bootstrap time. A self-signed certificate is automatically generated for CTP 2.0. In future releases, we will provide a mechanism for customers to provide certificates from their own certificate authority for production deployments.

 Authentication to the Controller endpoint is based on username and password. These credentials are provisioned at cluster bootstrap time using the input for environment variables `CONTROLLER_USERNAME` and `CONTROLLER_PASSWORD`. For example, from a Linux client, you can run the below script to set the environment variables:
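The Linux script itself is elided from this hunk. A minimal sketch of what it might contain, assuming only the two environment variables named above (the values shown are placeholders, not product defaults):

```shell
# Provision the credentials used to authenticate to the Controller endpoint.
# The variable names come from the document; the values below are hypothetical.
export CONTROLLER_USERNAME=admin
export CONTROLLER_PASSWORD='S0me.Strong-Passw0rd'
```

Subsequent `mssqlctl` commands run in the same session would then pick these credentials up from the environment.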

@@ -68,4 +68,4 @@ TBD

 ## Next steps

-- [Deploy SQL Server Aris on Kubernetes](quickstart-sql-server-aris-deploy.md)
+- [Deploy SQL Server Big Data Cluster on Kubernetes](quickstart-big-data-cluster-deploy.md)
Lines changed: 8 additions & 8 deletions
@@ -1,5 +1,5 @@
 ---
-title: Data persistence with SQL Server Aris on Kubernetes | Microsoft Docs
+title: Data persistence with SQL Server Big Data Cluster on Kubernetes | Microsoft Docs
 description:
 author: rothja
 ms.author: jroth
@@ -9,24 +9,24 @@ ms.topic: conceptual
 ms.prod: sql
 ---

-# Data persistence with SQL Server Aris on Kubernetes
+# Data persistence with SQL Server Big Data Cluster on Kubernetes

-[Persistent Volumes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/) provide a plugin model for storage in Kubernetes where how storage is provided is completed abstracted from how it is consumed. Therefore, you can bring your own highly available storage and plug it into the SQL Server Aris cluster. This gives you full control over the type of storage, availability, and performance that you require. Kubernetes supports various kinds of storage solutions including Azure disks/files, NFS, local storage, and more.
+[Persistent Volumes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/) provide a plugin model for storage in Kubernetes in which how storage is provided is completely abstracted from how it is consumed. Therefore, you can bring your own highly available storage and plug it into the SQL Server Big Data Cluster. This gives you full control over the type of storage, availability, and performance that you require. Kubernetes supports various kinds of storage solutions, including Azure disks/files, NFS, local storage, and more.

 ## Configure persistent volumes

-The way SQL Server Aris consumes these persistent volumes is by using [Storage Classes](https://kubernetes.io/docs/concepts/storage/storage-classes/). You can create different storage classes for different kind of storage and specify them at the Aris cluster deployment time. You can configure which storage class to use for which purpose (pool). SQL Server Aris creates [persistent volume claims](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims) with the specified storage class name for each pod that requires persistent volumes. It then mounts the corresponding persistent volume(s) in the pod.
+SQL Server Big Data Cluster consumes these persistent volumes by using [Storage Classes](https://kubernetes.io/docs/concepts/storage/storage-classes/). You can create different storage classes for different kinds of storage and specify them at Big Data Cluster deployment time. You can configure which storage class to use for which purpose (pool). SQL Server Big Data Cluster creates [persistent volume claims](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims) with the specified storage class name for each pod that requires persistent volumes. It then mounts the corresponding persistent volume(s) in the pod.

 > [!NOTE]
 > For CTP 2.0, you can only have one storage class with static size and access mode for the whole cluster.

 ## Deployment settings

-To use persistent storage during deployment, configure the **USE_PERSISTENT_VOLUME** and **STORAGE_CLASS_NAME** flags with mssqlctl. **USE_PERSISTENT_VOLUME** is set to false by default, and, in this case, SQL Server Aris uses emptyDir mounts. If you set the flag to true, you must also provide **STORAGE_CLASS_NAME** as a parameter at the deployment time.
+To use persistent storage during deployment, configure the **USE_PERSISTENT_VOLUME** and **STORAGE_CLASS_NAME** flags with mssqlctl. **USE_PERSISTENT_VOLUME** is set to false by default, and, in this case, SQL Server Big Data Cluster uses emptyDir mounts. If you set the flag to true, you must also provide **STORAGE_CLASS_NAME** as a parameter at deployment time.
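As a sketch, the two deployment flags above might be set like this before running the deployment (the flag names come from the text; the storage class value is a placeholder, and the final mssqlctl invocation is shown only as a comment):

```shell
# USE_PERSISTENT_VOLUME defaults to false, which means emptyDir mounts are
# used (data does not survive pod rescheduling). Opt in to persistent storage:
export USE_PERSISTENT_VOLUME=true
# Required whenever USE_PERSISTENT_VOLUME=true; "default" is used here only
# as an illustrative value.
export STORAGE_CLASS_NAME=default
# ...then run the cluster deployment, e.g.:
# python mssqlctl.py create cluster <name of your cluster>
```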

 ## AKS/ACS storage classes

-AKS and ACS both come with two built-in storage classes **default** and **premium-storage** along with dynamic provisioner for them. You can specify either of those or create their own storage class for deploying Aris cluster with persistent storage enabled.
+AKS and ACS both come with two built-in storage classes, **default** and **premium-storage**, along with a dynamic provisioner for them. You can specify either of those or create your own storage class for deploying a Big Data Cluster with persistent storage enabled.

 ## Minikube storage class

@@ -38,10 +38,10 @@ Kubeadm does not come with a built-in storage class; therefore, we have created

 ## On-premises cluster

-On-premise clusters obviously do not come with any built-in storage class, therefore you must set up persistent volumes/provisioners beforehand and then use the corresponding storage classes during SQL Server Aris deployment.
+On-premises clusters do not come with any built-in storage class, so you must set up persistent volumes/provisioners beforehand and then use the corresponding storage classes during SQL Server Big Data Cluster deployment.

 ## Next steps

 For complete documentation about volumes in Kubernetes, see the [Kubernetes documentation on Volumes](https://kubernetes.io/docs/concepts/storage/volumes/).

-For more information about deploying SQL Server Aris, see [How to deploy SQL Server Aris on Kubernetes](sql-server-aris-deployment-guidance.md).
+For more information about deploying SQL Server Big Data Cluster, see [How to deploy SQL Server Big Data Cluster on Kubernetes](deployment-guidance.md).
