Commit bb78aef

Merge pull request #17939 from MikeRayMSFT/20201123-deploy-openshift

Stage with new code block.

2 parents 1831288 + 4b30045

1 file changed: docs/big-data-cluster/deploy-openshift.md

Lines changed: 18 additions & 50 deletions
Original file line numberDiff line numberDiff line change
@@ -1,7 +1,7 @@
---
title: Deploy on OpenShift
titleSuffix: SQL Server Big Data Cluster
-description: Learn how to upgrade SQL Server Big Data Clusters on OpenShift .
+description: Learn how to upgrade SQL Server Big Data Clusters on OpenShift.
author: mihaelablendea
ms.author: mihaelab
ms.reviewer: mikeray
@@ -32,7 +32,7 @@ This article outlines deployment steps that are specific to the OpenShift platfo
> [!IMPORTANT]
> Below pre-requisites must be performed by a OpenShift cluster admin (cluster-admin cluster role) that has sufficient permissions to create these cluster level objects. For more information on cluster roles in OpenShift see [Using RBAC to define and apply permissions](https://docs.openshift.com/container-platform/4.4/authentication/using-rbac.html).

-1. Ensure the `pidsLimit` setting on the OpenShift is updated to accommodate SQL Server workloads. The default value in OpenShift is too low for production like workloads. We recommend a value of at least `4096`, but the optimal value will depend of the `max worker threads` setting in SQL Server and the number of CPU processors on the OpenShift host node.
+1. Ensure the `pidsLimit` setting on the OpenShift cluster is updated to accommodate SQL Server workloads. The default value in OpenShift is too low for production-like workloads. Start with at least `4096`, but the optimal value depends on the `max worker threads` setting in SQL Server and the number of CPU processors on the OpenShift host node.
   - To find out how to update `pidsLimit` for your OpenShift cluster use [these instructions](https://github.com/openshift/machine-config-operator/blob/master/docs/ContainerRuntimeConfigDesign.md). Note that OpenShift versions before `4.3.5` had a defect causing the updated value to not take effect. Make sure you upgrade OpenShift to the latest version.
   - To help you compute the optimal value depending on your environment and planned SQL Server workloads, you can use the estimation and examples below:
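The machine-config-operator instructions linked above describe raising `pidsLimit` through a `ContainerRuntimeConfig` resource. A minimal sketch of such a resource, assuming a machine config pool labeled `custom-crio: set-pids-limit` (the resource name and pool selector label here are illustrative, not part of this commit):

```yaml
# Hypothetical ContainerRuntimeConfig raising the container PID limit for SQL Server workloads.
# The metadata name and the machineConfigPoolSelector label are illustrative;
# match them to the machine config pool used by your worker nodes.
apiVersion: machineconfiguration.openshift.io/v1
kind: ContainerRuntimeConfig
metadata:
  name: set-pids-limit
spec:
  machineConfigPoolSelector:
    matchLabels:
      custom-crio: set-pids-limit
  containerRuntimeConfig:
    pidsLimit: 4096
```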
@@ -44,7 +44,13 @@ This article outlines deployment steps that are specific to the OpenShift platfo
> [!NOTE]
> Other processes (e.g. backups, CLR, Fulltext, SQLAgent) also add some overhead, so add a buffer to the estimated value.

-2. Create a custom security context constraint (SCC) using the attached [`bdc-scc.yaml`](#bdc-sccyaml-file).
+1. Download the custom security context constraint (SCC) [`bdc-scc.yaml`](#bdc-sccyaml-file):
+
+   ```console
+   curl https://raw.githubusercontent.com/microsoft/sql-server-samples/master/samples/features/sql-big-data-cluster/deployment/openshift/bdc-scc.yaml -o bdc-scc.yaml
+   ```
+
+1. Apply the SCC to the cluster.

   ```console
   oc apply -f bdc-scc.yaml
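Applying the SCC only defines it in the cluster; pods can run under it once it is granted to the relevant service accounts, typically through RBAC. A hedged sketch of a ClusterRole permitting `use` of the SCC (the role name is hypothetical; the actual grants for a big data cluster are created in later deployment steps):

```yaml
# Hypothetical ClusterRole granting use of the bdc-scc SecurityContextConstraints.
# Bind it to the deployment namespace's service accounts with a RoleBinding;
# the role name is illustrative, not part of this commit.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: bdc-scc-user
rules:
- apiGroups:
  - security.openshift.io
  resources:
  - securitycontextconstraints
  resourceNames:
  - bdc-scc
  verbs:
  - use
```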
@@ -99,7 +105,7 @@ This article outlines deployment steps that are specific to the OpenShift platfo
   azdata bdc config init --source openshift-dev-test --target custom-openshift
   ```

-   For a deployment on ARO, we recommend to start with one of the `aro-` profiles, that includes default values for `serviceType` and `storageClass` appropriate for this environment. For example:
+   For a deployment on ARO, start with one of the `aro-` profiles, which include default values for `serviceType` and `storageClass` appropriate for this environment. For example:

   ```console
   azdata bdc config init --source aro-dev-test --target custom-openshift
@@ -124,10 +130,10 @@ This article outlines deployment steps that are specific to the OpenShift platfo

1. Upon successful deployment, you can log in and list the external cluster endpoints:

-```console
-azdata login -n mssql-cluster
-azdata bdc endpoint list
-```
+   ```console
+   azdata login -n mssql-cluster
+   azdata bdc endpoint list
+   ```

## OpenShift specific settings in the deployment configuration files
@@ -159,48 +165,10 @@ The name of the default storage class in ARO is managed-premium (as opposed to A

## `bdc-scc.yaml` file

-```yaml
-apiVersion: security.openshift.io/v1
-kind: SecurityContextConstraints
-metadata:
-  annotations:
-    kubernetes.io/description: SQL Server BDC custom scc is based on 'nonroot' scc plus additional capabilities.
-  generation: 2
-  name: bdc-scc
-allowHostDirVolumePlugin: false
-allowHostIPC: false
-allowHostNetwork: false
-allowHostPID: false
-allowHostPorts: false
-allowPrivilegeEscalation: true
-allowPrivilegedContainer: false
-allowedCapabilities:
-- SETUID
-- SETGID
-- CHOWN
-- SYS_PTRACE
-defaultAddCapabilities: null
-fsGroup:
-  type: RunAsAny
-readOnlyRootFilesystem: false
-requiredDropCapabilities:
-- KILL
-- MKNOD
-runAsUser:
-  type: MustRunAsNonRoot
-seLinuxContext:
-  type: MustRunAs
-supplementalGroups:
-  type: RunAsAny
-volumes:
-- configMap
-- downwardAPI
-- emptyDir
-- persistentVolumeClaim
-- projected
-- secret
-```
+The SCC file for this deployment is:
+
+:::code language="yaml" source="../../sql-server-samples/samples/features/sql-big-data-cluster/deployment/openshift/bdc-scc.yaml":::

## Next steps

-[Tutorial: Load sample data into a SQL Server big data cluster](tutorial-load-sample-data.md)
+[Tutorial: Load sample data into a SQL Server big data cluster](tutorial-load-sample-data.md)
