Commit f2bdebe

Merge pull request #17948 from MicrosoftDocs/master
11/24 AM Publish

2 parents 4b98c54 + 33c9ac6, commit f2bdebe

5 files changed: 34 additions & 72 deletions

docs/big-data-cluster/deploy-openshift.md

Lines changed: 18 additions & 50 deletions

@@ -1,7 +1,7 @@
 ---
 title: Deploy on OpenShift
 titleSuffix: SQL Server Big Data Cluster
-description: Learn how to upgrade SQL Server Big Data Clusters on OpenShift .
+description: Learn how to upgrade SQL Server Big Data Clusters on OpenShift.
 author: mihaelablendea
 ms.author: mihaelab
 ms.reviewer: mikeray
@@ -32,7 +32,7 @@ This article outlines deployment steps that are specific to the OpenShift platfo
 > [!IMPORTANT]
 > Below pre-requisites must be performed by a OpenShift cluster admin (cluster-admin cluster role) that has sufficient permissions to create these cluster level objects. For more information on cluster roles in OpenShift see [Using RBAC to define and apply permissions](https://docs.openshift.com/container-platform/4.4/authentication/using-rbac.html).
 
-1. Ensure the `pidsLimit` setting on the OpenShift is updated to accommodate SQL Server workloads. The default value in OpenShift is too low for production like workloads. We recommend a value of at least `4096`, but the optimal value will depend of the `max worker threads` setting in SQL Server and the number of CPU processors on the OpenShift host node.
+1. Ensure the `pidsLimit` setting on the OpenShift cluster is updated to accommodate SQL Server workloads. The default value in OpenShift is too low for production-like workloads. Start with at least `4096`, but the optimal value depends on the `max worker threads` setting in SQL Server and the number of CPU processors on the OpenShift host node.
    - To find out how to update `pidsLimit` for your OpenShift cluster use [these instructions](https://github.com/openshift/machine-config-operator/blob/master/docs/ContainerRuntimeConfigDesign.md). Note that OpenShift versions before `4.3.5` had a defect causing the updated value to not take effect. Make sure you upgrade OpenShift to the latest version.
    - To help you compute the optimal value depending on your environment and planned SQL Server workloads, you can use the estimation and examples below:
 
@@ -44,7 +44,13 @@ This article outlines deployment steps that are specific to the OpenShift platfo
    > [!NOTE]
    > Other processes (e.g. backups, CLR, Fulltext, SQLAgent) also add some overhead, so add a buffer to the estimated value.
 
-2. Create a custom security context constraint (SCC) using the attached [`bdc-scc.yaml`](#bdc-sccyaml-file).
+1. Download the custom security context constraint (SCC) [`bdc-scc.yaml`](#bdc-sccyaml-file):
+
+   ```console
+   curl https://raw.githubusercontent.com/microsoft/sql-server-samples/master/samples/features/sql-big-data-cluster/deployment/openshift/bdc-scc.yaml -o bdc-scc.yaml
+   ```
+
+1. Apply the SCC to the cluster.
 
    ```console
    oc apply -f bdc-scc.yaml
@@ -99,7 +105,7 @@ This article outlines deployment steps that are specific to the OpenShift platfo
    azdata bdc config init --source openshift-dev-test --target custom-openshift
    ```
 
-   For a deployment on ARO, we recommend to start with one of the `aro-` profiles, that includes default values for `serviceType` and `storageClass` appropriate for this environment. For example:
+   For a deployment on ARO, start with one of the `aro-` profiles, which include default values for `serviceType` and `storageClass` appropriate for this environment. For example:
 
    ```console
    azdata bdc config init --source aro-dev-test --target custom-openshift
@@ -124,10 +130,10 @@ This article outlines deployment steps that are specific to the OpenShift platfo
 
 1. Upon successful deployment, you can log in and list the external cluster endpoints:
 
-   ```console
-   azdata login -n mssql-cluster
-   azdata bdc endpoint list
-   ```
+   ```console
+   azdata login -n mssql-cluster
+   azdata bdc endpoint list
+   ```
 
 ## OpenShift specific settings in the deployment configuration files
 
@@ -159,48 +165,10 @@ The name of the default storage class in ARO is managed-premium (as opposed to A
 
 ## `bdc-scc.yaml` file
 
-```yaml
-apiVersion: security.openshift.io/v1
-kind: SecurityContextConstraints
-metadata:
-  annotations:
-    kubernetes.io/description: SQL Server BDC custom scc is based on 'nonroot' scc plus additional capabilities.
-  generation: 2
-  name: bdc-scc
-allowHostDirVolumePlugin: false
-allowHostIPC: false
-allowHostNetwork: false
-allowHostPID: false
-allowHostPorts: false
-allowPrivilegeEscalation: true
-allowPrivilegedContainer: false
-allowedCapabilities:
-- SETUID
-- SETGID
-- CHOWN
-- SYS_PTRACE
-defaultAddCapabilities: null
-fsGroup:
-  type: RunAsAny
-readOnlyRootFilesystem: false
-requiredDropCapabilities:
-- KILL
-- MKNOD
-runAsUser:
-  type: MustRunAsNonRoot
-seLinuxContext:
-  type: MustRunAs
-supplementalGroups:
-  type: RunAsAny
-volumes:
-- configMap
-- downwardAPI
-- emptyDir
-- persistentVolumeClaim
-- projected
-- secret
-```
+The SCC file for this deployment is:
+
+:::code language="yaml" source="../../sql-server-samples/samples/features/sql-big-data-cluster/deployment/openshift/bdc-scc.yaml":::
 
 ## Next steps
 
-[Tutorial: Load sample data into a SQL Server big data cluster](tutorial-load-sample-data.md)
+[Tutorial: Load sample data into a SQL Server big data cluster](tutorial-load-sample-data.md)
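
As a rough aid to the `pidsLimit` sizing guidance above, the sketch below derives a candidate value from `max worker threads` and the number of SQL Server instances per node. The scaling rule and the `buffer_factor` are illustrative assumptions, not an official formula from the documentation; only the `4096` floor comes from the article.

```python
# Illustrative sketch only: the per-instance scaling rule and buffer_factor
# are assumptions, not Microsoft's official pidsLimit sizing formula.
def estimate_pids_limit(max_worker_threads: int,
                        sql_instances_per_node: int,
                        buffer_factor: float = 1.5) -> int:
    """Return a candidate pidsLimit, never below the documented 4096 floor."""
    # Assume each instance may spawn up to max_worker_threads PIDs, then pad
    # for side processes (backups, CLR, Full-Text, SQL Agent) per the NOTE.
    raw = max_worker_threads * sql_instances_per_node
    return max(4096, int(raw * buffer_factor))

print(estimate_pids_limit(960, 4))   # 5760
print(estimate_pids_limit(100, 1))   # 4096 (floor applies)
```

The floor keeps small test environments at the article's recommended minimum while larger nodes scale with the workload.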

docs/linux/sql-server-linux-setup-machine-learning.md

Lines changed: 5 additions & 5 deletions

@@ -5,7 +5,7 @@ description: 'Learn how to install SQL Server Machine Learning Services (Python
 author: dphansen
 ms.author: davidph
 manager: cgronlun
-ms.date: 03/05/2020
+ms.date: 11/24/2020
 ms.topic: how-to
 ms.prod: sql
 ms.technology: machine-learning-services
@@ -15,7 +15,9 @@ monikerRange: ">=sql-server-ver15||>=sql-server-linux-ver15||=sqlallproducts-all
 
 [!INCLUDE [SQL Server 2019 - Linux](../includes/applies-to-version/sqlserver2019-linux.md)]
 
-This article guides you in the installation of [SQL Server Machine Learning Services](../machine-learning/index.yml) on Linux. Python and R scripts can be executed in-database using Machine Learning Services.
+This article guides you in the installation of [SQL Server Machine Learning Services](../machine-learning/sql-server-machine-learning-services.md) on Linux. Python and R scripts can be executed in-database using Machine Learning Services.
+
+You can install Machine Learning Services on Red Hat Enterprise Linux (RHEL), SUSE Linux Enterprise Server (SLES), and Ubuntu. For more information, see [the Supported platforms section in the Installation guidance for SQL Server on Linux](sql-server-linux-setup.md#supportedplatforms).
 
 > [!NOTE]
 > Machine Learning Services is installed by default on SQL Server Big Data Clusters. For more information, see [Use Machine Learning Services (Python and R) on Big Data Clusters](../big-data-cluster/machine-learning-services.md)
@@ -29,8 +31,6 @@ This article guides you in the installation of [SQL Server Machine Learning Serv
 * Check the SQL Server Linux repositories for the Python and R extensions.
   If you already configured source repositories for the database engine install, you can run the **mssql-mlservices** package install commands using the same repo registration.
 
-You can install SQL Server on Red Hat Enterprise Linux (RHEL), SUSE Linux Enterprise Server (SLES), and Ubuntu. For more information, see [the Supported platforms section in the Installation guidance for SQL Server on Linux](sql-server-linux-setup.md#supportedplatforms).
-
 * (R only) Microsoft R Open (MRO) provides the base R distribution for the R feature in SQL Server and is a prerequisite for using RevoScaleR, MicrosoftML, and other R packages installed with Machine Learning Services.
   * The required version is MRO 3.5.2.
   * Choose from the following two approaches to install MRO:
@@ -432,4 +432,4 @@ Python developers can learn how to use Python with SQL Server by following these
 R developers can get started with some simple examples, and learn the basics of how R works with SQL Server. For your next step, see the following links:
 
 + [Quickstart: Run R in T-SQL](../machine-learning/tutorials/quickstart-r-create-script.md)
-+ [Tutorial: In-database analytics for R developers](../machine-learning/tutorials/r-taxi-classification-introduction.md)
++ [Tutorial: In-database analytics for R developers](../machine-learning/tutorials/r-taxi-classification-introduction.md)
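
The supported-platform list moved in this change (RHEL, SLES, Ubuntu) implies three different package managers. A small helper below maps a distro to the corresponding install command string; the `mssql-mlservices` package name is a placeholder taken from the article's wording, not a verified package identifier, and the package-manager flags are the usual conventions, not commands from this article.

```python
# Sketch: distro -> install command string. "mssql-mlservices" is a
# placeholder from the article's wording; real installs use
# feature-specific package names, so treat this as illustrative only.
INSTALL_COMMANDS = {
    "rhel":   "sudo yum install -y {pkg}",
    "sles":   "sudo zypper install -y {pkg}",
    "ubuntu": "sudo apt-get install -y {pkg}",
}

def mlservices_install_cmd(distro: str, pkg: str = "mssql-mlservices") -> str:
    """Return the install command for a supported distro, or raise ValueError."""
    try:
        return INSTALL_COMMANDS[distro.lower()].format(pkg=pkg)
    except KeyError:
        raise ValueError(f"unsupported distro: {distro!r}") from None

print(mlservices_install_cmd("rhel"))
# sudo yum install -y mssql-mlservices
```

Building the command as a string (rather than running it) keeps the sketch side-effect free and easy to check.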

docs/relational-databases/statistics/statistics.md

Lines changed: 10 additions & 7 deletions

@@ -2,7 +2,7 @@
 title: Statistics
 description: The Query Optimizer uses statistics to create query plans that improve query performance. Learn about concepts and guidelines for using query optimization.
 ms.custom: ""
-ms.date: "06/03/2020"
+ms.date: "11/23/2020"
 ms.prod: sql
 ms.reviewer: ""
 ms.technology: performance
@@ -129,21 +129,24 @@ You can use the [sys.dm_db_stats_properties](../../relational-databases/system-d
 
 
 #### AUTO_UPDATE_STATISTICS_ASYNC
-The asynchronous statistics update option, [AUTO_UPDATE_STATISTICS_ASYNC](../../t-sql/statements/alter-database-transact-sql-set-options.md#auto_update_statistics_async), determines whether the Query Optimizer uses synchronous or asynchronous statistics updates. By default, the asynchronous statistics update option is OFF, and the Query Optimizer updates statistics synchronously. The AUTO_UPDATE_STATISTICS_ASYNC option applies to statistics objects created for indexes, single columns in query predicates, and statistics created with the [CREATE STATISTICS](../../t-sql/statements/create-statistics-transact-sql.md) statement.
+The asynchronous statistics update option, [AUTO_UPDATE_STATISTICS_ASYNC](../../t-sql/statements/alter-database-transact-sql-set-options.md#auto_update_statistics_async), determines whether the Query Optimizer uses synchronous or asynchronous statistics updates. By default, the asynchronous statistics update option is OFF, and the Query Optimizer updates statistics synchronously. The AUTO_UPDATE_STATISTICS_ASYNC option applies to statistics objects created for indexes, single columns in query predicates, and statistics created with the [CREATE STATISTICS](../../t-sql/statements/create-statistics-transact-sql.md) statement.
 
-> [!NOTE]
-> To set the asynchronous statistics update option in [!INCLUDE[ssManStudioFull](../../includes/ssmanstudiofull-md.md)], in the *Options* page of the *Database Properties* window, both *Auto Update Statistics* and *Auto Update Statistics Asynchronously* options need to be set to **True**.
+> [!NOTE]
+> To set the asynchronous statistics update option in [!INCLUDE[ssManStudioFull](../../includes/ssmanstudiofull-md.md)], in the *Options* page of the *Database Properties* window, both *Auto Update Statistics* and *Auto Update Statistics Asynchronously* options need to be set to **True**.
 
-Statistics updates can be either synchronous (the default) or asynchronous. With synchronous statistics updates, queries always compile and execute with up-to-date statistics; When statistics are out-of-date, the Query Optimizer waits for updated statistics before compiling and executing the query. With asynchronous statistics updates, queries compile with existing statistics even if the existing statistics are out-of-date; The Query Optimizer could choose a suboptimal query plan if statistics are out-of-date when the query compiles. Queries that compile after the asynchronous updates have completed will benefit from using the updated statistics.
+Statistics updates can be either synchronous (the default) or asynchronous. With synchronous statistics updates, queries always compile and execute with up-to-date statistics; When statistics are out-of-date, the Query Optimizer waits for updated statistics before compiling and executing the query. With asynchronous statistics updates, queries compile with existing statistics even if the existing statistics are out-of-date; The Query Optimizer could choose a suboptimal query plan if statistics are out-of-date when the query compiles. Queries that compile after the asynchronous updates have completed will benefit from using the updated statistics.
 
-Consider using synchronous statistics when you perform operations that change the distribution of data, such as truncating a table or performing a bulk update of a large percentage of the rows. If you do not update the statistics after completing the operation, using synchronous statistics will ensure statistics are up-to-date before executing queries on the changed data.
+Consider using synchronous statistics when you perform operations that change the distribution of data, such as truncating a table or performing a bulk update of a large percentage of the rows. If you do not update the statistics after completing the operation, using synchronous statistics will ensure statistics are up-to-date before executing queries on the changed data.
 
-Consider using asynchronous statistics to achieve more predictable query response times for the following scenarios:
+Consider using asynchronous statistics to achieve more predictable query response times for the following scenarios:
 
 * Your application frequently executes the same query, similar queries, or similar cached query plans. Your query response times might be more predictable with asynchronous statistics updates than with synchronous statistics updates because the Query Optimizer can execute incoming queries without waiting for up-to-date statistics. This avoids delaying some queries and not others.
 
 * Your application has experienced client request time outs caused by one or more queries waiting for updated statistics. In some cases, waiting for synchronous statistics could cause applications with aggressive time outs to fail.
 
+> [!NOTE]
+> Statistics on local temporary tables are always updated synchronously regardless of AUTO_UPDATE_STATISTICS_ASYNC option. Statistics on global temporary tables are updated synchronously or asynchronously according to the AUTO_UPDATE_STATISTICS_ASYNC option set for the user database.
+
 Asynchronous statistics update is performed by a background request. When the request is ready to write updated statistics to the database, it attempts to acquire a schema modification lock on the statistics metadata object. If a different session is already holding a lock on the same object, asynchronous statistics update is blocked until the schema modification lock can be acquired. Similarly, sessions that need to acquire a schema stability lock on the statistics metadata object to compile a query may be blocked by the asynchronous statistics update background session, which is already holding or waiting to acquire the schema modification lock. Therefore, for workloads with very frequent query compilations and frequent statistics updates, using asynchronous statistics may increase the likelihood of concurrency issues due to lock blocking.
 
 In Azure SQL Database, you can avoid potential concurrency issues using asynchronous statistics update if you enable the ASYNC_STATS_UPDATE_WAIT_AT_LOW_PRIORITY [database-scoped configuration](../../t-sql/statements/alter-database-scoped-configuration-transact-sql.md). With this configuration enabled, the background request will wait to acquire the schema modification lock on a separate low priority queue, allowing other requests to continue compiling queries with existing statistics. Once no other session is holding a lock on the statistics metadata object, the background request will acquire its schema modification lock and update statistics. In the unlikely event that the background request cannot acquire the lock within a timeout period of several minutes, the asynchronous statistics update will be aborted, and the statistics will not be updated until another automatic statistics update is triggered, or until statistics are [updated manually](update-statistics.md).
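
The trade-off this statistics article describes (synchronous updates block query compilation until statistics are refreshed; asynchronous updates compile immediately with possibly stale statistics) can be shown with a toy model. The 500 ms update cost is an arbitrary illustrative figure, not a measured SQL Server value.

```python
# Toy model of the documented sync-vs-async statistics trade-off; the
# update cost is an arbitrary illustrative number, not a SQL Server metric.
def compile_wait_ms(stats_stale: bool, async_update: bool,
                    stats_update_cost_ms: int = 500) -> int:
    """Time a query waits before compiling under each statistics mode."""
    if stats_stale and not async_update:
        # Synchronous mode: the Query Optimizer waits for fresh statistics.
        return stats_update_cost_ms
    # Asynchronous mode (or already-fresh stats): compile immediately,
    # possibly against out-of-date statistics.
    return 0

print(compile_wait_ms(stats_stale=True, async_update=False))  # 500
print(compile_wait_ms(stats_stale=True, async_update=True))   # 0
```

The model captures why asynchronous updates give more predictable response times: no query ever pays the update cost at compile time, at the risk of a suboptimal plan from stale statistics.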

docs/reporting-services/install-windows/migrate-a-reporting-services-installation-native-mode.md

Lines changed: 1 addition & 1 deletion

@@ -127,7 +127,7 @@ For more information about changes in SQL Server Reporting Services, see the Upg
 
 2. Rswebapplication.config
 
-3. Rssvrpolicy.config
+3. Rssrvpolicy.config
 
 4. Rsmgrpolicy.config
 

docs/toc.yml

Lines changed: 0 additions & 9 deletions

@@ -1876,15 +1876,6 @@
   href: relational-databases/tables/use-table-valued-parameters-database-engine.md
 - name: Edge constraints
   href: relational-databases/tables/graph-edge-constraints.md
-  items:
-  - name: Create
-    href: relational-databases/tables/create-edge-constraints.md
-  - name: Delete
-    href: relational-databases/tables/delete-edge-constraint.md
-  - name: Modify
-    href: relational-databases/tables/modify-edge-constraint.md
-  - name: View
-    href: relational-databases/tables/view-edge-constraint-properties.md
 - name: Primary keys
   href: relational-databases/tables/primary-and-foreign-key-constraints.md
   items:
