@@ -47,11 +47,13 @@ Community technology preview (CTP) 2.0 is the first public release of [!INCLUDE[
 - SQL Server Machine Learning Services
 - Polybase
 - Expanded support for Persistent Memory (PMEM) devices
+
 - [Big Data Cluster](#bigdatacluster)
 - Deploy a SQL Server Big Data Cluster with Linux containers on Kubernetes
 - Use Azure Data Studio to run Jupyter Notebooks
 - Ingest external data into a data pool
 - Query HDFS data in the storage pool
+
 - [SQL Server on Linux](#sqllinux)
 - Replication support
 - Support for the Microsoft Distributed Transaction Coordinator (MSDTC)
@@ -60,10 +62,13 @@ Community technology preview (CTP) 2.0 is the first public release of [!INCLUDE[
 - Machine Learning on Linux
 - New container registry
 - New RHEL-based container images
+
 - [Master Data Services](#mds)
 - Silverlight controls replaced
+
 - [Security](#security)
 - Certificate management in SQL Server Configuration Manager
+
 - [Tools](#tools)
 - SQL Server Management Studio (SSMS) 18.0 (preview)
 - Azure Data Studio (preview)
@@ -87,10 +92,13 @@ Continue reading for more details about these features.
 - Aggregation of a column or columns that have a large number of distinct values AND
 - Responsiveness is more critical than absolute precision. `APPROXIMATE_COUNT_DISTINCT` yields results typically within 2% of the precise answer in a small fraction of the time.
 
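The trade-off the bullets above describe can be seen by running the exact and approximate aggregates side by side; a minimal sketch (`dbo.Orders` and `OrderKey` are hypothetical names, and the function shipped as `APPROX_COUNT_DISTINCT`):

```sql
-- Compare the exact distinct count with the approximate one.
-- APPROX_COUNT_DISTINCT is typically within 2% of the exact answer
-- while using far less time and memory on high-cardinality columns.
SELECT COUNT(DISTINCT OrderKey)        AS exact_distinct,
       APPROX_COUNT_DISTINCT(OrderKey) AS approx_distinct
FROM dbo.Orders;
```
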
-- **Batch mode on rowstore** enables batch mode without requiring a columnstore index. Batch mode processing allows query operators to process data more efficiently by working on a batch of rows at a time instead of one row at a time. A number of other scalability improvements are tied to batch mode processing. In earlier versions, batch mode only worked in conjunction with columnstore indexes. This feature is enabled by default under database compatibility level 150. Workloads that may benefit:
-  - A significant part of the workload consists of analytical queries (as a rule of thumb, queries with operators such as joins or aggregates processing hundreds of thousands of rows or more), AND
-  - The workload is CPU bound AND
-  - Creating a columnstore index adds too much overhead to the transactional part of your workload, OR creating a columnstore index is not feasible because your application depends on a feature that is not yet supported with columnstore indexes.
+- **Batch mode on rowstore** no longer requires a columnstore index to process a query in batch mode. Batch mode allows query operators to work on a set of rows, instead of just one row at a time. This feature is enabled by default under database compatibility level 150. Batch mode improves the speed of queries that access rowstore tables when all the following are true:
+  - The query uses analytic operators such as joins or aggregation operators.
+  - The query involves 100,000 or more rows.
+  - The query is CPU bound, rather than input/output data bound.
+  - Creation and use of a columnstore index would have one of the following drawbacks:
+    - Would add too much overhead to the query.
+    - Or, is not feasible because your application depends on a feature that is not yet supported with columnstore indexes.
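Since the feature is tied to database compatibility level 150, a minimal sketch of opting a database in:

```sql
-- Batch mode on rowstore engages automatically once the database runs
-- under compatibility level 150; no columnstore index is required.
ALTER DATABASE CURRENT SET COMPATIBILITY_LEVEL = 150;
```

To confirm it was used for a given query, inspect the actual execution plan and check the "Actual Execution Mode" property on the relevant operators.
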
 
 
 - **Table variable deferred compilation** improves plan quality and overall performance for queries referencing table variables. During optimization and initial compilation, this feature will propagate cardinality estimates that are based on actual table variable row counts. This accurate row count information will be used for optimizing downstream plan operations. This feature is enabled by default under database compatibility level 150.
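A sketch of the pattern the paragraph above describes (under compatibility level 150, the statement that references the table variable is compiled after the variable is populated, so the optimizer sees the actual row count rather than a fixed guess):

```sql
-- Populate a table variable, then join against it. With deferred
-- compilation, the SELECT is optimized using @t's real row count.
DECLARE @t TABLE (object_id INT PRIMARY KEY);

INSERT INTO @t (object_id)
SELECT object_id FROM sys.objects;

SELECT o.name
FROM @t AS t
INNER JOIN sys.objects AS o
    ON o.object_id = t.object_id;
```
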
@@ -156,18 +164,17 @@ This feature may provide significant storage savings, depending on the character
 
 ### Lightweight query profiling infrastructure enabled by default
 
-The lightweight query profiling infrastructure provides query performance data more efficiently than standard profiling technologies. Lightweight profiling is now enabled by default. It was introduced in SQL Server 2016 SP1. Lightweight profiling offers a query execution statistics collection mechanism with an expected overhead of 2% CPU, compared with an overhead of up to 75% CPU for the standard query profiling mechanism. On previous versions,
-it was OFF by default. Database administrators could enable it with [trace flag 7412](../t-sql/database-console-commands/dbcc-traceon-trace-flags-transact-sql.md).
+The lightweight query profiling infrastructure provides query performance data more efficiently than standard profiling technologies. Lightweight profiling is now enabled by default. It was introduced in SQL Server 2016 SP1. Lightweight profiling offers a query execution statistics collection mechanism with an expected overhead of 2% CPU, compared with an overhead of up to 75% CPU for the standard query profiling mechanism. On previous versions, it was OFF by default. Database administrators could enable it with [trace flag 7412](../t-sql/database-console-commands/dbcc-traceon-trace-flags-transact-sql.md).
 
-For more information, see [Developers Choice: Query progress – anytime, anywhere](http://blogs.msdn.microsoft.com/sql_server_team/query-progress-anytime-anywhere/).
+For more information, see [Developers Choice: Query progress – anytime, anywhere](http://blogs.msdn.microsoft.com/sql_server_team/query-progress-anytime-anywhere/).
 
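On versions where lightweight profiling is OFF by default, the trace flag mentioned above enables it; once profiling is on, live per-operator progress is exposed through a DMV. A minimal sketch:

```sql
-- Enable lightweight profiling server-wide (pre-SQL Server 2019 behavior).
DBCC TRACEON (7412, -1);

-- Observe live, per-operator execution statistics for running queries.
SELECT session_id, node_id, physical_operator_name,
       row_count, estimate_row_count
FROM sys.dm_exec_query_profiles;
```
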
 ### Data Discovery and Classification
 
-Data discovery and classification provides advanced capabilities natively built into SQL Server for classifying, labeling, and protecting the sensitive data in your databases. Classifying your most sensitive data (business, financial, healthcare, personal information, etc.) can play a pivotal role in your organizational information protection stature. It can serve as infrastructure for:
+Data discovery and classification provides advanced capabilities that are natively built into SQL Server. Classifying and labeling your most sensitive data provides the following benefits:
 
-- Helping meet data privacy standards and regulatory compliance requirements
-- Various security scenarios, such as monitoring (auditing) and alerting on anomalous access to sensitive data
-- Making it easier to identify where sensitive data resides in the enterprise so admins can take the right steps securing the database
+- Helps meet data privacy standards and regulatory compliance requirements.
+- Supports security scenarios, such as monitoring (auditing), and alerting on anomalous access to sensitive data.
+- Makes it easier to identify where sensitive data resides in the enterprise, so that administrators can take the right steps to secure the database.
 
 For more information, see [SQL Data Discovery and Classification](../relational-databases/security/sql-data-discovery-and-classification.md).
 
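Classification metadata can also be managed through T-SQL in SQL Server 2019; a sketch, assuming a hypothetical `dbo.Customers` table with an `Email` column:

```sql
-- Label a column as sensitive. LABEL and INFORMATION_TYPE values here
-- are illustrative choices, not fixed system values.
ADD SENSITIVITY CLASSIFICATION TO dbo.Customers.Email
WITH (LABEL = 'Confidential', INFORMATION_TYPE = 'Contact Info');

-- Review the classifications stored in the current database.
SELECT * FROM sys.sensitivity_classifications;
```
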
@@ -195,27 +202,31 @@ SELECT page_info.*
 FROM sys.dm_exec_requests AS d
 CROSS APPLY sys.fn_PageResCracker(d.page_resource) AS r
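The snippet above is cut short in this capture; the documented pattern pairs `sys.fn_PageResCracker` with `sys.dm_db_page_info` to turn a waiting request's `page_resource` into readable page details:

```sql
-- Crack page_resource into database, file, and page identifiers,
-- then look up details for that page.
SELECT page_info.*
FROM sys.dm_exec_requests AS d
CROSS APPLY sys.fn_PageResCracker(d.page_resource) AS r
CROSS APPLY sys.dm_db_page_info(r.db_id, r.file_id, r.page_id, 'DETAILED') AS page_info;
```
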
-- **Up to five synchronous replicas** – SQL Server 2019 preview increases the limit for synchronous replicas from three (in SQL Server 2017) to five. Configure up to five synchronous replicas (1 primary and up to 4 synchronous secondary replicas) with automatic failover between these replicas.
+- **Up to five synchronous replicas** – SQL Server 2019 preview increases the maximum number of synchronous replicas to 5, up from 3 in SQL Server 2017. You can configure this group of 5 replicas to have automatic failover within the group. There is 1 primary replica, plus 4 synchronous secondary replicas.
 
 - **Secondary to primary replica connection redirection**: Allows client application connections to be directed to the primary replica regardless of the target server specified in the connection string. This capability allows connection redirection without a listener. Use Secondary to primary replica connection redirection in the following cases:
 
-  - The cluster technology does not offer a listener capability
-  - A multi subnet configuration where redirection becomes complex
-  - Read scale-out or disaster recovery scenarios where cluster type is `NONE`
+  - The cluster technology does not offer a listener capability.
+  - A multi subnet configuration where redirection becomes complex.
+  - Read scale-out or disaster recovery scenarios where cluster type is `NONE`.
 
-For details, see [Secondary to primary replica read/write connection redirection (Always On Availability Groups)](../database-engine/availability-groups/windows/secondary-replica-connection-redirection-always-on-availability-groups.md
-).
+For details, see [Secondary to primary replica read/write connection redirection (Always On Availability Groups)](../database-engine/availability-groups/windows/secondary-replica-connection-redirection-always-on-availability-groups.md).
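A sketch of the clause that drives the redirection described above (server and endpoint names are hypothetical; `READ_WRITE_ROUTING_URL` is the replica option that enables secondary-to-primary redirection without a listener):

```sql
-- Single-replica sketch of a clusterless availability group whose
-- replica advertises a read/write routing target.
CREATE AVAILABILITY GROUP [ag1]
WITH (CLUSTER_TYPE = NONE)
FOR REPLICA ON
    N'server1' WITH (
        ENDPOINT_URL = N'TCP://server1:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE = MANUAL,
        PRIMARY_ROLE (READ_WRITE_ROUTING_URL = N'TCP://server1:1433'),
        SECONDARY_ROLE (ALLOW_CONNECTIONS = ALL)
    );
```
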
 
 ### Always Encrypted with secure enclaves
 
-Expands upon Always Encrypted with in-place encryption and rich computations by enabling computations on plaintext data inside a secure enclave on the server side.
+Expands upon Always Encrypted with in-place encryption and rich computations. The expansions come from the enabling of computations on plaintext data, inside a secure enclave on the server side.
+
+Cryptographic operations include the encryption of columns, and the rotating of column encryption keys. These operations can now be issued by using Transact-SQL, and they do not require that data be moved out of the database. Secure enclaves provide Always Encrypted to a broader set of scenarios that have both of the following requirements:
+
+- The demand that sensitive data be protected during access.
+- The requirement that rich computations on protected data be supported within the database system.
 
-Cryptographic operations (encrypting columns, rotating columns encryption keys, etc.), can now be issued using Transact-SQL and do not require moving data out of the database. Secure enclaves unlock Always Encrypted to a much broader set of scenarios and applications that demand sensitive data to be protected in use, while also requiring rich computations on protected data to be supported within the database system. For details, see [Always Encrypted with secure enclaves](../relational-databases/security/encryption/always-encrypted-enclaves.md).
+For details, see [Always Encrypted with secure enclaves](../relational-databases/security/encryption/always-encrypted-enclaves.md).
 
 >[!NOTE]
 >Always Encrypted with secure enclaves is only available on Windows OS.
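A quick way to see whether an enclave is configured on an instance is the server configuration option for the enclave type; a sketch:

```sql
-- 0 = no enclave configured; a nonzero value indicates an enclave type
-- (for example, a virtualization-based security enclave).
SELECT [name], [value], [value_in_use]
FROM sys.configurations
WHERE [name] = 'column encryption enclave type';
```
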