---
title: "What's new in SQL Server 2019 | Microsoft Docs"
ms.date: 06/26/2019
ms.prod: sql
ms.reviewer: ""
ms.technology: release-landing
ms.topic: article
author: MikeRayMSFT
ms.author: mikeray
monikerRange: ">=sql-server-ver15||=sqlallproducts-allversions"
---
# What's new in [!INCLUDEsql-server-2019]
[!INCLUDEtsql-appliesto-ss-xxxx-xxxx-xxx-md]
[!INCLUDEsql-server-2019] builds on previous releases to grow SQL Server as a platform that gives you choices of development languages, data types, on-premises or cloud, and operating systems. This article summarizes what is new for [!INCLUDEsql-server-2019].
This article summarizes the features in each release and links to more details for each feature. The Details section provides technical information that may not yet be available in the core documentation. The other sections of this article cover all of the features released to date for [!INCLUDEsql-server-2019].
For more information and known issues, see the [[!INCLUDEsql-server-2019] Release Notes](sql-server-ver15-release-notes.md).
Use the latest tools for the best experience with [!INCLUDEsql-server-2019].
Community technology preview (CTP) 3.1 is the latest public release of [!INCLUDEsql-server-2019]. This release includes improvements from previous CTP releases to fix bugs, improve security, and optimize performance.
[!INCLUDEctp-support-exclusion]
For specific features excluded from support, see the release notes.
In addition, the following features are added or enhanced for [!INCLUDEsql-server-2019] CTP 3.1.
| New feature or update | Details |
|---|---|
| mssqlctl command changes | mssqlctl cluster commands have been renamed to mssqlctl bdc. For more information, see the mssqlctl reference. |
| New status commands for mssqlctl | mssqlctl adds new commands to complement existing monitoring commands. These replace the cluster administration portal, which is removed in this release. |
| Spark compute pools | Create additional nodes to increase Spark compute power without having to scale up storage. In addition, you can start storage pool nodes that aren't used for Spark. Spark and storage are decoupled. For more information, see Configure storage without spark. |
| MSSQL Spark connector | Support for read/write to data pool external tables. Previous releases supported read/write to MASTER instance tables only. For more information, see How to read and write to SQL Server from Spark using the MSSQL Spark Connector. |
| Machine Learning using MLeap | Train an MLeap machine learning model in Spark and score it in SQL Server using the Java language extension. |
| New feature or update | Details |
|---|---|
| Index encrypted columns | Create indexes on columns encrypted using randomized encryption and enclave-enabled keys, to improve the performance of rich queries (using LIKE and comparison operators). See Always Encrypted with Secure Enclaves. |
| Set MIN and MAX server memory values at setup | During setup, you can set server memory values. Use the default values, the calculated recommended values, or manually specify your own values after choosing the Recommended option. See Server Memory Server Configuration Options. |
| New graph function - SHORTEST_PATH | Use SHORTEST_PATH inside MATCH to find the shortest path between any two nodes in a graph or to perform arbitrary length traversals. |
| Partition tables and indexes for graph databases | The data of partitioned tables and indexes is divided into units that can be spread across more than one filegroup in a graph database. |
| New option for indexes - OPTIMIZE_FOR_SEQUENTIAL_KEY | Turns on an optimization within the database engine that helps improve throughput for high-concurrency inserts into the index. This option is intended for indexes that are prone to last-page insert contention, typically seen with indexes that have a sequential key such as an identity column, sequence, or date/time column. See CREATE INDEX for more information. |
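As a minimal sketch of how the option is applied (the table and index names below are illustrative, not from the product documentation):

```sql
-- Illustrative table with an ever-increasing identity key,
-- the pattern prone to last-page insert contention.
CREATE TABLE dbo.OrderEvents (
    OrderEventID INT IDENTITY PRIMARY KEY,
    Payload      VARCHAR(100)
);

-- Enable the optimization on an index over the sequential key.
CREATE INDEX IX_OrderEvents_OrderEventID
ON dbo.OrderEvents (OrderEventID)
WITH (OPTIMIZE_FOR_SEQUENTIAL_KEY = ON);
```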
| New feature or update | Details |
|---|---|
| Tempdb improvements | By default, a new installation of SQL Server on Linux creates multiple tempdb data files based on the number of logical cores (with up to 8 data files). This does not apply to in-place minor or major version upgrades. Each tempdb file is 8 MB with an auto growth of 64 MB. This behavior is similar to the default SQL Server installation on Windows. |
| New feature or update | Details |
|---|---|
| mssqlctl updates | Several mssqlctl command and parameter updates. This includes an update to the mssqlctl login command, which now targets the controller username and endpoint. |
| Storage enhancements | Support for different storage configurations for logs and data. Also, the number of persistent volume claims for a big data cluster has been reduced. |
| Multiple compute pool instances | Support for multiple compute pool instances. |
| New pool behavior and capabilities | The compute pool is now used by default for storage pool and data pool operations in a ROUND_ROBIN distribution only. The data pool can now use a new REPLICATED distribution type, which means that the same data is present on all the data pool instances. |
| External table improvements | External tables of HADOOP data source type now support reading rows up to 1 MB in size. External tables (ODBC, storage pool, data pool) now support rows as wide as a SQL Server table. |
| New feature or update | Details |
|---|---|
| SQL Server Language Extensions - Java language extension | The Microsoft Extensibility SDK for Java for Microsoft SQL Server is now open sourced and available on GitHub. |
| Register external languages | New DDL, CREATE EXTERNAL LANGUAGE, registers external languages, like Java, in SQL Server. See CREATE EXTERNAL LANGUAGE. |
| More supported data types for Java | See Java data types. |
| Custom capture policy for the Query Store | When enabled, additional Query Store configurations are available under a new Query Store Capture Policy setting, to fine tune data collection in a specific server. For more information, see ALTER DATABASE SET Options. |
| In-memory database adds new DDL syntax to control the hybrid buffer pool. 2 | With hybrid buffer pool, database pages sitting on database files placed on a persistent memory (PMEM) device will be directly accessed when required. |
| New in-memory database feature, memory-optimized tempdb metadata added. | See Memory-Optimized TempDB Metadata |
| Linked Servers support UTF-8 character encoding. | Collation and Unicode Support |
| BIN2_UTF8 collation name changed to Latin1_General_100_BIN2_UTF8. | Collation and Unicode Support |
| SQL Server Setup includes MaxDOP recommendations that follow the documented guidelines. | Configure the max degree of parallelism Server Configuration Option |
| sys.dm_exec_query_plan_stats returns more information about degree of parallelism and memory grants for query plans. | sys.dm_exec_query_plan_stats 1 |
1 This is an opt-in feature and requires trace flag 2451 to be enabled.
2 A trace flag is no longer required to enable the hybrid buffer pool.
[!INCLUDEmaster-data-services]
| New feature or update | Details |
|---|---|
| [!INCLUDEsql-server-2019] [!INCLUDEmaster-data-services] supports Azure SQL Database managed instance databases. | Host [!INCLUDEmaster-data-services] on a managed instance. See [[!INCLUDEmaster-data-services] installation and configuration](../master-data-services/master-data-services-installation-and-configuration.md#SetUpWeb). |
| New feature or update | Details |
|---|---|
| MDX query support for tabular models with calculation groups. | This release removes an earlier limitation in calculation groups. |
| Dynamic formatting of measures using calculation groups. | This feature allows you to conditionally change format strings for measures with calculation groups. For example, with currency conversion, a measure can be displayed using different foreign currency formats. |
| New feature or update | Details |
|---|---|
| Deployment profiles | Use default and customized deployment configuration JSON files for big data cluster deployments instead of environment variables. |
| Prompted deployments | mssqlctl cluster create now prompts for any necessary settings for default deployments. |
| Service endpoint and pod name changes | For more information, see the big data cluster release notes. |
| mssqlctl improvements | Use mssqlctl to list external endpoints and check the version of mssqlctl with the --version parameter. |
| Offline install | Guidance for offline big data cluster deployments. |
| HDFS tiering improvements | HDFS tiering against Amazon S3 storage. OAuth support for ADLS Gen2. Caching functionality for better performance. For more information, see HDFS tiering. |
| Spark to SQL Server connector | Read and write to SQL Server from Spark using the MSSQL JDBC Connector |
| New feature or update | Details |
|---|---|
| PolyBase on Linux. | Install PolyBase on Linux for non-Hadoop connectors. PolyBase type mapping. |
| New Java language SDK for SQL Server. | Simplifies development of Java programs that can be run from SQL Server. See What's new in SQL Server Machine Learning Services. |
| Expanded the scope of plans available in DMF sys.dm_exec_query_plan_stats. | sys.dm_exec_query_plan_stats 1 |
| New LAST_QUERY_PLAN_STATS database scoped configuration to enable sys.dm_exec_query_plan_stats. | ALTER DATABASE SCOPED CONFIGURATION |
| New spatial reference identifiers (SRIDs). | Australian GDA2020 provides more robust and accurate datum which is more closely aligned to global positioning systems. The new SRIDs are: - 7843 - geographic 2D - 7844 - geographic 3D sys.spatial_reference_systems view contains definitions of new SRIDs. |
1 This is an opt-in feature and requires trace flag 2451 to be enabled, or the LAST_QUERY_PLAN_STATS database scoped configuration to be set to ON.
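As a sketch, the database scoped configuration route (run in the target database) looks like this:

```sql
-- Collect the last actual execution plan per query via lightweight
-- profiling, without enabling the global trace flag.
ALTER DATABASE SCOPED CONFIGURATION SET LAST_QUERY_PLAN_STATS = ON;
```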
| New feature or update | Details |
|---|---|
| Guidance on GPU support for running deep learning with TensorFlow in Spark. | Deploy a big data cluster with GPU support and run TensorFlow. |
| SqlDataPool and SqlStoragePool data sources are no longer created by default. | Create these manually as needed. See the known issues. |
| INSERT INTO SELECT support for the data pool. | For an example, see Tutorial: Ingest data into a SQL Server data pool with Transact-SQL. |
| FORCE SCALEOUTEXECUTION and DISABLE SCALEOUTEXECUTION options. | See Big data clusters release notes. |
| Updated AKS deployment recommendations. | When evaluating big data clusters on AKS, we now recommend using a single node of size Standard_L8s. |
| Spark runtime upgrade to Spark 2.4. | |
| New feature or update | Details |
|---|---|
| Truncation error message defaults to include table and column names, and truncated value. | VERBOSE_TRUNCATION_WARNINGS |
| New DMF sys.dm_exec_query_plan_stats returns the equivalent of the last known actual execution plan for most queries. | sys.dm_exec_query_plan_stats 1 |
| The new query_post_execution_plan_profile Extended Event collects the equivalent of an actual execution plan based on lightweight profiling, unlike query_post_execution_showplan which uses standard profiling. | Query profiling infrastructure |
| Transparent Data Encryption (TDE) scan - suspend and resume. | Transparent Data Encryption (TDE) scan - suspend and resume |
1 This is an opt-in feature and requires trace flag 2451 to be enabled.
| New feature or update | Details |
|---|---|
| Many-to-many relationships in tabular models. | Many-to-many relationships in tabular models |
| Property settings for resource governance. | Property settings for resource governance |
| New feature or update | Details |
|---|---|
| Submit Spark jobs on big data clusters in IntelliJ. | Submit Spark jobs on SQL Server big data clusters in IntelliJ |
| Common CLI for application deployment and cluster management. | How to deploy an app on SQL Server 2019 big data cluster (preview) |
| VS Code extension to deploy applications to a big data cluster. | How to use VS Code to deploy applications to SQL Server big data clusters |
| Changes to the mssqlctl tool command usage. | For more details see the known issues for mssqlctl. |
| Use Sparklyr in big data cluster. | Use Sparklyr in SQL Server 2019 big data cluster |
| Mount external HDFS-compatible storage into big data cluster with HDFS tiering. | See HDFS tiering. |
| New unified connection experience for the SQL Server master instance and the HDFS/Spark Gateway. | See SQL Server master instance and the HDFS/Spark Gateway. |
| Deleting a cluster with mssqlctl cluster delete now deletes only the objects in the namespace that were part of the big data cluster. | The namespace is not deleted. However, in earlier releases this command did delete the entire namespace. |
| Security endpoint names have been changed and consolidated. | service-security-lb and service-security-nodeport have been consolidated into the endpoint-security endpoint. |
| Proxy endpoint names have been changed and consolidated. | service-proxy-lb and service-proxy-nodeport have been consolidated into the endpoint-service-proxy endpoint. |
| Controller endpoint names have been changed and consolidated. | service-mssql-controller-lb and service-mssql-controller-nodeport have been consolidated into the endpoint-controller endpoint. |
| New feature or update | Details |
|---|---|
| Accelerated database recovery can be enabled per-database. | Accelerated database recovery |
| Query Store plan forcing support for fast forward and static cursors. | Plan forcing support for fast forward and static cursors |
| Reduced recompilations for workloads using temporary tables across multiple scopes. | Reduced recompilations for workloads |
| Improved indirect checkpoint scalability. | Improved indirect checkpoint scalability |
| Adds support to use UTF-8 character encoding with a BIN2 collation (UTF8_BIN2). | Collation and Unicode Support |
| Define cascaded delete actions on an edge constraint in a graph database. | Edge constraints |
| Enable or disable LIGHTWEIGHT_QUERY_PROFILING with the new database scoped configuration. | LIGHTWEIGHT_QUERY_PROFILING |
| New feature or update | Details |
|---|---|
| Azure Data Studio supports Azure Active Directory. | Azure Data Studio |
| Notebook view UI has moved into Azure Data Studio core. | How to manage notebooks in Azure Data Studio |
| Added new wizard to create external data sources from Hadoop Distributed File System (HDFS) to SQL Server Big Data Cluster. | Tools |
| Improved Notebook viewer UI. | Tools |
| Added new Notebook APIs. | Tools |
| Added "Reinstall Notebook dependencies" command to assist with Python package updates. | Tools |
| Launch Azure Data Studio from SSMS. | Tools |
| New feature or update | Details |
|---|---|
| Calculation groups in tabular model. | Calculation groups in tabular model |
| New feature or update | Details |
|---|---|
| Use SparkR from Azure Data Studio on a big data cluster. | |
| Deploy Python and R apps. | Deploy applications using mssqlctl |
| New feature or update | Details |
|---|---|
| Adds support to use UTF-8 character encoding with SQL Server Replication. | Collation and Unicode Support |
| New feature or update | Details |
|---|---|
| Always On Availability Group on Docker containers with Kubernetes. | Always On Availability Groups for containers |
| New feature or update | Details |
|---|---|
| Adds support to select UTF-8 collation as default during [!INCLUDEsql-server-2019] setup. | Collation and Unicode Support |
| Scalar UDF inlining automatically transforms scalar user-defined functions (UDF) into relational expressions and embeds them in the calling SQL query. | Scalar UDF Inlining |
| The dynamic management view sys.dm_exec_requests column command shows SELECT (STATMAN) if a SELECT is waiting for a synchronous statistics update operation to complete prior to continuing query execution. | sys.dm_exec_requests |
| The new wait type WAIT_ON_SYNC_STATISTICS_REFRESH is surfaced in the sys.dm_os_wait_stats dynamic management view. It shows the accumulated instance-level time spent on synchronous statistics refresh operations. | sys.dm_os_wait_stats |
| Hybrid buffer pool is a new feature of the SQL Server database engine where database pages sitting on database files placed on a persistent memory (PMEM) device will be directly accessed when required. | Hybrid buffer pool |
| Use derived table or view aliases in graph match query | Graph Edge Constraints |
| New feature or update | Details |
|---|---|
| New container registry for SQL Server. | Get started with SQL Server containers on Docker |
| New feature or update | Details |
|---|---|
| Azure Data Studio supports connecting to and managing [!INCLUDEsql-server-2019] big data clusters. | |
| New feature or update | Details |
|---|---|
| Deploy a Big Data cluster with [!INCLUDEssNoVersion] and Spark Linux containers on Kubernetes. | |
| Access your big data from HDFS. | |
| Run advanced analytics and machine learning with Spark. | |
| Use Spark streaming to stream data to SQL data pools. | |
| Run Query books that provide a notebook experience in Azure Data Studio. | Data engineering |
| New feature or update | Details |
|---|---|
| Database COMPATIBILITY_LEVEL 150 is added. | ALTER DATABASE Compatibility Level (Transact-SQL) |
| Resumable Online Index Create. | CREATE INDEX (Transact-SQL) |
| Row mode memory grant feedback. | Row mode memory grant feedback |
| Approximate COUNT DISTINCT. | Approximate query processing |
| Batch mode on rowstore. | Batch mode on rowstore |
| Table variable deferred compilation. | Table variable deferred compilation |
| Java language extension. | Java language extension |
| Merge your current graph data from node or edge tables with new data using the MATCH predicates in the MERGE statement. | |
| Edge constraints. | Graph edge constraints |
| Database scoped default setting for online and resumable DDL operations. | |
| Availability groups support up to 5 synchronous secondary replicas. | Availability groups |
| Secondary to primary replica read/write connection redirection | Secondary to primary replica read/write connection redirection-Always On Availability Groups |
| SQL Data Discovery and Classification. | SQL Data Discovery & Classification |
| Expanded support for persistent memory devices. | Hybrid Buffer Pool |
| Support for columnstore statistics in DBCC CLONEDATABASE. | Stats blob for columnstore indexes |
| sp_estimate_data_compression_savings introduces COLUMNSTORE and COLUMNSTORE_ARCHIVE. | Considerations for Columnstore Indexes |
| Machine Learning services supported on Windows Server Failover Cluster. | What's new - SQL Server Machine Learning Services |
| Machine Learning support for partition-based modeling. | What's new - SQL Server Machine Learning Services |
| Lightweight query profiling infrastructure enabled by default | Lightweight query execution statistics profiling infrastructure v3 |
| New PolyBase connectors for [!INCLUDEssNoVersion], Oracle, Teradata, and MongoDB. | What is PolyBase? |
| sys.dm_db_page_info(database_id, file_id, page_id, mode) returns information about a page in a database. | sys.dm_db_page_info (Transact-SQL) |
| Always Encrypted with secure enclaves. | Always Encrypted with secure enclaves |
| Build and rebuild online clustered columnstore index. | Perform Index Operations Online |
| New feature or update | Details |
|---|---|
| Replication support | SQL Server Replication on Linux |
| Support for the Microsoft Distributed Transaction Coordinator (MSDTC) | How to configure MSDTC on Linux |
| OpenLDAP support for third-party AD providers | Tutorial: Use Active Directory authentication with SQL Server on Linux |
| Machine Learning on Linux | Configure Machine Learning on Linux |
| New feature or update | Details |
|---|---|
| The Master Data Services (MDS) portal no longer depends on Silverlight. | All the former Silverlight components have been replaced with HTML controls. |
| New feature or update | Details |
|---|---|
| Certificate management in SQL Server Configuration Manager | Certificate Management (SQL Server Configuration Manager) |
| New feature or update | Details |
|---|---|
| Azure Data Studio supports connecting to and managing [!INCLUDEsql-server-2019] big data clusters. | What is Azure Data Studio |
| Supports scenarios using SQL Server big data clusters. | SQL Server 2019 extension (preview) |
| SQL Server Management Studio (SSMS) 18.0 (preview): Supports [!INCLUDEsql-server-2019]. | |
| Support for Always Encrypted with secure enclaves. | Always Encrypted with Secure Enclaves |
As of CTP 2.4, [!INCLUDEsql-server-2019] does not introduce new features for the following services:
- [!INCLUDEssNoVersion] [!INCLUDEssISnoversion] (SSIS)
- [!INCLUDEssNoVersion] [!INCLUDEssRSnoversion] (SSRS)
[!INCLUDEsql-server-2019] big data clusters enable new scenarios, including the following:
- GPU support for running deep learning with TensorFlow in Spark. (CTP 2.4)
- Spark runtime upgrade to Spark 2.4. (CTP 2.4)
- INSERT INTO SELECT support for the data pool. (CTP 2.4)
- FORCE SCALEOUTEXECUTION and DISABLE SCALEOUTEXECUTION option clause for external table queries. (CTP 2.4)
- [Submit Spark jobs on [!INCLUDEsql-server-2019] big data clusters in IntelliJ](../big-data-cluster/spark-submit-job-intellij-tool-plugin.md). (CTP 2.3)
- Application deployment and management experience for a variety of data-related apps, including operationalizing machine learning models using R and Python, running SQL Server Integration Services (SSIS) jobs, and more. (CTP 2.3)
- [Use Sparklyr in [!INCLUDEsql-server-2019] big data clusters](../big-data-cluster/sparklyr-from-RStudio.md). (CTP 2.3)
- Mount external HDFS-compatible storage into big data cluster with HDFS tiering. (CTP 2.3)
- Use SparkR from Azure Data Studio on a big data cluster. (CTP 2.2)
- Deploy Python and R apps. (CTP 2.2)
- Deploy a Big Data cluster with [!INCLUDEssNoVersion] and Spark Linux containers on Kubernetes. (CTP 2.0)
- Access your big data from HDFS. (CTP 2.0)
- Run advanced analytics and machine learning with Spark. (CTP 2.0)
- Use Spark streaming to stream data to SQL data pools. (CTP 2.0)
- Run Query books that provide a notebook experience in Azure Data Studio. (CTP 2.0)
[!INCLUDE Big data clusters preview]
[!INCLUDEsql-server-2019] introduces or enhances the following new features for the [!INCLUDEssDEnoversion].
The new query_post_execution_plan_profile Extended Event collects the equivalent of an actual execution plan based on lightweight profiling, unlike query_post_execution_showplan which uses standard profiling. For more information, see Query profiling infrastructure.
```sql
CREATE EVENT SESSION [QueryPlanOld] ON SERVER
ADD EVENT sqlserver.query_post_execution_showplan(
    ACTION(sqlos.task_time, sqlserver.database_id,
        sqlserver.database_name, sqlserver.query_hash_signed,
        sqlserver.query_plan_hash_signed, sqlserver.sql_text))
ADD TARGET package0.event_file(SET filename = N'C:\Temp\QueryPlanStd.xel')
WITH (MAX_MEMORY=4096 KB, EVENT_RETENTION_MODE=ALLOW_SINGLE_EVENT_LOSS,
    MAX_DISPATCH_LATENCY=30 SECONDS, MAX_EVENT_SIZE=0 KB,
    MEMORY_PARTITION_MODE=NONE, TRACK_CAUSALITY=OFF, STARTUP_STATE=OFF);
```

```sql
CREATE EVENT SESSION [QueryPlanLWP] ON SERVER
ADD EVENT sqlserver.query_post_execution_plan_profile(
    ACTION(sqlos.task_time, sqlserver.database_id,
        sqlserver.database_name, sqlserver.query_hash_signed,
        sqlserver.query_plan_hash_signed, sqlserver.sql_text))
ADD TARGET package0.event_file(SET filename=N'C:\Temp\QueryPlanLWP.xel')
WITH (MAX_MEMORY=4096 KB, EVENT_RETENTION_MODE=ALLOW_SINGLE_EVENT_LOSS,
    MAX_DISPATCH_LATENCY=30 SECONDS, MAX_EVENT_SIZE=0 KB,
    MEMORY_PARTITION_MODE=NONE, TRACK_CAUSALITY=OFF, STARTUP_STATE=OFF);
```

The new DMF sys.dm_exec_query_plan_stats returns the equivalent of the last known actual execution plan for most queries, based on lightweight profiling. For more information, see sys.dm_exec_query_plan_stats and Query profiling infrastructure. See the following script as an example:

```sql
SELECT *
FROM sys.dm_exec_cached_plans
CROSS APPLY sys.dm_exec_query_plan_stats(plan_handle)
WHERE objtype = 'Trigger';
GO
```

This is an opt-in feature and requires trace flag 2451 to be enabled.
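As a sketch, opting in at the instance level might look like this (`-1` applies the flag globally):

```sql
-- Enable trace flag 2451 globally so that
-- sys.dm_exec_query_plan_stats starts returning last-known plans.
DBCC TRACEON (2451, -1);
```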
In order to enable Transparent Data Encryption (TDE) on a database, [!INCLUDEssNoVersion] must perform an encryption scan which reads each page from the data file(s) into the buffer pool, and then writes the encrypted pages back out to disk. To provide the user with more control over the encryption scan, [!INCLUDEsql-server-2019] introduces TDE scan - suspend and resume syntax so that you can pause the scan while the workload on the system is heavy, or during business-critical hours, and then resume the scan later.
Use the following syntax to pause the TDE encryption scan:

```sql
ALTER DATABASE <db_name> SET ENCRYPTION SUSPEND;
```

Similarly, the following syntax resumes the TDE encryption scan:

```sql
ALTER DATABASE <db_name> SET ENCRYPTION RESUME;
```

To show the current state of the encryption scan, encryption_scan_state has been added to the sys.dm_database_encryption_keys dynamic management view. There is also a new column called encryption_scan_modify_date, which contains the date and time of the last encryption scan state change. If the [!INCLUDEssNoVersion] instance is restarted while the encryption scan is in a suspended state, a message is logged in the error log on startup indicating that an existing scan has been paused.
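For example, a quick check of the scan state for each encrypted database could be sketched as follows, using the columns described above:

```sql
-- Inspect TDE encryption scan progress per database.
SELECT DB_NAME(database_id)       AS database_name,
       encryption_state,
       encryption_scan_state,
       encryption_scan_modify_date
FROM sys.dm_database_encryption_keys;
```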
Accelerated database recovery greatly improves database availability, especially in the presence of long-running transactions, by redesigning the SQL Server database engine recovery process. Database recovery is the process SQL Server uses for each database to start in a transactionally consistent - or clean - state. A database with accelerated database recovery enabled completes recovery significantly faster after a failover or other non-clean shutdown. As of CTP 2.3, accelerated database recovery can be enabled per-database using the following syntax:
```sql
ALTER DATABASE <db_name> SET ACCELERATED_DATABASE_RECOVERY = { ON | OFF };
```

> [!NOTE]
> This syntax is not required to take advantage of this feature in Azure SQL Database, where it is enabled by request during public preview. After it is enabled, the feature is on by default.
If you have critical databases that are prone to large transactions, experiment with this feature during the preview, and provide feedback to the [[!INCLUDEssNoVersion] team](https://aka.ms/sqlfeedback).
Query Store now supports the ability to force query execution plans for fast forward and static T-SQL and API cursors. Forcing is now supported via sp_query_store_force_plan or through SQL Server Management Studio Query Store reports.
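As a sketch of the stored-procedure route, where the query and plan IDs below are placeholders to be looked up first in the Query Store catalog views:

```sql
-- Find the cursor query and its plan in the Query Store catalog views
-- (sys.query_store_query and sys.query_store_plan), then force the plan.
-- The literal IDs are illustrative only.
EXEC sp_query_store_force_plan @query_id = 42, @plan_id = 1;
```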
Prior to this feature, when referencing a temporary table with a data manipulation language (DML) statement (SELECT, INSERT, UPDATE, DELETE), if the temporary table was created by an outer scope batch, this would result in a recompile of the DML statement each time it is executed. With this improvement, SQL Server performs additional lightweight checks to avoid unnecessary recompilations:
- Check if the outer-scope module used for creating the temporary table at compile time is the same one used for consecutive executions.
- Keep track of any data definition language (DDL) changes made at initial compilation and compare them with DDL operations for consecutive executions.
The end result is a reduction in extraneous recompilations and CPU-overhead.
In previous versions of [!INCLUDEssNoVersion], users may experience non-yielding scheduler errors when there is a database that generates a large number of dirty pages, such as tempdb. [!INCLUDEsql-server-2019] introduces improved scalability for Indirect Checkpoint, which should help avoid these errors on databases that have a heavy UPDATE/INSERT workload.
Full support for the widely used UTF-8 character encoding as an import or export encoding, or as database-level or column-level collation for text data. UTF-8 is allowed in the CHAR and VARCHAR datatypes, and is enabled when creating or changing an object's collation to a collation with the UTF8 suffix.
For example, LATIN1_GENERAL_100_CI_AS_SC to LATIN1_GENERAL_100_CI_AS_SC_UTF8. UTF-8 is only available to Windows collations that support supplementary characters, as introduced in [!INCLUDEssSQL11]. NCHAR and NVARCHAR allow UTF-16 encoding only, and remain unchanged.
This feature may provide significant storage savings, depending on the character set in use. For example, changing an existing column data type with ASCII (Latin) strings from NCHAR(10) to CHAR(10) using a UTF-8 enabled collation translates into a 50% reduction in storage requirements. This reduction is because NCHAR(10) requires 20 bytes for storage, whereas CHAR(10) requires 10 bytes for the same Unicode string.
For more information, see Collation and Unicode Support.
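A minimal sketch of a column-level UTF-8 collation, using the collation name mentioned above (the table and column names are illustrative):

```sql
-- CHAR/VARCHAR data in this column is stored as UTF-8 because the
-- collation carries the _UTF8 suffix.
CREATE TABLE dbo.Customers (
    CustomerName VARCHAR(100) COLLATE Latin1_General_100_CI_AS_SC_UTF8
);
```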
CTP 2.1 Adds support to select UTF-8 collation as default during [!INCLUDEsql-server-2019] setup.
CTP 2.2 Adds support to use UTF-8 character encoding with SQL Server Replication.
CTP 2.3 Adds support to use UTF-8 character encoding with a BIN2 collation (UTF8_BIN2).
Scalar UDF inlining automatically transforms scalar user-defined functions (UDF) into relational expressions and embeds them in the calling SQL query, thereby improving the performance of workloads that leverage scalar UDFs. Scalar UDF inlining facilitates cost-based optimization of operations inside UDFs, and results in efficient plans that are set-oriented and parallel as opposed to inefficient, iterative, serial execution plans. This feature is enabled by default under database compatibility level 150.
For more information, see Scalar UDF inlining.
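As an illustrative sketch (function and column names are hypothetical), a simple scalar UDF that qualifies for inlining, together with a check of the is_inlineable flag in sys.sql_modules:

```sql
-- A simple scalar UDF; under compatibility level 150 the optimizer
-- can inline it into the calling query as a relational expression.
CREATE FUNCTION dbo.DiscountedPrice (@price MONEY, @discount FLOAT)
RETURNS MONEY
AS
BEGIN
    RETURN @price * (1 - @discount);
END;
GO

-- Check whether the engine considers the function inlineable.
SELECT OBJECT_NAME(object_id) AS udf_name, is_inlineable
FROM sys.sql_modules
WHERE object_id = OBJECT_ID('dbo.DiscountedPrice');
```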
The error message ID 8152 String or binary data would be truncated is familiar to many [!INCLUDEssNoVersion] developers and administrators who develop or maintain data movement workloads; the error is raised during data transfers between a source and a destination with different schemas when the source data is too large to fit into the destination data type. This error message can be time-consuming to troubleshoot. [!INCLUDEsql-server-2019] introduces a new, more specific error message (2628) for this scenario:
String or binary data would be truncated in table '%.*ls', column '%.*ls'. Truncated value: '%.*ls'.
The new error message 2628 provides more context for the data truncation problem, simplifying the troubleshooting process.
CTP 2.1 and CTP 2.2 This is an opt-in error message and requires trace flag 460 to be enabled.
CTP 2.4 Error message 2628 becomes the default truncation message and replaces error message 8152 under database compatibility level 150. A new database scoped configuration VERBOSE_TRUNCATION_WARNINGS is introduced to switch between error message 2628 and 8152 when the database compatibility level is 150. For more information, see ALTER DATABASE SCOPED CONFIGURATION.
For database compatibility level 140 or lower, error message 2628 remains an opt-in error message that requires trace flag 460 to be enabled.
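Sketching both opt-in paths described above:

```sql
-- Under compatibility level 150, message 2628 is the default;
-- switch back to the legacy message 8152 if needed:
ALTER DATABASE SCOPED CONFIGURATION SET VERBOSE_TRUNCATION_WARNINGS = OFF;

-- Under compatibility level 140 or lower, opt in to message 2628:
DBCC TRACEON (460);
```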
[!INCLUDEsql-server-2019] provides improved diagnostic data for long-running queries that wait on synchronous statistics update operations. The dynamic management view sys.dm_exec_requests column command shows SELECT (STATMAN) if a SELECT is waiting for a synchronous statistics update operation to complete prior to continuing query execution. Additionally, the new wait type WAIT_ON_SYNC_STATISTICS_REFRESH is surfaced in the sys.dm_os_wait_stats dynamic management view. It shows the accumulated instance-level time spent on synchronous statistics refresh operations.
Hybrid buffer pool is a new feature of the SQL Server database engine where database pages sitting on database files placed on a persistent memory (PMEM) device will be directly accessed when required. Since PMEM devices provide very low latency for data access, the engine can forgo making a copy of the data in a "clean pages" area of the buffer pool and simply access the page directly on PMEM. Access is performed using memory mapped I/O, as is the case with enlightenment. This brings performance benefits from avoiding a copy of the page to DRAM, and from the avoidance of the I/O stack of the operating system to access the page on persistent storage. This feature is available on both SQL Server on Windows and SQL Server on Linux.
For more information, see Hybrid buffer pool.
[!INCLUDEsql-server-2019] introduces static data masking. You can use static data masking to sanitize sensitive data in copies of SQL Server databases. Static data masking helps create a sanitized copy of databases where all sensitive information has been altered in a way that makes the copy sharable with non-production users. Static data masking can be used for development, testing, analytics and business reporting, compliance, troubleshooting, and any other scenario where specific data cannot be copied to different environments.
Static data masking operates at the column level. Select which columns to mask, and for each column selected, specify a masking function. Static data masking copies the database and then applies the specified masking functions to the columns.
Data masking is the process of applying a mask to a database to hide sensitive information, replacing it with new or scrubbed data. Microsoft offers two masking options: static data masking and dynamic data masking. Dynamic data masking was introduced in [!INCLUDEssSQL16]. The following table compares the two solutions:
| Static data masking | Dynamic data masking |
|---|---|
| Happens on a copy of the database | Happens on the original database |
| Original data not retrievable | Original data intact |
| Mask occurs at the storage level | Mask occurs on-the-fly at query time |
| All users have access to the same masked data | Mask varies based on user permission |
| Geared toward continuous team-wide access | Geared toward punctual user-specific access |
Database `COMPATIBILITY_LEVEL` 150 is added. To enable it for a specific user database, execute:

```sql
ALTER DATABASE database_name SET COMPATIBILITY_LEVEL = 150;
```

Resumable online index create allows an index create operation to pause and resume later from where the operation was paused or failed, instead of restarting from the beginning.
Resumable online index create supports the following scenarios:
- Resume an index create operation after a failure, such as a database failover or running out of disk space.
- Pause an ongoing index create operation and resume it later, temporarily freeing system resources as required.
- Create large indexes without using as much log space or a long-running transaction that blocks other maintenance activities, allowing log truncation.
Without this feature, a failed online index create operation has to be executed again and restarted from the beginning.
With this release, the resumable functionality is extended, adding index create to the already available resumable online index rebuild.
In addition, this feature can be set as the default for a specific database using database scoped default setting for online and resumable DDL operations.
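The pause and resume flow can be sketched as follows; the table, column, and index names are illustrative.

```sql
-- Start a resumable online index create.
CREATE INDEX IX_Orders_CustomerID
    ON dbo.Orders (CustomerID)
    WITH (ONLINE = ON, RESUMABLE = ON, MAX_DURATION = 60 MINUTES);

-- Pause the operation; completed work is preserved.
ALTER INDEX IX_Orders_CustomerID ON dbo.Orders PAUSE;

-- Resume from the point where the operation was paused.
ALTER INDEX IX_Orders_CustomerID ON dbo.Orders RESUME;

-- Monitor paused or running resumable operations.
SELECT name, percent_complete, state_desc
FROM sys.index_resumable_operations;
```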
For more information, see Resumable Online Index Create.
Convert row-store tables into columnstore format. Creating clustered columnstore indexes (CCI) was an offline process in previous versions of [!INCLUDEssNoVersion], requiring all changes to stop while the CCI is created. With [!INCLUDEsql-server-2019] and [!INCLUDEssSDSfull], you can create or re-create a CCI online. The workload is not blocked, and all changes made to the underlying data are transparently added into the target columnstore table. Examples of new [!INCLUDEtsql] statements that can be used are:
```sql
CREATE CLUSTERED COLUMNSTORE INDEX cci
ON <tableName>
WITH (ONLINE = ON);
```

```sql
ALTER INDEX cci
ON <tableName>
REBUILD WITH (ONLINE = ON);
```

Expands upon Always Encrypted with in-place encryption and rich computations. The expansions come from enabling computations on plaintext data inside a secure enclave on the server side.
Cryptographic operations include the encryption of columns and the rotation of column encryption keys. These operations can now be issued by using [!INCLUDEtsql], and they do not require that data be moved out of the database. Secure enclaves extend Always Encrypted to a broader set of scenarios that have both of the following requirements:
- The demand that sensitive data are protected from high-privilege, yet unauthorized users, including database administrators, system administrators, cloud operators, or malware.
- The requirement that rich computations on protected data be supported within the database system.
For details, see Always Encrypted with secure enclaves.
Note
Always Encrypted with secure enclaves is only available on Windows OS.
- Row mode memory grant feedback expands on the memory grant feedback feature introduced in [!INCLUDEssSQL17] by adjusting memory grant sizes for both batch and row mode operators. For an excessive memory grant condition, if the granted memory is more than two times the size of the actual used memory, memory grant feedback will recalculate the memory grant. Consecutive executions will then request less memory. For an insufficiently sized memory grant that results in a spill to disk, memory grant feedback will trigger a recalculation of the memory grant. Consecutive executions will then request more memory. This feature is enabled by default under database compatibility level 150.
- Approximate COUNT DISTINCT returns the approximate number of unique non-null values in a group. This function is designed for use in big data scenarios. It is optimized for queries where all the following conditions are true:
- Accesses data sets of at least millions of rows.
- Aggregates a column or columns that have a large number of distinct values.
- Responsiveness is more critical than absolute precision.
`APPROX_COUNT_DISTINCT` returns results that are typically within 2% of the precise answer, and it returns the approximate answer in a small fraction of the time needed for the precise answer.
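A minimal usage sketch, with an illustrative table and column name:

```sql
-- Approximate number of distinct order keys; trades a small error margin for speed.
SELECT APPROX_COUNT_DISTINCT(O_OrderKey) AS approx_distinct_orders
FROM dbo.Orders;
```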
- Batch mode on rowstore no longer requires a columnstore index to process a query in batch mode. Batch mode allows query operators to work on a set of rows, instead of just one row at a time. This feature is enabled by default under database compatibility level 150. Batch mode improves the speed of queries that access rowstore tables when all the following are true:
- The query uses analytic operators such as joins or aggregation operators.
- The query involves 100,000 or more rows.
- The query is CPU bound, rather than input/output data bound.
- Creation and use of a columnstore index would have one of the following drawbacks:
  - It would add too much overhead to the query.
  - It is not feasible, because your application depends on a feature that is not yet supported with columnstore indexes.
- Table variable deferred compilation improves plan quality and overall performance for queries referencing table variables. During optimization and initial compilation, this feature will propagate cardinality estimates that are based on actual table variable row counts. This accurate row count information will be used for optimizing downstream plan operations. This feature is enabled by default under database compatibility level 150.
To use intelligent query processing features, set database COMPATIBILITY_LEVEL = 150.
- Java language extension (preview): Use the Java language extension to execute Java code in [!INCLUDEssNoVersion]. In [!INCLUDEsql-server-2019], this extension is installed when you add the feature 'Machine Learning Services (in-database)' to your [!INCLUDEssNoVersion] instance.
- Use derived table or view aliases in graph match query (CTP 2.1): Graph queries on [!INCLUDEsql-server-2019] preview support using view and derived table aliases in the `MATCH` syntax. To use these aliases in `MATCH`, the views and derived tables must be created on either a set of node tables or a set of edge tables, using the `UNION ALL` operator. The node or edge tables may or may not have filters on them. The ability to use derived table and view aliases in `MATCH` queries is very useful in scenarios where you are looking to query heterogeneous entities or heterogeneous connections between two or more entities in your graph.
- Match support in `MERGE` DML (CTP 2.0) allows you to specify graph relationships in a single statement, instead of separate `INSERT`, `UPDATE`, or `DELETE` statements. Merge your current graph data from node or edge tables with new data by using the `MATCH` predicates in the `MERGE` statement. This feature enables `UPSERT` scenarios on edge tables. Users can now use a single merge statement to insert a new edge or update an existing one between two nodes.
- Edge constraints (CTP 2.0) are introduced for edge tables in SQL Graph. Edge tables can connect any node to any other node in the database. With the introduction of edge constraints, you can now apply restrictions on this behavior. The new `CONNECTION` constraint can be used to specify the types of nodes a given edge table is allowed to connect in the schema. (CTP 2.3) Extending this feature further, you can define cascaded delete actions on an edge constraint, defining the actions that the database engine takes when a user deletes the nodes that a given edge connects.
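The following sketch shows an edge constraint with a cascaded delete action; the node and edge table names are illustrative.

```sql
CREATE TABLE Customer (ID INT PRIMARY KEY) AS NODE;
CREATE TABLE Product  (ID INT PRIMARY KEY) AS NODE;

-- The CONNECTION constraint restricts this edge to Customer -> Product connections;
-- ON DELETE CASCADE (CTP 2.3) deletes connected edges when a node is deleted.
CREATE TABLE bought
(
    CONSTRAINT EC_BOUGHT CONNECTION (Customer TO Product) ON DELETE CASCADE
) AS EDGE;
```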
- Database scoped default setting for online and resumable DDL operations allows a default behavior setting for `ONLINE` and `RESUMABLE` index operations at the database level, rather than defining these options for each individual index DDL statement such as index create or rebuild.
- Set these defaults by using the `ELEVATE_ONLINE` and `ELEVATE_RESUMABLE` database scoped configuration options. Both options cause the engine to automatically elevate supported operations to online or resumable execution. You can enable the following behaviors with these options:
  - `FAIL_UNSUPPORTED` elevates all index operations to online or resumable, and fails index operations that do not support online or resumable execution.
  - `WHEN_SUPPORTED` elevates supported operations to online or resumable, and runs unsupported index operations offline or non-resumable.
  - `OFF` keeps the current behavior of executing all index operations offline and non-resumable unless explicitly specified in the DDL statement.
To override the default setting, include the ONLINE or RESUMABLE option in the index create and rebuild commands.
Without this feature, you have to specify the online and resumable options directly in the index DDL statement such as index create and rebuild.
For more information on index resumable operations, see Resumable Online Index Create.
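Setting the database scoped defaults can be sketched as:

```sql
-- Elevate all index DDL to online execution; fail operations that do not support it.
ALTER DATABASE SCOPED CONFIGURATION SET ELEVATE_ONLINE = FAIL_UNSUPPORTED;

-- Elevate index DDL to resumable execution where supported; run the rest normally.
ALTER DATABASE SCOPED CONFIGURATION SET ELEVATE_RESUMABLE = WHEN_SUPPORTED;
```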
- Up to five synchronous replicas: [!INCLUDEsql-server-2019] increases the maximum number of synchronous replicas to 5, up from 3 in [!INCLUDEssSQL17]. You can configure this group of five replicas to have automatic failover within the group. There is one primary replica, plus four synchronous secondary replicas.
- Secondary-to-primary replica connection redirection: Allows client application connections to be directed to the primary replica regardless of the target server specified in the connection string. This capability allows connection redirection without a listener. Use secondary-to-primary replica connection redirection in the following cases:
  - The cluster technology does not offer a listener capability.
  - A multi-subnet configuration where redirection becomes complex.
  - Read scale-out or disaster recovery scenarios where the cluster type is `NONE`.
For details, see Secondary to primary replica read/write connection redirection (Always On Availability Groups).
Data discovery and classification provides advanced capabilities that are natively built into [!INCLUDEssNoVersion]. Classifying and labeling your most sensitive data provides the following benefits:
- Helps meet data privacy standards and regulatory compliance requirements.
- Supports security scenarios, such as monitoring (auditing), and alerting on anomalous access to sensitive data.
- Makes it easier to identify where sensitive data resides in the enterprise, so that administrators can take the right steps to secure the database.
For more information, see SQL Data Discovery and Classification.
Auditing has also been enhanced to include a new field in the audit log called data_sensitivity_information, which logs the sensitivity classifications (labels) of the actual data that was returned by the query. For details and examples, see Add sensitivity classification.
Note
There are no changes in terms of how audit is enabled. The new audit field data_sensitivity_information records the sensitivity classifications (labels) of the data returned by a query. See Auditing access to sensitive data.
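Adding a classification can be sketched as follows; the table and column names, label, and information type are illustrative.

```sql
ADD SENSITIVITY CLASSIFICATION TO
    dbo.Customers.CreditCardNumber
WITH (LABEL = 'Highly Confidential', INFORMATION_TYPE = 'Financial');
```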
Any [!INCLUDEssNoVersion] file that is placed on a persistent memory device can now operate in enlightened mode. [!INCLUDEssNoVersion] directly accesses the device, bypassing the storage stack of the operating system, using efficient memcpy operations. This mode improves performance because it allows low-latency input/output against such devices. Examples of [!INCLUDEssNoVersion] files include:
- Database files
- Transaction log files
- In-Memory OLTP checkpoint files
Persistent memory is also known as storage class memory, and is occasionally referred to informally as pmem on some non-Microsoft websites.
Note
For this preview release, enlightenment of files on persistent memory devices is only available on Linux. [!INCLUDEssNoVersion] on Windows supports persistent memory devices starting with [!INCLUDEssSQL15].
DBCC CLONEDATABASE creates a schema-only copy of a database that includes all the elements necessary to troubleshoot query performance issues without copying the data. In previous versions of [!INCLUDEssNoVersion], the command did not copy the statistics necessary to accurately troubleshoot columnstore index queries, and manual steps were required to capture this information. Now in [!INCLUDEsql-server-2019], DBCC CLONEDATABASE automatically captures the stats blobs for columnstore indexes, so no manual steps are required.
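A minimal sketch, using an illustrative database name:

```sql
-- Create a schema-only clone that now includes columnstore index statistics blobs.
DBCC CLONEDATABASE (AdventureWorks, AdventureWorks_Clone);
```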
sp_estimate_data_compression_savings returns the current size of the requested object and estimates the object size for the requested compression state. Currently this procedure supports three options: NONE, ROW, and PAGE. [!INCLUDEsql-server-2019] introduces two new options: COLUMNSTORE and COLUMNSTORE_ARCHIVE. These new options will allow you to estimate the space savings if a columnstore index is created on the table using either standard or archive columnstore compression.
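Estimating columnstore savings can be sketched as follows; the schema and table names are illustrative.

```sql
EXEC sp_estimate_data_compression_savings
    @schema_name = 'dbo',
    @object_name = 'Orders',
    @index_id = NULL,
    @partition_number = NULL,
    @data_compression = 'COLUMNSTORE';
```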
- Partition-based modeling: Process external scripts per partition of your data using the new parameters added to `sp_execute_external_script`. This functionality supports training many small models (one model per partition of data) instead of one large model.
- Windows Server Failover Cluster: Configure high availability for Machine Learning Services on a Windows Server Failover Cluster.
For detailed information, see What's new in SQL Server Machine Learning Services.
The lightweight query profiling infrastructure (LWP) provides query performance data more efficiently than standard profiling mechanisms. Introduced in [!INCLUDEssSQL15] SP1, lightweight profiling is now enabled by default. It offers a query execution statistics collection mechanism with an expected overhead of 2% CPU, compared with an overhead of up to 75% CPU for the standard query profiling mechanism. On previous versions, it was OFF by default, and database administrators could enable it with trace flag 7412.
For more information on lightweight profiling, see Query Profiling Infrastructure.
Starting with CTP 2.3, a new database scoped configuration, LIGHTWEIGHT_QUERY_PROFILING, is introduced to enable or disable the lightweight query profiling infrastructure.
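Toggling the database scoped configuration can be sketched as:

```sql
-- Disable lightweight query profiling for the current database.
ALTER DATABASE SCOPED CONFIGURATION SET LIGHTWEIGHT_QUERY_PROFILING = OFF;

-- Re-enable it (the default under this release).
ALTER DATABASE SCOPED CONFIGURATION SET LIGHTWEIGHT_QUERY_PROFILING = ON;
```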
- New connectors for [!INCLUDEssNoVersion], Oracle, Teradata, and MongoDB: [!INCLUDEsql-server-2019] introduces new connectors to external data for [!INCLUDEssNoVersion], Oracle, Teradata, and MongoDB.
sys.dm_db_page_info(database_id, file_id, page_id, mode) returns information about a page in a database. The function returns a row that contains the header information from the page, including the object_id, index_id, and partition_id. This function replaces the need to use DBCC PAGE in most cases.
In order to facilitate troubleshooting of page-related waits, a new column called page_resource was also added to sys.dm_exec_requests and sys.sysprocesses. This new column allows you to join sys.dm_db_page_info to these views via another new system function - sys.fn_PageResCracker. See the following script as an example:
```sql
SELECT page_info.*
FROM sys.dm_exec_requests AS d
CROSS APPLY sys.fn_PageResCracker(d.page_resource) AS r
CROSS APPLY sys.dm_db_page_info(r.db_id, r.file_id, r.page_id, 'DETAILED') AS page_info;
```

- Always On Availability Group on Docker containers with Kubernetes (CTP 2.2): Kubernetes can orchestrate containers running [!INCLUDEssNoVersion] instances to provide a highly available set of databases with SQL Server Always On Availability Groups. A Kubernetes operator deploys a StatefulSet that includes a container running the mssql-server image and a health monitor.
- New container registry (CTP 2.1): All container images for [!INCLUDEsql-server-2019] as well as [!INCLUDEssSQL17] are now located in the Microsoft Container Registry. Microsoft Container Registry is the official container registry for the distribution of Microsoft product containers. In addition, certified RHEL-based images are now published.
  - Microsoft Container Registry: `mcr.microsoft.com/mssql/server:vNext-CTP2.0`
  - Certified RHEL-based container images: `mcr.microsoft.com/mssql/rhel/server:vNext-CTP2.0`
- Replication support (CTP 2.0): [!INCLUDEsql-server-2019] supports SQL Server Replication on Linux. A Linux virtual machine with SQL Agent can be a publisher, distributor, or subscriber.
Create the following types of publications:
- Transactional
- Snapshot
- Merge
Configure replication with [!INCLUDEssManStudioFull] or by using replication stored procedures.
- Support for the Microsoft Distributed Transaction Coordinator (MSDTC) (CTP 2.0): [!INCLUDEsql-server-2019] on Linux supports the Microsoft Distributed Transaction Coordinator (MSDTC). For details, see How to configure MSDTC on Linux.
- OpenLDAP support for third-party AD providers (CTP 2.0): [!INCLUDEsql-server-2019] on Linux supports OpenLDAP, which allows third-party providers to join Active Directory.
- Machine Learning on Linux (CTP 2.0): [!INCLUDEsql-server-2019] Machine Learning Services (In-Database) is now supported on Linux. Support includes the `sp_execute_external_script` stored procedure. For instructions on how to install Machine Learning Services on Linux, see [Install [!INCLUDEsql-server-2019] Machine Learning Services R and Python support on Linux](../linux/sql-server-linux-setup-machine-learning.md).
- Silverlight controls replaced with HTML (CTP 2.0): The Master Data Services (MDS) portal no longer depends on Silverlight. All the former Silverlight components have been replaced with HTML controls.
- Certificate management in SQL Server Configuration Manager (CTP 2.0): SSL/TLS certificates are widely used to secure access to SQL Server instances. Certificate management is now integrated into SQL Server Configuration Manager, simplifying common tasks such as:
  - Viewing and validating certificates installed in a [!INCLUDEssNoVersion] instance.
  - Viewing certificates close to expiration.
  - Deploying certificates across machines participating in Always On Availability Groups (from the node holding the primary replica).
  - Deploying certificates across machines participating in a failover cluster instance (from the active node).
  [!NOTE] The user must have administrator permissions on all the cluster nodes.
- Azure Data Studio: Previously released under the preview name SQL Operations Studio, Azure Data Studio is a lightweight, modern, open source, cross-platform desktop tool for the most common tasks in data development and administration. With Azure Data Studio and the [[!INCLUDEsql-server-2019] Preview extension](../azure-data-studio/sql-server-2019-extension.md) you can connect to SQL Server on premises and in the cloud on Windows, macOS, and Linux. Azure Data Studio allows you to:
- AAD is now supported. (CTP 2.3)
- Notebook view UI has moved into Azure Data Studio core. (CTP 2.3)
- Added new wizard to create external data sources from Hadoop Distributed File System (HDFS) to SQL Server Big Data Cluster. (CTP 2.3)
- Improved Notebook viewer UI. (CTP 2.3)
- Added new Notebook APIs. (CTP 2.3)
- Added "Reinstall Notebook dependencies" command to assist with Python package updates. (CTP 2.3)
- Connect and manage [!INCLUDEsql-server-2019] big data clusters. (CTP 2.1)
- Edit and run queries in a modern development environment with lightning fast Intellisense, code snippets, and source control integration. (CTP 2.0)
- Quickly visualize data with built-in charting of your result sets. (CTP 2.0)
- Create custom dashboards for your servers and databases using customizable widgets. (CTP 2.0)
- Easily manage your broader environment with the built-in terminal. (CTP 2.0)
- Analyze data in an integrated notebook experience built on Jupyter. (CTP 2.0)
- Enhance your experience with custom theming and extensions. (CTP 2.0)
- Explore your Azure resources with a built-in subscription and resource browser. (CTP 2.0)
- Supports scenarios using SQL Server big data cluster. (CTP 2.0)
Tip
For the latest improvements to Azure Data Studio, see the Azure Data Studio release notes.
- SQL Server Management Studio (SSMS) 18.0 (preview): Supports [!INCLUDEsql-server-2019].
- Launch Azure Data Studio from SSMS. (CTP 2.3)
- Support for Always Encrypted with secure enclaves. (CTP 2.0)
- Smaller download size. (CTP 2.0)
- Now based on the Visual Studio 2017 Isolated Shell. (CTP 2.0)
- For a complete list, see the SSMS changelog. (CTP 2.0)
- SQL Server PowerShell module: The SqlServer PowerShell module allows SQL Server developers, admins, and BI professionals to automate database deployment and server administration.
- Upgrade from 21.0 to 21.1 to support SMO v150.
- Updated SQL Server provider (SQLRegistration) to display AS/IS/RS groups.
- Fixed issue in `New-SqlAvailabilityGroup` cmdlet when targeting SQL Server 2014.
- Added `-LoadBalancedReadOnlyRoutingList` parameter to `Set-SqlAvailabilityReplica` and `New-SqlAvailabilityReplica`.
- Updated `AnalysisService` cmdlets to use the cached login token from `Login-AzureAsAccount` for Azure Analysis Services.
[!INCLUDEssNoVersion] Analysis Services (SSAS)
This feature allows many-to-many relationships between tables where both columns are non-unique. A relationship can be defined between a dimension and fact table at a granularity higher than the key column of the dimension. This avoids having to normalize dimension tables and can improve the user experience because the resulting model has a smaller number of tables with logically grouped columns. For this CTP 2.4 release, many-to-many relationships are engine-only features.
Many-to-many relationships require models be at the 1470 compatibility level, which is currently supported only in [!INCLUDEsql-server-2019] CTP 2.3 and later. For this CTP 2.4 release, many-to-many relationships can be created by using the Tabular Object Model (TOM) API, Tabular Model Scripting Language (TMSL), and the open-source Tabular Editor tool. Support in SQL Server Data Tools (SSDT) will be included in a future release, as will documentation. Additional information for this and other CTP feature releases will be provided in the Analysis Services blog.
The memory settings described here are already available in Azure Analysis Services. Beginning with CTP 2.4, they are now also supported by [!INCLUDEsql-server-2019] Analysis Services.
- Memory\QueryMemoryLimit - This memory property can be used to limit memory spools built by DAX queries submitted to the model.
- DbpropMsmdRequestMemoryLimit - This XMLA property can be used to override the Memory\QueryMemoryLimit server property value for a connection.
- OLAP\Query\RowsetSerializationLimit - This server property limits the number of rows returned in a rowset, protecting server resources from extensive data export usage. This property applies to both DAX and MDX queries.
These properties can be set by using the latest version of SQL Server Management Studio (SSMS). Additional information for this feature will be provided in the Analysis Services blog.
Calculation groups address a common issue in complex models where there can be a proliferation of measures using the same calculations, such as time-intelligence. Calculation groups are shown in reporting clients as a table with a single column. Each value in the column represents a reusable calculation, or calculation item, that can be applied to any of the measures.
A calculation group can have any number of calculation items. Each calculation item is defined by a DAX expression. Three new DAX functions are introduced to work with calculation groups:
- `SELECTEDMEASURE()` - Returns a reference to the measure currently in context.
- `SELECTEDMEASURENAME()` - Returns a string containing the name of the measure currently in context.
- `ISSELECTEDMEASURE(M1, M2, …)` - Returns a Boolean indicating whether the measure currently in context is one of those specified as an argument.
In addition to new DAX functions, two new dynamic management views are introduced:
- `TMSCHEMA_CALCULATION_GROUPS`
- `TMSCHEMA_CALCULATION_ITEMS`
- The `ALLSELECTED` DAX function is not yet supported.
- Row-level security defined on the calculation-group table is not yet supported.
- Object-level security defined on the calculation-group table is not yet supported.
- Detail Rows expressions referring to calculation items are not yet supported.
- MDX is not yet supported.
- The presence of calculation groups in a model may cause measures to return variant data types, which can cause refresh failures for calculated columns and tables that refer to measures.
Calculation groups require models be at the 1470 compatibility level, which is currently supported only in [!INCLUDEsql-server-2019] CTP 2.3 and later. At this time, calculation groups can be created by using the Tabular Object Model (TOM) API, Tabular Model Scripting Language (TMSL), and the open-source Tabular Editor tool. Support in SQL Server Data Tools (SSDT) will be included in a future release, as will documentation. Additional information for this and other CTP feature releases will be provided in the Analysis Services blog.
- [[!INCLUDEsql-server-2019] Release Notes](sql-server-ver15-release-notes.md)
- [Microsoft [!INCLUDEsql-server-2019]: Technical white paper](http://info.microsoft.com/rs/157-GQE-382/images/EN-US-CNTNT-white-paper-DBMod-Microsoft-SQL-Server-2019-Technical-white-paper.pdf) - Published in September 2018. Applies to Microsoft [!INCLUDEsql-server-2019] CTP 2.0 for Windows, Linux, and Docker containers.
[!INCLUDEget-help-options]