title: Updated - Relational Databases docs | Microsoft Docs
description: Displays snippets of updated content from recently changed articles in the Relational Databases documentation.
manager: craigg
author: MightyPen
ms.author: genemi
ms.topic: article
ms.custom: UpdArt.exe
ms.suite: sql
ms.technology: release-landing
ms.prod: sql
ms.prod_service: sql-non-specified
ms.component: relational-databases
ms.date: 04/28/2018

New and Recently Updated: Relational Databases docs

Nearly every day Microsoft updates some of its existing articles on its Docs.Microsoft.com documentation website. This article displays excerpts from recently updated articles. Links to new articles might also be listed.

This article is generated by a program that is rerun periodically. Occasionally an excerpt can appear with imperfect formatting, or as markdown from the source article. Images are never displayed here.

Recent updates are reported for the following date range and subject:

  • Date range of updates: 2018-02-03 to 2018-04-28
  • Subject area: Relational Databases

 

New Articles Created Recently

The following links jump to new articles that have been added recently.

  1. Joins (SQL Server)
  2. Subqueries (SQL Server)
  3. Set up replication distribution database in Always On availability group
  4. SQL Data Discovery and Classification
  5. Transaction Locking and Row Versioning Guide
  6. sys.dm_os_job_object (Azure SQL Database)
  7. Filestream and FileTable system stored procedures (Transact-SQL)

 

Updated Articles with Excerpts

This section displays the excerpts of updates gathered from articles that have recently experienced a large update.

The excerpts displayed here appear separated from their proper semantic context. Sometimes an excerpt is also separated from important markdown syntax that surrounds it in the actual article. Therefore these excerpts are for general guidance only; they only let you decide whether your interests warrant taking the time to click through and visit the actual article.

For these and other reasons, do not copy code from these excerpts, and do not take any text excerpt as exact truth. Instead, visit the actual article.

 

This compact list provides links to all the updated articles that are listed in the Excerpts section.

  1. Use a Format File to Skip a Table Column (SQL Server)
  2. JSON data in SQL Server
  3. Query Processing Architecture Guide
  4. Tutorial: Prepare SQL Server For Replication - Publisher, Distributor, Subscriber
  5. Tutorial: Configure Replication between Two Fully Connected Servers (Transactional)
  6. Tutorial: Configure Replication between a Server and Mobile Clients (Merge)
  7. Query with Full-Text Search
  8. Transparent Data Encryption with Bring Your Own Key support for Azure SQL Database and Data Warehouse
  9. PowerShell and CLI: Enable Transparent Data Encryption using your own key from Azure Key Vault
  10. About Change Data Capture (SQL Server)

 

 

Updated: 2018-04-13           (Next)

 

Using OPENROWSET(BULK...)

To use an XML format file to skip a table column by using OPENROWSET(BULK...), you have to provide an explicit list of columns in the select list and also in the target table, as follows:

    INSERT ...<column_list> SELECT <column_list> FROM OPENROWSET(BULK...)

The following example uses the OPENROWSET bulk rowset provider and the myTestSkipCol2.xml format file. The example bulk imports the myTestSkipCol2.dat data file into the myTestSkipCol table. The statement contains an explicit list of columns in the select list and also in the target table, as required.

In SSMS, run the following code. Update the file system paths for the location of the sample files on your computer.

USE WideWorldImporters;
GO
INSERT INTO myTestSkipCol
    (Col1, Col3)
SELECT Col1, Col3
FROM OPENROWSET(BULK 'C:\myTestSkipCol2.Dat',
         FORMATFILE = 'C:\myTestSkipCol2.Xml'
     ) AS t1;
GO

 

 


Updated: 2018-04-13           (Previous | Next)

 

JSON documents may have sub-elements and hierarchical data that cannot be directly mapped into standard relational columns. In this case, you can flatten the JSON hierarchy by joining the parent entity with its sub-arrays.

In the following example, the second object in the array has a sub-array representing person skills. Each sub-array can be parsed by an additional OPENJSON function call:

DECLARE @json NVARCHAR(MAX)
SET @json =
N'[
       { "id" : 2,"info": { "name": "John", "surname": "Smith" }, "age": 25 },
       { "id" : 5,"info": { "name": "Jane", "surname": "Smith", "skills": ["SQL", "C#", "Azure"] }, "dob": "2005-11-04T12:00:00" }
 ]'

SELECT *
FROM OPENJSON(@json)
  WITH (id int 'strict $.id',
        firstName nvarchar(50) '$.info.name', lastName nvarchar(50) '$.info.surname',
        age int, dateOfBirth datetime2 '$.dob',
        skills nvarchar(max) '$.skills' AS JSON) AS a
  OUTER APPLY OPENJSON(a.skills)
        WITH (skill nvarchar(8) '$') AS b;

The skills array is returned by the first OPENJSON as the original JSON text fragment and passed to another OPENJSON function by the APPLY operator. The second OPENJSON function parses the JSON array and returns the string values as a single-column rowset that is joined with the result of the first OPENJSON. The result of this query is shown in the following table:

Results

id   firstName   lastName   age   dateOfBirth           skill
2    John        Smith      25
5    Jane        Smith            2005-11-04T12:00:00   SQL
5    Jane        Smith            2005-11-04T12:00:00   C#
5    Jane        Smith            2005-11-04T12:00:00   Azure

OUTER APPLY OPENJSON joins the first-level entity with the sub-array and returns a flattened result set. Because of the join, the second row is repeated for every skill.
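Because OUTER APPLY preserves parent rows, the row for id 2 (which has no skills array) still appears, with NULL in the skill column. As a sketch, the same query shape with CROSS APPLY, assuming the same @json variable as above, returns only rows that have at least one skill:

    -- Sketch only: CROSS APPLY drops parent rows whose skills array is missing,
    -- so the row for id = 2 would not appear in the output.
    SELECT a.id, b.skill
    FROM OPENJSON(@json)
      WITH (id int 'strict $.id',
            skills nvarchar(max) '$.skills' AS JSON) AS a
      CROSS APPLY OPENJSON(a.skills)
            WITH (skill nvarchar(8) '$') AS b;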

 

 


Updated: 2018-04-13           (Previous | Next)

 

Logical Operator Precedence

When more than one logical operator is used in a statement, NOT is evaluated first, then AND, and finally OR. Arithmetic and bitwise operators are evaluated before logical operators. For more information, see [Operator Precedence].

In the following example, the color condition pertains to product model 21, and not to product model 20, because AND has precedence over OR.

SELECT ProductID, ProductModelID
FROM Production.Product
WHERE ProductModelID = 20 OR ProductModelID = 21
  AND Color = 'Red';
GO

You can change the meaning of the query by adding parentheses to force evaluation of the OR first. The following query finds only products under models 20 or 21 that are red.

SELECT ProductID, ProductModelID
FROM Production.Product
WHERE (ProductModelID = 20 OR ProductModelID = 21)
  AND Color = 'Red';
GO

Using parentheses, even when they are not required, can improve the readability of queries and reduce the chance of making a subtle mistake because of operator precedence. There is no significant performance penalty in using parentheses. The following example is more readable than the original example, although they are semantically the same.

SELECT ProductID, ProductModelID
FROM Production.Product
WHERE ProductModelID = 20 OR (ProductModelID = 21
  AND Color = 'Red');
GO

 

 


Updated: 2018-04-13           (Previous | Next)

 

Note

  • Replication is not supported on SQL Server instances that are more than two versions apart. For more information, see Supported SQL Versions in Repl Topology.
  • In {Included-Content-Goes-Here} , you must connect to the Publisher and Subscriber using a login that is a member of the sysadmin fixed server role. For more information on the sysadmin role, see Server Level Roles.

Estimated time to complete this tutorial: 30 minutes

Create Windows Accounts for Replication

In this section, you will create Windows accounts to run replication agents. You will create a separate Windows account on the local server for the following agents:

Agent            Location    Account name
Snapshot Agent   Publisher   <machine_name>\repl_snapshot

 

 


Updated: 2018-04-13           (Previous | Next)

 

Create a subscription to the Transactional publication

In this section, you will add a subscriber to the publication that was previously created. This tutorial uses a remote subscriber (NODE2\SQL2016), but a subscription can also be added locally at the publisher.

To create the subscription

  1. Connect to the Publisher in {Included-Content-Goes-Here} , expand the server node, and then expand the Replication folder.

  2. In the Local Publications folder, right-click the AdvWorksProductTrans publication, and then select New Subscriptions. The New Subscription Wizard launches:

    New Subscription

  3. On the Publication page, select AdvWorksProductTrans, and then select Next:

    Select Tran Publisher

  4. On the Distribution Agent Location page, select Run all agents at the Distributor, and then select Next. For more information on pull and push subscriptions, please see Subscribe to Publications:

    Run Agents at Dist

  5. On the Subscribers page, if the name of the Subscriber instance is not displayed, select Add Subscriber and then select Add SQL Server Subscriber from the drop-down. This will launch the Connect to Server dialog box. Enter the Subscriber instance name and then select Connect.

 

 


Updated: 2018-04-13           (Previous | Next)

 

The Employee table contains a column (OrganizationNode) that has the hierarchyid data type, which is only supported for replication in SQL Server 2017. If you're using a build lower than SQL Server 2017, you'll see a message at the bottom of the screen notifying you of potential data loss from using this column in bi-directional replication. For the purpose of this tutorial, this message can be ignored. However, this data type should not be replicated in a production environment unless you're using a supported build. For more information about replicating the hierarchyid data type, see Using Hierarchyid Columns in Replication.

  • On the Filter Table Rows page, select Add and then select Add Filter.

  • In the Add Filter dialog box, select Employee (HumanResources) in Select the table to filter. Select the LoginID column, select the right arrow to add the column to the WHERE clause of the filter query, and modify the WHERE clause as follows:

    WHERE [LoginID] = HOST_NAME()
    

    a. Select A row from this table will go to only one subscription, and select OK:

    Add Filter

  • On the Filter Table Rows page, select Employee (Human Resources), select Add, and then select Add Join to Extend the Selected Filter.

    a. In the Add Join dialog box, select Sales.SalesOrderHeader under Joined table. Select Write the join statement manually, and complete the join statement as follows:
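The excerpt ends before the join statement itself. A manually written join that extends a parameterized row filter to a child table generally has the shape sketched below; the column names here are illustrative assumptions, not necessarily the ones the tutorial uses:

    -- Illustrative shape of a manual join filter extending the Employee filter
    -- to Sales.SalesOrderHeader (column names are assumptions):
    [HumanResources].[Employee].[BusinessEntityID] = [Sales].[SalesOrderHeader].[SalesPersonID]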

 

 


Updated: 2018-04-13           (Previous | Next)

 

More info about generation term searches

The inflectional forms are the different tenses and conjugations of a verb or the singular and plural forms of a noun.

For example, search for the inflectional form of the word "drive." If various rows in the table include the words "drive," "drives," "drove," "driving," and "driven," all would be in the result set because each of these can be inflectionally generated from the word drive.

[FREETEXT] and [FREETEXTTABLE] look for inflectional terms of all specified words by default. [CONTAINS] and [CONTAINSTABLE] support an optional INFLECTIONAL argument.
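As a sketch, a CONTAINS predicate with the INFLECTIONAL argument looks like the following; the table and column names (Production.Document, DocumentSummary) are illustrative, not taken from this article:

    -- Sketch: match rows containing any inflectional form of "drive"
    -- (drives, drove, driving, driven, ...). Table/column names are assumptions.
    SELECT Title
    FROM Production.Document
    WHERE CONTAINS(DocumentSummary, ' FORMSOF (INFLECTIONAL, drive) ');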

Search for synonyms of a specific word

A thesaurus defines user-specified synonyms for terms. For more info about thesaurus files, see [Configure and Manage Thesaurus Files for Full-Text Search].

For example, if an entry, "{car, automobile, truck, van}," is added to a thesaurus, you can search for the thesaurus form of the word "car." All rows in the table queried that include the words "automobile," "truck," "van," or "car," appear in the result set because each of these words belongs to the synonym expansion set containing the word "car."

[FREETEXT] and [FREETEXTTABLE] use the thesaurus by default. [CONTAINS] and [CONTAINSTABLE] support an optional THESAURUS argument.
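The THESAURUS argument follows the same pattern. Again, the table and column names below are illustrative, and the query assumes a thesaurus entry like the "{car, automobile, truck, van}" example above:

    -- Sketch: match rows containing "car" or any of its thesaurus synonyms.
    -- Table/column names are assumptions.
    SELECT Description
    FROM Production.ProductDescription
    WHERE CONTAINS(Description, ' FORMSOF (THESAURUS, car) ');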

 

 


Updated: 2018-04-24           (Previous | Next)

 

How to configure Geo-DR with Azure Key Vault

To maintain high availability of TDE Protectors for encrypted databases, you must configure redundant Azure Key Vaults based on the existing or desired SQL Database failover groups or active geo-replication instances. Each geo-replicated server requires a separate key vault that must be co-located with the server in the same Azure region. If a primary database becomes inaccessible due to an outage in one region and a failover is triggered, the secondary database can take over using the secondary key vault.

For Geo-Replicated Azure SQL databases, the following Azure Key Vault configuration is required:

  • One primary database with a key vault in its region and one secondary database with a key vault in its region.
  • At least one secondary is required, up to four secondaries are supported.
  • Secondaries of secondaries (chaining) are not supported.

The following section will go over the setup and configuration steps in more detail.

Azure Key Vault Configuration Steps

  • Install PowerShell
  • Create two Azure Key Vaults in two different regions using PowerShell, so that the "soft-delete" property can be enabled on the key vaults (this option is not yet available from the Azure Key Vault portal, but it is required by SQL Database).
  • Both Azure Key Vaults must be located in the two regions available in the same Azure Geo in order for backup and restore of keys to work. If you need the two key vaults to be located in different geos to meet SQL Geo-DR requirements, follow the BYOK process, which allows keys to be imported from an on-premises HSM.

 

 


Updated: 2018-04-24           (Previous | Next)

 

Prerequisites for CLI

Step: Create a server and assign an Azure AD identity to your server

  # create server (with identity) and database

 

 


Updated: 2018-04-17           (Previous)

 

Working with database and table collation differences

It is important to be aware of a situation in which the database and the columns of a table configured for change data capture have different collations. CDC uses interim storage to populate side tables. If a table has CHAR or VARCHAR columns with collations that differ from the database collation, and those columns store non-ASCII characters (such as double-byte DBCS characters), CDC might not be able to persist the changed data consistently with the data in the base tables. This is because the interim storage variables cannot have collations associated with them.

Consider one of the following approaches to ensure that captured change data is consistent with the base tables:

  • Use NCHAR or NVARCHAR data type for columns containing non-ASCII data.

  • Or, use the same collation for the columns and for the database.

For example, if you have one database that uses a collation of SQL_Latin1_General_CP1_CI_AS, consider the following table:

CREATE TABLE T1(
     C1 INT PRIMARY KEY,
     C2 VARCHAR(10) COLLATE Chinese_PRC_CI_AI
);

CDC might fail to capture the binary data for column C2, because its collation is different (Chinese_PRC_CI_AI). Use NVARCHAR to avoid this problem:

CREATE TABLE T1(
     C1 INT PRIMARY KEY,
     C2 NVARCHAR(10) COLLATE Chinese_PRC_CI_AI -- Unicode data type; CDC works well with this type
);

Similar articles about new or updated articles

This section lists similar articles about recently updated articles in other subject areas, within our public GitHub.com repository: MicrosoftDocs/sql-docs.

Subject areas that do have new or recently updated articles

Subject areas that do not have any new or recently updated articles