
Commit d4b1a4f

2 parents 4bf0b9b + 83f6eb3 commit d4b1a4f

9 files changed

Lines changed: 53 additions & 48 deletions

docs/database-engine/log-shipping/log-shipping-and-replication-sql-server.md

Lines changed: 4 additions & 6 deletions
@@ -3,7 +3,7 @@ title: "Log Shipping and Replication (SQL Server)"
 description: Learn how log shipping applies the transaction log from every insertion, update, or deletion made on the primary database to the secondary database.
 author: MikeRayMSFT
 ms.author: mikeray
-ms.date: "03/14/2017"
+ms.date: "04/11/2023"
 ms.service: sql
 ms.subservice: log-shipping
 ms.topic: conceptual
@@ -24,7 +24,7 @@ helpviewer_keywords:
 For information about recovering databases involved in replication without any need to reconfigure replication, see [Back Up and Restore Replicated Databases](../../relational-databases/replication/administration/back-up-and-restore-replicated-databases.md).

 > [!NOTE]
-> We recommend using database mirroring, rather than log shipping, to provide availability for the publication database. For more information, see [Database Mirroring and Replication (SQL Server)](../../database-engine/database-mirroring/database-mirroring-and-replication-sql-server.md).
+> Use Always On availability groups, rather than log shipping, to provide availability for the publication database. For more information, see [Configure replication with Always On availability groups](../availability-groups/windows/configure-replication-for-always-on-availability-groups-sql-server.md).

 ## Requirements and Procedures for Replicating from the Secondary If the Primary Is Lost
 Be aware of the following requirements and considerations:
@@ -107,7 +107,5 @@ helpviewer_keywords:

 ## See Also
 [SQL Server Replication](../../relational-databases/replication/sql-server-replication.md)
-[About Log Shipping (SQL Server)](../../database-engine/log-shipping/about-log-shipping-sql-server.md)
-[Database Mirroring and Replication (SQL Server)](../../database-engine/database-mirroring/database-mirroring-and-replication-sql-server.md)
-
-
+[About Log Shipping (SQL Server)](../../database-engine/log-shipping/about-log-shipping-sql-server.md)
+[Configure replication with Always On availability groups](../availability-groups/windows/configure-replication-for-always-on-availability-groups-sql-server.md)

docs/relational-databases/native-client-ole-db-transactions/supporting-distributed-transactions.md

Lines changed: 2 additions & 1 deletion
@@ -133,6 +133,7 @@ if (FAILED(pITransactionJoin->JoinTransaction(
 ```

 ## See Also
-[Transactions](../../relational-databases/native-client-ole-db-transactions/transactions.md)
+[Transactions](../../relational-databases/native-client-ole-db-transactions/transactions.md)
+[MS DTC for Azure SQL Managed Instance](https://learn.microsoft.com/azure/azure-sql/managed-instance/distributed-transaction-coordinator-dtc)

docs/relational-databases/polybase/polybase-configure-s3-compatible.md

Lines changed: 1 addition & 1 deletion
@@ -70,7 +70,7 @@ The following sample script creates a database scoped credential `s3-dc` in the
 ```sql
 USE [database_name];
 GO
-IF NOT EXISTS(SELECT * FROM sys.credentials WHERE name = 's3_dc')
+IF NOT EXISTS(SELECT * FROM sys.database_scoped_credentials WHERE name = 's3_dc')
 BEGIN
 CREATE DATABASE SCOPED CREDENTIAL s3_dc
 WITH IDENTITY = 'S3 Access Key',
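The hunk above cuts off mid-statement. For context, a complete credential-creation sketch in the same shape might look as follows; the database name, credential name handling, and key values are placeholders, not taken from the commit:

```sql
USE [database_name];
GO
-- Guard against re-creating the credential; the commit's fix checks
-- sys.database_scoped_credentials (database-scoped), not sys.credentials (server-scoped).
IF NOT EXISTS (SELECT * FROM sys.database_scoped_credentials WHERE name = 's3_dc')
BEGIN
    CREATE DATABASE SCOPED CREDENTIAL s3_dc
    WITH IDENTITY = 'S3 Access Key',
    -- Placeholder secret: PolyBase S3 credentials take the form '<access_key_id>:<secret_key>'.
    SECRET = '<access_key_id>:<secret_key>';
END
```

The corrected `IF NOT EXISTS` check matters because database scoped credentials are cataloged per database, so probing the server-level `sys.credentials` view would never find them.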

docs/relational-databases/security/authentication-access/getting-started-with-database-engine-permissions.md

Lines changed: 2 additions & 2 deletions
@@ -192,11 +192,11 @@ For a graphic showing the relationships among the [!INCLUDE[ssDE](../../../inclu

 - The permissions granted to users and user-defined fixed database roles can be examined by using the `sys.database_permissions` view.

-- Database role membership can be examined by using the `sys. sys.database_role_members` view.
+- Database role membership can be examined by using the `sys.database_role_members` view.

 - Server role membership can be examined by using the `sys.server_role_members` view. This view isn't available in [!INCLUDE[ssSDS](../../../includes/sssds-md.md)].

-- For additional security related views, see [Security Catalog Views (Transact-SQL)](../../../relational-databases/system-catalog-views/security-catalog-views-transact-sql.md) .
+- For additional security related views, see [Security Catalog Views (Transact-SQL)](../../../relational-databases/system-catalog-views/security-catalog-views-transact-sql.md).

 ## Examples
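As a quick illustration of the corrected view name, database role membership can be resolved to principal names by joining `sys.database_role_members` to `sys.database_principals` twice — a standard catalog-view query, not part of this commit:

```sql
SELECT r.name AS role_name,
       m.name AS member_name
FROM sys.database_role_members AS drm
JOIN sys.database_principals AS r
    ON drm.role_principal_id = r.principal_id
JOIN sys.database_principals AS m
    ON drm.member_principal_id = m.principal_id
ORDER BY r.name, m.name;
```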

docs/relational-databases/system-catalog-views/sys-server-principals-transact-sql.md

Lines changed: 8 additions & 20 deletions
@@ -27,9 +27,9 @@ monikerRange: ">=aps-pdw-2016||>=sql-server-2016||>=sql-server-linux-2017||=azur
 |-----------------|---------------|-----------------|
 |**name**|**sysname**|Name of the principal. Is unique within a server.|
 |**principal_id**|**int**|ID number of the Principal. Is unique within a server.|
-|**sid**|**varbinary(85)**|SID (Security-IDentifier) of the principal. If Windows principal, then it matches Windows SID.|
-|**type**|**char(1)**|Principal type:<br /><br /> S = SQL login<br /><br /> U = Windows login<br /><br /> G = Windows group<br /><br /> R = Server role<br /><br /> C = Login mapped to a certificate<br /><br /> E = External Login from Azure Active Directory<br /><br /> X = External group from Azure Active Directory group or applications<br /><br /> K = Login mapped to an asymmetric key|
-|**type_desc**|**nvarchar(60)**|Description of the principal type:<br /><br /> SQL_LOGIN<br /><br /> WINDOWS_LOGIN<br /><br /> WINDOWS_GROUP<br /><br /> SERVER_ROLE<br /><br /> CERTIFICATE_MAPPED_LOGIN<br /><br /> EXTERNAL_LOGIN<br /><br /> EXTERNAL_GROUP<br /><br /> ASYMMETRIC_KEY_MAPPED_LOGIN|
+|**sid**|**varbinary(85)**|SID (Security-IDentifier) of the principal.|
+|**type**|**char(1)**|Principal type:<br /><br /> S = SQL login<br /> R = Server role<br /><br /> E = External Login from Azure Active Directory<br /><br /> X = External group from Azure Active Directory group or applications<br />|
+|**type_desc**|**nvarchar(60)**|Description of the principal type:<br /><br /> SQL_LOGIN<br /><br /> SERVER_ROLE<br /><br /> EXTERNAL_LOGIN<br /><br /> EXTERNAL_GROUP<br />|
 |**is_disabled**|**int**|1 = Login is disabled.|
 |**create_date**|**datetime**|Time at which the principal was created.|
 |**modify_date**|**datetime**|Time at which the principal definition was last modified.|
@@ -40,26 +40,14 @@ monikerRange: ">=aps-pdw-2016||>=sql-server-2016||>=sql-server-linux-2017||=azur
 |**is_fixed_role**|**bit**|Returns 1 if the principal is one of the built-in server roles with fixed permissions. For more information, see [Server-Level Roles](../../relational-databases/security/authentication-access/server-level-roles.md).|

 ## Permissions
-Any login can see their own login name, the system logins, and the fixed server roles. To see other logins, requires ALTER ANY LOGIN, or a permission on the login. To see user-defined server roles, requires ALTER ANY SERVER ROLE, or membership in the role.
-
-Azure SQL Database: only members of the server role **##MS_LoginManager##** or the special database role loginmanager in `master` or the Azure AD admin and server sdmin can see all logins.
-
+Any login can see their own login name, the system logins, and the fixed server roles. Only members of the server role **##MS_LoginManager##** or the special database role loginmanager in `master` or the Azure AD admin and server Admin can see all logins.
+

 [!INCLUDE[ssCatViewPerm](../../includes/sscatviewperm-md.md)] For more information, see [Metadata Visibility Configuration](../../relational-databases/security/metadata-visibility-configuration.md).
+
+> [!NOTE]
+> The permissions of fixed server roles do not appear in sys.server_permissions.

-## Examples
-The following query lists the permissions explicitly granted or denied to server principals.
-
-> [!IMPORTANT]
-> The permissions of fixed server roles (other than public) do not appear in sys.server_permissions. Therefore, server principals may have additional permissions not listed here.
-
-```
-SELECT pr.principal_id, pr.name, pr.type_desc,
-       pe.state_desc, pe.permission_name
-FROM sys.server_principals AS pr
-JOIN sys.server_permissions AS pe
-    ON pe.grantee_principal_id = pr.principal_id;
-```

 ## See Also
 [Security Catalog Views &#40;Transact-SQL&#41;](../../relational-databases/system-catalog-views/security-catalog-views-transact-sql.md)
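For reference (not part of this commit), a minimal query against the view as now documented, listing the principal types the revised table retains:

```sql
-- List server principals of the types kept in the revised table.
SELECT name, type, type_desc, is_disabled, create_date
FROM sys.server_principals
WHERE type_desc IN ('SQL_LOGIN', 'SERVER_ROLE', 'EXTERNAL_LOGIN', 'EXTERNAL_GROUP')
ORDER BY type_desc, name;
```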

docs/relational-databases/track-changes/track-data-changes-sql-server.md

Lines changed: 0 additions & 2 deletions
@@ -24,8 +24,6 @@ monikerRange: "=azuresqldb-current||>=sql-server-2016||>=sql-server-linux-2017||

 [!INCLUDE [ssnoversion-md](../../includes/ssnoversion-md.md)] provides two features that track changes to data in a database: [change data capture](#Capture) and [change tracking](#Tracking). These features enable applications to determine the DML changes (insert, update, and delete operations) that were made to user tables in a database. Change data capture and change tracking can be enabled on the same database; no special considerations are required. For the editions of [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] that support change data capture and change tracking, see [Editions and supported features of SQL Server](../../sql-server/editions-and-components-of-sql-server-2019.md).

-Change tracking is supported by [!INCLUDE[ssazure_md](../../includes/ssazure_md.md)]. Change data capture is only supported in [!INCLUDE [ssnoversion-md](../../includes/ssnoversion-md.md)] and Azure SQL Managed Instance.
-
 ## Benefits of using change data capture or change tracking

 The ability to query for data that has changed in a database is an important requirement for some applications to be efficient. Typically, to determine data changes, application developers must implement a custom tracking method in their applications by using a combination of triggers, **timestamp** columns, and additional tables. Creating these applications usually involves a lot of work to implement, leads to schema updates, and often carries a high performance overhead.
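The change tracking feature discussed in this file is enabled with `ALTER` statements; a minimal sketch, with database and table names as placeholders and illustrative retention settings:

```sql
-- Enable change tracking at the database level; retention values are illustrative.
ALTER DATABASE SalesDb
SET CHANGE_TRACKING = ON
(CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON);

-- Then opt in each table to be tracked.
ALTER TABLE dbo.Orders
ENABLE CHANGE_TRACKING
WITH (TRACK_COLUMNS_UPDATED = ON);
```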

docs/t-sql/functions/openrowset-transact-sql.md

Lines changed: 28 additions & 15 deletions
@@ -155,26 +155,27 @@ SELECT * FROM OPENROWSET(
 SINGLE_CLOB) AS DATA;
 ```

-**Applies to:** [!INCLUDE [sssql17-md](../../includes/sssql17-md.md)] CTP 1.1.
-Beginning with [!INCLUDE [sssql17-md](../../includes/sssql17-md.md)] CTP 1.1, the data_file can be in Azure Blob Storage. For examples, see [Examples of Bulk Access to Data in Azure Blob Storage](../../relational-databases/import-export/examples-of-bulk-access-to-data-in-azure-blob-storage.md).
+Beginning with [!INCLUDE [sssql17-md](../../includes/sssql17-md.md)], the data_file can be in Azure Blob Storage. For examples, see [Examples of Bulk Access to Data in Azure Blob Storage](../../relational-databases/import-export/examples-of-bulk-access-to-data-in-azure-blob-storage.md).

 > [!IMPORTANT]
 > Azure SQL Database only supports reading from Azure Blob Storage.

 #### BULK Error handling options

 ##### ERRORFILE
+
 `ERRORFILE` ='*file_name*' specifies the file used to collect rows that have formatting errors and cannot be converted to an OLE DB rowset. These rows are copied into this error file from the data file "as is."

 The error file is created at the start of the command execution. An error will be raised if the file already exists. Additionally, a control file that has the extension .ERROR.txt is created. This file references each row in the error file and provides error diagnostics. After the errors have been corrected, the data can be loaded.
-**Applies to:** [!INCLUDE [sssql17-md](../../includes/sssql17-md.md)] CTP 1.1.
+
 Beginning with [!INCLUDE [sssql17-md](../../includes/sssql17-md.md)], the `error_file_path` can be in Azure Blob Storage.

 ##### ERRORFILE_DATA_SOURCE_NAME
-**Applies to:** [!INCLUDE [sssql17-md](../../includes/sssql17-md.md)] CTP 1.1.
-Is a named external data source pointing to the Azure Blob storage location of the error file that will contain errors found during the import. The external data source must be created using the `TYPE = BLOB_STORAGE` option added in [!INCLUDE [sssql17-md](../../includes/sssql17-md.md)] CTP 1.1. For more information, see [CREATE EXTERNAL DATA SOURCE](../../t-sql/statements/create-external-data-source-transact-sql.md).
+
+Beginning with [!INCLUDE [sssql17-md](../../includes/sssql17-md.md)], is a named external data source pointing to the Azure Blob storage location of the error file that will contain errors found during the import. The external data source must be created using the `TYPE = BLOB_STORAGE`. For more information, see [CREATE EXTERNAL DATA SOURCE](../../t-sql/statements/create-external-data-source-transact-sql.md).

 ##### MAXERRORS
+
 `MAXERRORS` =*maximum_errors* specifies the maximum number of syntax errors or nonconforming rows, as defined in the format file, that can occur before OPENROWSET throws an exception. Until MAXERRORS is reached, OPENROWSET ignores each bad row, not loading it, and counts the bad row as one error.

 The default for *maximum_errors* is 10.
@@ -185,22 +186,29 @@ The default for *maximum_errors* is 10.
 #### BULK Data processing options

 ##### FIRSTROW
+
 `FIRSTROW` =*first_row*
+
 Specifies the number of the first row to load. The default is 1. This indicates the first row in the specified data file. The row numbers are determined by counting the row terminators. FIRSTROW is 1-based.

 ##### LASTROW
+
 `LASTROW` =*last_row*
+
 Specifies the number of the last row to load. The default is 0. This indicates the last row in the specified data file.

 ##### ROWS_PER_BATCH
+
 `ROWS_PER_BATCH` =*rows_per_batch*
+
 Specifies the approximate number of rows of data in the data file. This value should be of the same order as the actual number of rows.

 `OPENROWSET` always imports a data file as a single batch. However, if you specify *rows_per_batch* with a value > 0, the query processor uses the value of *rows_per_batch* as a hint for allocating resources in the query plan.

 By default, ROWS_PER_BATCH is unknown. Specifying ROWS_PER_BATCH = 0 is the same as omitting ROWS_PER_BATCH.

 ##### ORDER
+
 `ORDER` ( { *column* [ ASC | DESC ] } [ ,... *n* ] [ UNIQUE ] )
 An optional hint that specifies how the data in the data file is sorted. By default, the bulk operation assumes the data file is unordered. Performance might improve if the order specified can be exploited by the query optimizer to generate a more efficient query plan. Examples for when specifying a sort can be beneficial include the following:
@@ -210,22 +218,26 @@ An optional hint that specifies how the data in the data file is sorted. By defa
 - Using the rowset as a source table in the FROM clause of a query, where the sort and join columns match.

 ##### UNIQUE
+
 `UNIQUE` specifies that the data file does not have duplicate entries.

 If the actual rows in the data file are not sorted according to the order that is specified, or if the UNIQUE hint is specified and duplicates keys are present, an error is returned.

 Column aliases are required when ORDER is used. The column alias list must reference the derived table that is being accessed by the BULK clause. The column names that are specified in the ORDER clause refer to this column alias list. Large value types (**varchar(max)**, **nvarchar(max)**, **varbinary(max)**, and **xml**) and large object (LOB) types (**text**, **ntext**, and **image**) columns cannot be specified.

 ##### SINGLE_BLOB
+
 Returns the contents of *data_file* as a single-row, single-column rowset of type **varbinary(max)**.

 > [!IMPORTANT]
 > We recommend that you import XML data only using the SINGLE_BLOB option, rather than SINGLE_CLOB and SINGLE_NCLOB, because only SINGLE_BLOB supports all Windows encoding conversions.

 ##### SINGLE_CLOB
+
 By reading *data_file* as ASCII, returns the contents as a single-row, single-column rowset of type **varchar(max)**, using the collation of the current database.

 ##### SINGLE_NCLOB
+
 By reading *data_file* as UNICODE, returns the contents as a single-row, single-column rowset of type **nvarchar(max)**, using the collation of the current database.

 ```sql
@@ -253,9 +265,10 @@ Specifies the code page of the data in the data file. CODEPAGE is relevant only
 |*code_page*|Indicates the source code page on which the character data in the data file is encoded; for example, 850.<br /><br /> **Important** Versions prior to [!INCLUDE[sssql16-md](../../includes/sssql16-md.md)] do not support code page 65001 (UTF-8 encoding).|

 ##### FORMAT
+
 `FORMAT` **=** 'CSV'
-**Applies to:** [!INCLUDE [sssql17-md](../../includes/sssql17-md.md)] CTP 1.1.
-Specifies a comma separated values file compliant to the [RFC 4180](https://tools.ietf.org/html/rfc4180) standard.
+
+Beginning with [!INCLUDE [sssql17-md](../../includes/sssql17-md.md)], specifies a comma separated values file compliant to the [RFC 4180](https://tools.ietf.org/html/rfc4180) standard.

 ```sql
 SELECT *
@@ -266,20 +279,21 @@ FROM OPENROWSET(BULK N'D:\XChange\test-csv.csv',
 ```

 ##### FORMATFILE
+
 `FORMATFILE` ='*format_file_path*'
 Specifies the full path of a format file. [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] supports two types of format files: XML and non-XML.

 A format file is required to define column types in the result set. The only exception is when SINGLE_CLOB, SINGLE_BLOB, or SINGLE_NCLOB is specified; in which case, the format file is not required.

 For information about format files, see [Use a Format File to Bulk Import Data &#40;SQL Server&#41;](../../relational-databases/import-export/use-a-format-file-to-bulk-import-data-sql-server.md).

-**Applies to:** [!INCLUDE [sssql17-md](../../includes/sssql17-md.md)] CTP 1.1.
-Beginning with [!INCLUDE [sssql17-md](../../includes/sssql17-md.md)] CTP 1.1, the format_file_path can be in Azure Blob Storage. For examples, see [Examples of Bulk Access to Data in Azure Blob Storage](../../relational-databases/import-export/examples-of-bulk-access-to-data-in-azure-blob-storage.md).
+
+Beginning with [!INCLUDE [sssql17-md](../../includes/sssql17-md.md)], the format_file_path can be in Azure Blob Storage. For examples, see [Examples of Bulk Access to Data in Azure Blob Storage](../../relational-databases/import-export/examples-of-bulk-access-to-data-in-azure-blob-storage.md).

 ##### FIELDQUOTE
+
 `FIELDQUOTE` **=** 'field_quote'
-**Applies to:** [!INCLUDE [sssql17-md](../../includes/sssql17-md.md)] CTP 1.1.
-Specifies a character that will be used as the quote character in the CSV file. If not specified, the quote character (") will be used as the quote character as defined in the [RFC 4180](https://tools.ietf.org/html/rfc4180) standard.
+
+Beginning with [!INCLUDE [sssql17-md](../../includes/sssql17-md.md)], specifies a character that will be used as the quote character in the CSV file. If not specified, the quote character (") will be used as the quote character as defined in the [RFC 4180](https://tools.ietf.org/html/rfc4180) standard.

 ## Remarks

@@ -453,7 +467,7 @@ OPENROWSET (BULK N'D:\data.csv', FORMATFILE =

 ### G. Accessing data from a CSV file with a format file

-**Applies to:** [!INCLUDE [sssql17-md](../../includes/sssql17-md.md)] CTP 1.1.
+Beginning with [!INCLUDE [sssql17-md](../../includes/sssql17-md.md)].

 ```sql
 SELECT *
@@ -490,8 +504,7 @@ FROM OPENROWSET

 ### I. Accessing data from a file stored on Azure Blob storage

-**Applies to:** [!INCLUDE [sssql17-md](../../includes/sssql17-md.md)] CTP 1.1.
-The following example uses an external data source that points to a container in an Azure storage account and a database scoped credential created for a shared access signature.
+Beginning with [!INCLUDE [sssql17-md](../../includes/sssql17-md.md)], the following example uses an external data source that points to a container in an Azure storage account and a database scoped credential created for a shared access signature.

 ```sql
 SELECT * FROM OPENROWSET(
@@ -569,7 +582,7 @@ SINGLE_CLOB

 ### L. Use OPENROWSET to access several parquet files using S3-compatible object storage

-**Applies to:** [!INCLUDE [sssql22-md](../../includes/sssql22-md.md)]
+**Applies to:** [!INCLUDE [sssql22-md](../../includes/sssql22-md.md)] and later.

 The following example uses access several parquet files from different location, all stored on S3-compatible object storage:
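Pulling several of the OPENROWSET options touched by this commit together, a hedged sketch (file paths are placeholders; combining `FORMAT = 'CSV'` with a format file follows the pattern of example G):

```sql
SELECT *
FROM OPENROWSET(
    BULK N'D:\data\people.csv',          -- placeholder data file
    FORMATFILE = N'D:\data\people.fmt',  -- placeholder format file defining column types
    FORMAT = 'CSV',       -- RFC 4180 CSV parsing (SQL Server 2017 and later)
    FIELDQUOTE = '"',     -- default quote character, shown explicitly
    FIRSTROW = 2,         -- skip the header row
    MAXERRORS = 5         -- tolerate up to 5 bad rows before failing
) AS rows;
```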
575588
