
Commit a31db73

committed
Merge branch 'master' of https://github.com/MicrosoftDocs/sql-docs-pr into dreplay
2 parents 3a0d8ca + e2f2ed4 commit a31db73

90 files changed

Lines changed: 1123 additions & 360 deletions


docs/azure-data-studio/release-notes-azure-data-studio.md

Lines changed: 1 addition & 1 deletion

@@ -26,7 +26,7 @@ February 18, 2021   /   version: 1.26.0
 | Bug Fixes | For a complete list of fixes see [Bugs and issues on GitHub](https://github.com/microsoft/azuredatastudio/issues?q=is%3Aissue+milestone%3A%22February+2021+Release%22+is%3Aclosed). |
 | Extension(s) update | [Dacpac](extensions/sql-server-dacpac-extension.md) <br/> [Kusto (KQL)](extensions/kusto-extension.md) <br/> [MachineLearning](extensions/machine-learning-extension.md) <br/> [Profiler](extensions/sql-server-profiler-extension.md) <br/> [SchemaCompare](extensions/schema-compare-extension.md) <br/> [SQLDatabaseProjects](extensions/sql-database-project-extension.md) |
 | New Azure Arc features | Multiple data controllers now supported <br/> New connection dialog options like kube config file <br/> Postgres dashboard enhancements |
-| New Notebook features | Improved Jupyter server start-up time by 50% on Windows <br/> Added support to edit Jupyter Books through right-click <br/> Added [URI notebook parameterization support](https://docs.microsoft.com/sql/azure-data-studio/notebooks/notebooks-parameterization) |
+| New Notebook features | Improved Jupyter server start-up time by 50% on Windows <br/> Added support to edit Jupyter Books through right-click <br/> Added URI notebook parameterization support and [added notebook parameterization documentation](https://docs.microsoft.com/sql/azure-data-studio/notebooks/notebooks-parameterization) |

 ## December 2020 (hotfix)

docs/big-data-cluster/change-azdata-password.md

Lines changed: 152 additions & 4 deletions

@@ -4,7 +4,7 @@ description: Update the `AZDATA_PASSWORD` manually
 author: NelGson
 ms.author: negust
 ms.reviewer: mikeray
-ms.date: 12/19/2019
+ms.date: 03/01/2021
 ms.topic: conceptual
 ms.prod: sql
 ms.technology: big-data-cluster
@@ -14,13 +14,13 @@ ms.technology: big-data-cluster
 [!INCLUDE[SQL Server 2019](../includes/applies-to-version/sqlserver2019.md)]

-Whether or not the cluster is operating with Active Directory integration, `AZDATA_PASSWORD` is set during deployment. It provides a basic authentication to the cluster controller and master instance. This document describes how to manually update `AZDATA_PASSWORD`.
+Whether or not the [!INCLUDE[ssbigdataclusters-ss-nover](../includes/ssbigdataclusters-ss-nover.md)] is operating with Active Directory integration, `AZDATA_PASSWORD` is set during deployment. It provides basic authentication to the cluster controller and master instance. This document describes how to manually update `AZDATA_PASSWORD`.

 ## Change `AZDATA_PASSWORD` for controller

 If the cluster is operating in non-Active Directory mode, update the Apache Knox Gateway password by doing the following:

-1. Obtain the controller SQL Server credentials by running the following commands:
+1. Obtain the controller [!INCLUDE[ssNoVersion](../includes/ssnoversion-md.md)] credentials by running the following commands:

    a. Run this command as a Kubernetes administrator:

@@ -67,7 +67,7 @@ If the cluster is operating in non-Active Directory mode, update the Apache Knox
 1. Update the password in the users table:

-   ```SQL
+   ```sql
    UPDATE [auth].[users] SET password = 'J2y4E4dhlgwHOaRr3HKiiVAKBfjuGDyYmzn88VXmrzM=' WHERE username = '<username>'
    ```

@@ -80,3 +80,151 @@ If the cluster is operating in non-Active Directory mode, update the Apache Knox
    ```sql
    ALTER LOGIN <AZDATA_USERNAME> WITH PASSWORD = 'newPassword'
    ```
+
+## Manually update the password for Grafana and Kibana
+
+After following the steps to update `AZDATA_PASSWORD`, you will see that [Grafana](app-monitor.md) and [Kibana](cluster-logging-kibana.md) still accept the old password. This is because Grafana and Kibana do not have visibility into the new Kubernetes secret. You must manually update the password for Grafana and Kibana separately.
+
+## Update the Grafana password
+
+Follow these steps to manually update the password for [Grafana](app-monitor.md).
+
+1. The htpasswd utility is required. You can install it on any client machine.
+
+   #### [For Ubuntu](#tab/ubuntu)
+   ```bash
+   sudo apt install apache2-utils
+   ```
+
+   #### [For RHEL](#tab/rhel)
+   ```bash
+   sudo yum install httpd-tools
+   ```
+
+   ---
+
+2. Generate the new password:
+
+   ```bash
+   htpasswd -nbs <username> <password>
+   admin:{SHA}<secret>
+   ```
+
+   Replace the values for `<username>`, `<password>`, and `<secret>` as appropriate. For example:
+
+   ```bash
+   htpasswd -nbs admin Test@12345
+   admin:{SHA}W/5VKRjIzjusUJ0ih0gHyEPjC/s=
+   ```
+
+3. Now encode the password:
+
+   ```bash
+   echo "admin:{SHA}W/5VKRjIzjusUJ0ih0gHyEPjC/s=" | base64
+   ```
+
+   Retain the output base64 string for later.
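Steps 2 and 3 above can also be reproduced without the Apache tools. This is a minimal Python sketch, assuming the `{SHA}` scheme that `htpasswd -nbs` uses (base64-encoded SHA-1 of the password); the helper name is hypothetical, not part of any CLI:

```python
import base64
import hashlib

def htpasswd_sha(username: str, password: str) -> str:
    # htpasswd -nbs emits "user:{SHA}" followed by base64(sha1(password))
    digest = hashlib.sha1(password.encode("utf-8")).digest()
    return f"{username}:{{SHA}}{base64.b64encode(digest).decode('ascii')}"

entry = htpasswd_sha("admin", "Test@12345")
# The Kubernetes secret expects the whole htpasswd entry base64-encoded again:
encoded = base64.b64encode(entry.encode("ascii")).decode("ascii")
print(entry)
print(encoded)
```

The second `base64` step mirrors the `echo ... | base64` command above, since every value under a Kubernetes secret's `data:` field must be base64-encoded.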
+4. Next, edit the mgmtproxy-secret:
+
+   ```bash
+   kubectl edit secret -n mssql-cluster mgmtproxy-secret
+   ```
+
+5. Update `controller-login-htpasswd` with the new base64-encoded string generated above:
+
+   ```console
+   # Please edit the object below. Lines beginning with a '#' will be ignored,
+   # and an empty file will abort the edit. If an error occurs while saving, this file will be
+   # reopened with the relevant failures.
+   #
+   apiVersion: v1
+   data:
+     controller-login-htpasswd: <base64 string from before>
+     mssql-internal-controller-password: <password>
+     mssql-internal-controller-username: <username>
+   ```
+
+6. Identify and delete the mgmtproxy pod.
+
+   If necessary, identify the name of your mgmtproxy pod.
+
+   #### [For Windows](#tab/windows)
+   On a Windows server you can use the following:
+
+   ```bash
+   kubectl get pods -n <namespace> -l app=mgmtproxy
+   ```
+
+   #### [For Linux](#tab/linux)
+   On Linux you can use the following:
+
+   ```bash
+   kubectl get pods -n <namespace> | grep 'mgmtproxy'
+   ```
+
+   ---
+
+   Remove the mgmtproxy pod:
+
+   ```bash
+   kubectl delete pod mgmtproxy-xxxxx -n mssql-cluster
+   ```
+
+7. Wait for the mgmtproxy pod to come online and the Grafana dashboard to start.
+
+   The wait should be brief; the pod should be online within seconds. To check the status of the pod, use the same `get pods` command as in the previous step.
+   If the mgmtproxy pod does not promptly return to Ready status, use kubectl to describe the pod:
+
+   ```bash
+   kubectl describe pods mgmtproxy-xxxxx -n <namespace>
+   ```
+
+   For troubleshooting and further log collection, use the Azure Data CLI [`azdata bdc debug copy-logs`](../azdata/reference/reference-azdata-bdc-debug.md) command.
+
+8. Now log in to Grafana using the new password.
+
+## Update the Kibana password
+
+Follow these steps to manually update the password for [Kibana](cluster-logging-kibana.md).
+
+> [!NOTE]
+> The older Microsoft Edge browser is incompatible with Kibana; you must use the Chromium-based Edge browser for the dashboard to display correctly. You will see a blank page when loading the dashboards in an unsupported browser. See [supported browsers for Kibana](https://www.elastic.co/support/matrix#matrix_browsers).
+
+1. Open the Kibana URL.
+
+   You can find the Kibana service endpoint URL from within [Azure Data Studio](manage-with-controller-dashboard.md#controller-dashboard), or use the following **azdata** commands:
+
+   ```azurecli
+   azdata login
+   azdata bdc endpoint list -e logsui -o table
+   ```
+
+   For example: `https://11.111.111.111:30777/kibana/app/kibana#/discover`
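To consume the endpoint programmatically rather than reading the table output, the command's JSON output can be parsed. A small sketch follows; the field names match azdata's JSON output shape as documented, but the sample payload values here are made up:

```python
import json

# Hypothetical sample of `azdata bdc endpoint list -e logsui -o json` output;
# the values are illustrative, not from a real cluster.
sample = '''
{
  "description": "Log Search Dashboard",
  "endpoint": "https://11.111.111.111:30777/kibana/app/kibana#/discover",
  "name": "logsui",
  "protocol": "https"
}
'''

endpoint = json.loads(sample)
kibana_url = endpoint["endpoint"]  # the URL to open in a supported browser
print(kibana_url)
```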
+2. In the left pane, select the **Security** option.
+
+   ![A screenshot of the menu on the left pane of Kibana, with the Security option chosen.](media/big-data-cluster-change-kibana-password/big-data-cluster-change-kibana-password-1.jpg)
+
+3. On the security page, under the heading Authentication Backends, select **Internal User Database**.
+
+   ![A screenshot of the security page, with the Internal User Database box chosen.](media/big-data-cluster-change-kibana-password/big-data-cluster-change-kibana-password-2.jpg)
+
+4. You will now see the list of users under the heading Internal Users Database. Use this page to add, modify, and remove users for Kibana endpoint access. For the user that needs the updated password, select the **Edit** button on the right-hand side.
+
+   ![A screenshot of the Internal User Database page. In the list of users, for the KubeAdmin user, the Edit button is chosen.](media/big-data-cluster-change-kibana-password/big-data-cluster-change-kibana-password-3.jpg)
+
+5. Enter the new password twice and select **Submit**:
+
+   ![A screenshot of the Internal User edit form. A new password has been entered in the Password and Repeat password fields.](media/big-data-cluster-change-kibana-password/big-data-cluster-change-kibana-password-4.jpg)
+
+6. Close the browser and reconnect to the Kibana URL using the updated password.
+
+   > [!NOTE]
+   > After logging in with the new password, if you see blank pages in Kibana, manually log out using the logout option at the top-right corner and log in again.
+
+## See also
+
+* [azdata bdc (Azure Data CLI)](../../sql/azdata/reference/reference-azdata-bdc.md)
+* [Monitor applications with azdata and Grafana Dashboard](app-monitor.md)
+* [Check out cluster logs with Kibana Dashboard](cluster-logging-kibana.md)

docs/big-data-cluster/cluster-logging-kibana.md

Lines changed: 9 additions & 7 deletions

@@ -6,28 +6,28 @@ author: cloudmelon
 ms.author: melqin
 ms.reviewer: mikeray
 ms.metadata: seo-lt-2019
-ms.date: 10/01/2020
+ms.date: 02/25/2021
 ms.topic: conceptual
 ms.prod: sql
 ms.technology: big-data-cluster
 ---

 # Check out cluster logs with Kibana Dashboard

-This article describes how to monitor an application inside a SQL Server Big Data Cluster.
+This article describes how to monitor an application inside [!INCLUDE[ssbigdataclusters-ss-nover](../includes/ssbigdataclusters-ss-nover.md)].

 ## Prerequisites

-- [SQL Server 2019 big data cluster](deployment-guidance.md)
+- [[!INCLUDE[ssbigdataclusters-ss-nover](../includes/ssbigdataclusters-ss-nover.md)]](deployment-guidance.md)
 - [azdata command-line utility](../azdata/install/deploy-install-azdata.md)

 ## Capabilities

-In SQL Server 2019 you can create, delete, describe, initialize, list run and update your application. The following table describes the application deployment commands that you can use with **azdata**.
+In [!INCLUDE[sssql19-md](../includes/sssql19-md.md)] you can create, delete, describe, initialize, list, run, and update your application. The following table describes the application deployment commands that you can use with **azdata**.

 |Command |Description |
 |:---|:---|
-|`azdata bdc endpoint list` | Lists the endpoints for the Big Data Cluster. |
+|`azdata bdc endpoint list` | Lists the endpoints for the [!INCLUDE[ssbigdataclusters-ss-nover](../includes/ssbigdataclusters-ss-nover.md)]. |

 You can use the following example to list the endpoint of **Kibana dashboard**:
@@ -46,8 +46,10 @@ The link to a Kibana dashboard:
 ![Kibana dashboard](./media/view-cluster-status/kibana-dashboard.png)

 > [!NOTE]
-> (Old) Microsoft Edge browser is incompatible with Kibana, you must use the chromium based browser for the dashboard to display correctly. You will see a blank page when loading the dashboards using an unsupported browser. See here for supported browsers for Kibana.
+> The older Microsoft Edge browser is incompatible with Kibana; you must use the Chromium-based Edge browser for the dashboard to display correctly. You will see a blank page when loading the dashboards in an unsupported browser. See [supported browsers for Kibana](https://www.elastic.co/support/matrix#matrix_browsers).

 ## Next steps

-For more information about [!INCLUDE[big-data-clusters-2019](../includes/ssbigdataclusters-ss-nover.md)], see [What are [!INCLUDE[big-data-clusters-2019](../includes/ssbigdataclusters-ver15.md)]?](big-data-cluster-overview.md).
+For more information about [!INCLUDE[big-data-clusters-2019](../includes/ssbigdataclusters-ss-nover.md)], see [What are [!INCLUDE[big-data-clusters-2019](../includes/ssbigdataclusters-ver15.md)]?](big-data-cluster-overview.md).
(4 binary image files changed: 56.4 KB, 105 KB, 85.9 KB, 140 KB)

docs/big-data-cluster/release-notes-big-data-cluster.md

Lines changed: 1 addition & 1 deletion

@@ -93,7 +93,7 @@ SQL Server 2019 CU9 for SQL Server Big Data Clusters, includes important capabil
   Clusters using `mssql-conf` for SQL Server master instance configurations require additional steps after upgrading to CU9. Follow the instructions [here](bdc-upgrade-configuration.md).

 - Improved [!INCLUDE[azdata](../includes/azure-data-cli-azdata.md)] experience for encryption at rest.
-- Ability to dynamically install Python Spark packages using virtual environments.
+- Ability to dynamically [install Python Spark packages](spark-install-packages.md) using virtual environments.
 - Upgraded software versions for most of our OSS components (Grafana, Kibana, FluentBit, etc.) to ensure BDC images are up to date with the latest enhancements and fixes. See [Open-source software reference](reference-open-source-software.md).
 - Other miscellaneous improvements and bug fixes.

docs/big-data-cluster/spark-install-packages.md

Lines changed: 55 additions & 28 deletions

@@ -5,7 +5,7 @@ description: Spark Library Management
 author: MikeRayMSFT
 ms.author: mikeray
 ms.reviewer: rahul.ajmera
-ms.date: 01/25/2021
+ms.date: 02/25/2021
 ms.topic: reference
 ms.prod: sql
 ms.technology: big-data-cluster
@@ -18,59 +18,86 @@ ms.technology: big-data-cluster
 This article provides guidance on how to import and install packages for a Spark session through session and notebook configurations.

 ## Built-in tools
-Spark and Hadoop base packages
-Python 3.7 and Python 2.7
-Pandas, Sklearn, Numpy, and other data processing packages.
-R and MRO packages
-Sparklyr
+
+- Scala Spark (Scala 2.11) and Hadoop base packages.
+- PySpark (Python 3.7), with Pandas, Sklearn, Numpy, and other data processing and machine learning packages.
+- MRO 3.5.2 packages, with Sparklyr and SparkR for R Spark workloads.

 ## Install packages from a Maven repository onto the Spark cluster at runtime
+
 Maven packages can be installed onto your Spark cluster using notebook cell configuration at the start of your Spark session. Before starting a Spark session in Azure Data Studio, run the following code:

-```
+```python
 %%configure -f \
 {"conf": {"spark.jars.packages": "com.microsoft.azure:azure-eventhubs-spark_2.11:2.3.1"}}
 ```

-## Install Python packages at PySpark job-submission time
-1. Specify the path to a requirements.txt file in HDFS to use as a reference for packages to install.
-```
+## Install Python packages for PySpark at runtime
+
+Session- and job-level package management guarantees library consistency and isolation. The configuration is a standard Spark library configuration that can be applied on Livy sessions. __azdata spark__ supports these configurations. The examples below are presented as __Azure Data Studio__ notebook configure cells that need to be run after attaching to a cluster with the PySpark kernel.
+
+If the __"spark.pyspark.virtualenv.enabled" : "true"__ configuration is not set, the session will use the cluster default Python and installed libraries.
+
+### Session/Job configuration with requirements.txt
+
+Specify the path to a requirements.txt file in HDFS to use as a reference for packages to install.
+
+```python
 %%configure -f \
-{"conf": {
-    "spark.pyspark.virtualenv.enabled" : "true",
-    "spark.pyspark.virtualenv.type" : "conda",
-    "spark.pyspark.virtualenv.requirements" : "requirements.txt",
-    "spark.pyspark.virtualenv.bin.path" : "/opt/mls/python/bin/conda"
-    },
-    "files": ["hdfs://nmnode-0/tmp/requirements.txt"]
+{
+    "conf": {
+        "spark.pyspark.virtualenv.enabled" : "true",
+        "spark.pyspark.virtualenv.python_version": "3.7",
+        "spark.pyspark.virtualenv.requirements" : "hdfs://user/project-A/requirements.txt"
+    }
 }
 ```
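Because the cell body after `%%configure -f \` must be valid JSON, a quick local round-trip check can catch quoting mistakes before the cell is run against a cluster. This is a hypothetical helper sketch, not part of azdata or Livy:

```python
import json

# Build the session config programmatically, then verify it round-trips as JSON.
conf = {
    "conf": {
        "spark.pyspark.virtualenv.enabled": "true",
        "spark.pyspark.virtualenv.python_version": "3.7",
        "spark.pyspark.virtualenv.requirements": "hdfs://user/project-A/requirements.txt",
    }
}
cell_body = json.dumps(conf, indent=4)
assert json.loads(cell_body) == conf  # valid JSON, safe to paste after %%configure -f
print(cell_body)
```

Note that the earlier (removed) example used single-quoted keys, which JSON rejects; serializing with `json.dumps` avoids that class of error.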
-2. Create a conda virtualenv without a requirements file and dynamically add packages during the Spark session.
-```
+### Session/Job configuration with different Python versions
+
+Create a virtualenv without a requirements file and dynamically add packages during the Spark session.
+
+```python
 %%configure -f \
-{"conf": {
-    'spark.pyspark.virtualenv.enabled' : 'true',
-    'spark.pyspark.virtualenv.type' : 'conda',
-    'spark.pyspark.virtualenv.bin.path' : '/opt/mls/python/bin/conda',
-    'spark.pyspark.virtualenv.python_version': '3.6'
-    }
-```
+{
+    "conf": {
+        "spark.pyspark.virtualenv.enabled" : "true",
+        "spark.pyspark.virtualenv.python_version": "3.6"
+    }
+}
+```
+
+### Library installation
+
+Execute __sc.install_packages__ to install libraries dynamically in your session. Libraries are installed into the driver and across all executor nodes.

 ```python
 sc.install_packages("numpy==1.11.0")
 import numpy as np
 ```

+It is also possible to install multiple libraries in the same command by using an array:
+
+```python
+sc.install_packages(["numpy==1.11.0", "xgboost"])
+import numpy as np
+import xgboost as xgb
+```
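Since `sc.install_packages` takes pip-style specifiers, a small client-side check can reject malformed pins before a session-level install fails remotely. The helper below is a hypothetical sketch, not part of the PySpark API; it accepts only the two forms shown above (`name` and `name==version`):

```python
import re

# Accepts "name" or "name==version", the two forms used in the examples above.
PIN = re.compile(r"^[A-Za-z0-9][A-Za-z0-9._-]*(==[A-Za-z0-9._-]+)?$")

def check_pins(packages):
    """Raise early on malformed pins instead of failing inside the cluster."""
    bad = [p for p in packages if not PIN.fullmatch(p)]
    if bad:
        raise ValueError(f"malformed package pins: {bad}")
    return packages

check_pins(["numpy==1.11.0", "xgboost"])  # both forms pass
```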
 ## Import .jar from HDFS for use at runtime

 Import a jar at runtime through Azure Data Studio notebook cell configuration.

-```
+```python
 %%configure -f
 {"conf": {"spark.jars": "/jar/mycodeJar.jar"}}
 ```

 ### Import .jar at runtime through Azure Data Studio notebook cell configuration

-```
+```python
 %%configure -f
 {"conf": {"spark.jars": "/jar/mycodeJar.jar"}}
 ```

docs/connect/ado-net/appcontext-switches.md

Lines changed: 1 addition & 1 deletion

@@ -10,7 +10,7 @@ ms.technology: connectivity
 ms.topic: conceptual
 author: johnnypham
 ms.author: v-jopha
-ms.reviewer:
+ms.reviewer: v-daenge
 ---
 # AppContext switches in Sqlclient
