| Bug Fixes | For a complete list of fixes see [Bugs and issues on GitHub](https://github.com/microsoft/azuredatastudio/issues?q=is%3Aissue+milestone%3A%22February+2021+Release%22+is%3Aclosed). |
| New Azure Arc features | Multiple data controllers now supported <br/> New connection dialog options like kube config file <br/> Postgres dashboard enhancements |
| New Notebook features | Improved Jupyter server start-up time by 50% on Windows <br/> Added support to edit Jupyter Books through right-click <br/> Added URI notebook parameterization support and [added notebook parameterization documentation](https://docs.microsoft.com/sql/azure-data-studio/notebooks/notebooks-parameterization)|
[!INCLUDE[SQL Server 2019](../includes/applies-to-version/sqlserver2019.md)]
Whether or not the [!INCLUDE[ssbigdataclusters-ss-nover](../includes/ssbigdataclusters-ss-nover.md)] is operating with Active Directory integration, `AZDATA_PASSWORD` is set during deployment. It provides a basic authentication to the cluster controller and master instance. This document describes how to manually update `AZDATA_PASSWORD`.
## Change `AZDATA_PASSWORD` for controller
If the cluster is operating in non-Active Directory mode, update the Apache Knox Gateway password by doing the following:
1. Obtain the controller [!INCLUDE[ssNoVersion](../includes/ssnoversion-md.md)] credentials by running the following commands:
a. Run this command as a Kubernetes administrator:
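The command itself is elided in this excerpt. As a sketch, assuming the controller credentials are stored in a Kubernetes secret named `controller-login-secret` (the secret name and field handling below are assumptions, not confirmed by this excerpt):

```bash
# Assumption: the secret name is illustrative; substitute your cluster's namespace.
kubectl get secret controller-login-secret -n <namespace> -o yaml

# Fields in the returned YAML are base64-encoded; decode them, for example:
echo '<base64-value>' | base64 --decode
```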
1. Update the password in the users table:
```sql
UPDATE [auth].[users] SET password = 'J2y4E4dhlgwHOaRr3HKiiVAKBfjuGDyYmzn88VXmrzM=' WHERE username = '<username>'
```
```sql
ALTER LOGIN <AZDATA_USERNAME> WITH PASSWORD = 'newPassword'
```
## Manually update the password for Grafana and Kibana
After following the steps to update `AZDATA_PASSWORD`, you will see that [Grafana](app-monitor.md) and [Kibana](cluster-logging-kibana.md) still accept the old password. This is because Grafana and Kibana do not have visibility into the new Kubernetes secret. You must manually update the password for Grafana and Kibana separately.
## Update the Grafana password
Follow these steps to manually update the password for [Grafana](app-monitor.md).
1. The `htpasswd` utility is required. You can install it on any client machine.
#### [For Ubuntu](#tab/ubuntu):
```bash
sudo apt install apache2-utils
```
#### [For RHEL](#tab/rhel):
```bash
sudo yum install httpd-tools
```
---
2. Generate the new password.
```bash
htpasswd -nbs <username> <password>
admin:{SHA}<secret>
```
Replace the values for `<username>`, `<password>`, and `<secret>` as appropriate.
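For reference, the `{SHA}` scheme that `htpasswd -s` emits is the base64 encoding of the raw SHA-1 digest of the password. A minimal Python sketch (not part of the original article) that reproduces the same output:

```python
import base64
import hashlib


def htpasswd_sha(username: str, password: str) -> str:
    """Reproduce the output of `htpasswd -nbs <username> <password>`.

    Note: {SHA} is a legacy scheme; SHA-1 is weak and used here only
    because it is what the htpasswd -s flag produces.
    """
    digest = hashlib.sha1(password.encode("utf-8")).digest()
    return f"{username}:{{SHA}}{base64.b64encode(digest).decode('ascii')}"


print(htpasswd_sha("admin", "password"))
# admin:{SHA}W6ph5Mm5Pz8GgiULbPgzG37mj9g=
```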
5. Update `controller-login-htpasswd` with the new base64-encoded password string generated above:
```console
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
data:
  controller-login-htpasswd: <base64 string from before>
  mssql-internal-controller-password: <password>
  mssql-internal-controller-username: <username>
```
6. Identify and delete the mgmtproxy pod.
If necessary, identify the name of your mgmtproxy pod.
#### [For Windows](#tab/windows):
On a Windows server you can use the following:
```bash
kubectl get pods -n <namespace> -l app=mgmtproxy
```
#### [For Linux](#tab/linux):
On Linux you can use the following:
```bash
kubectl get pods -n <namespace> | grep 'mgmtproxy'
```
---
Remove the mgmtproxy pod:
```bash
kubectl delete pod mgmtproxy-xxxxx -n mssql-cluster
```
7. Wait for the mgmtproxy pod to come online and Grafana Dashboard to start.
The wait is not significant and the pod should be online within seconds. To check the status of the pod, you can use the same `get pods` command as used in the previous step.
If you see the mgmtproxy pod is not promptly returning to Ready status, use kubectl to describe the pod:
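The `describe` command itself is elided in this excerpt; a sketch of the usual form (the pod name suffix and namespace are placeholders):

```bash
kubectl describe pod mgmtproxy-xxxxx -n <namespace>
```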
For troubleshooting and further log collection, use the Azure Data CLI [`azdata bdc debug copy-logs`](../azdata/reference/reference-azdata-bdc-debug.md) command.
8. Now log in to Grafana using the new password.
## Update the Kibana password
Follow these steps to manually update the password for [Kibana](cluster-logging-kibana.md).
> [!NOTE]
> The older Microsoft Edge browser is incompatible with Kibana; you must use the Chromium-based Edge browser for the dashboard to display correctly. You will see a blank page when loading the dashboards in an unsupported browser. See [supported browsers for Kibana](https://www.elastic.co/support/matrix#matrix_browsers).
1. Open the Kibana URL.
You can find the Kibana service endpoint URL from within [Azure Data Studio](manage-with-controller-dashboard.md#controller-dashboard), or use the following **azdata** command:
```azurecli
azdata login
azdata bdc endpoint list -e logsui -o table
```
For example: `https://11.111.111.111:30777/kibana/app/kibana#/discover`
2. In the left pane, select the **Security** option.

3. On the security page, under the **Authentication Backends** heading, select **Internal User Database**.

4. You will now see the list of users under the **Internal Users Database** heading. Use this page to add, modify, and remove users for Kibana endpoint access. For the user that needs the updated password, select the **Edit** button on the right-hand side.

5. Enter the new password twice and select **Submit**:

6. Close the browser and reconnect to the Kibana URL using the updated password.
> [!NOTE]
> After logging in with the new password, if you see blank pages in Kibana, manually log out using the logout option at the top right corner and log in again.
## See also
* [azdata bdc (Azure Data CLI)](../../sql/azdata/reference/reference-azdata-bdc.md)
* [Monitor applications with azdata and Grafana Dashboard](app-monitor.md)
* [Check out cluster logs with Kibana Dashboard](cluster-logging-kibana.md)
In [!INCLUDE[sssql19-md](../includes/sssql19-md.md)] you can create, delete, describe, initialize, list, run, and update your application. The following table describes the application deployment commands that you can use with **azdata**.
|Command |Description |
|:---|:---|
|`azdata bdc endpoint list`| Lists the endpoints for the [!INCLUDE[ssbigdataclusters-ss-nover](../includes/ssbigdataclusters-ss-nover.md)]. |
You can use the following example to list the endpoint of the **Kibana dashboard**:
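The example itself is elided in this excerpt; it is presumably the same `logsui` endpoint command shown in the Kibana section of this excerpt:

```bash
azdata login
azdata bdc endpoint list -e logsui -o table
```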
The link to a Kibana dashboard:

> [!NOTE]
> The older Microsoft Edge browser is incompatible with Kibana; you must use the Chromium-based Edge browser for the dashboard to display correctly. You will see a blank page when loading the dashboards in an unsupported browser. See [supported browsers for Kibana](https://www.elastic.co/support/matrix#matrix_browsers).
## Next steps
For more information about [!INCLUDE[big-data-clusters-2019](../includes/ssbigdataclusters-ss-nover.md)], see [What are [!INCLUDE[big-data-clusters-2019](../includes/ssbigdataclusters-ver15.md)]?](big-data-cluster-overview.md).
docs/big-data-cluster/release-notes-big-data-cluster.md
Clusters using `mssql-conf` for SQL Server master instance configurations require additional steps after upgrading to CU9. Follow the instructions [here](bdc-upgrade-configuration.md).
- Improved [!INCLUDE[azdata](../includes/azure-data-cli-azdata.md)] experience for encryption at rest.
- Ability to dynamically [install Python Spark packages](spark-install-packages.md) using virtual environments.
- Upgraded software versions for most of our OSS components (Grafana, Kibana, FluentBit, etc.) to ensure BDC images are up to date with the latest enhancements and fixes. See [Open-source software reference](reference-open-source-software.md).
This article provides guidance on how to import and install packages for a Spark session through session and notebook configurations.
## Built-in tools
Scala Spark (Scala 2.11) and Hadoop base packages.
PySpark (Python 3.7). Pandas, Sklearn, Numpy, and other data processing and machine learning packages.
MRO 3.5.2 packages. Sparklyr and SparkR for R Spark workloads.
## Install packages from a Maven repository onto the Spark cluster at runtime
Maven packages can be installed onto your Spark cluster using notebook cell configuration at the start of your Spark session. Before starting a Spark session in Azure Data Studio, run the following code:
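The configuration cell is elided in this excerpt. As a sketch, a Maven package is normally requested through the standard Spark `spark.jars.packages` setting in a `%%configure` cell; the Maven coordinate below is a hypothetical placeholder:

```python
%%configure -f
{"conf": {"spark.jars.packages": "com.example:my-library_2.11:1.0.0"}}
```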
## Install Python packages in PySpark at runtime
Session- and job-level package management guarantees library consistency and isolation. The configuration is a standard Spark library configuration that can be applied to Livy sessions. __azdata spark__ supports these configurations. The examples below are presented as __Azure Data Studio Notebooks__ configure cells that need to be run after attaching to a cluster with the PySpark kernel.
If the __"spark.pyspark.virtualenv.enabled" : "true"__ configuration is not set, the session will use the cluster default Python and installed libraries.
### Session/Job configuration with requirements.txt
Specify the path to a requirements.txt file in HDFS to use as a reference for packages to install.
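The configuration example is elided in this excerpt. As a sketch, a configure cell for this scenario could look like the following, using the `spark.pyspark.virtualenv` settings this section names; the exact property names and the HDFS path are assumptions:

```python
%%configure -f
{
    "conf": {
        "spark.pyspark.virtualenv.enabled": "true",
        "spark.pyspark.virtualenv.python_version": "3.7",
        "spark.pyspark.virtualenv.requirements": "hdfs://user/example/requirements.txt"
    }
}
```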
Execute __sc.install_packages__ to install libraries dynamically in your session. Libraries will be installed into the driver and across all executor nodes.
```python
sc.install_packages("numpy==1.11.0")
import numpy as np
```
It is also possible to install multiple libraries in the same command using an array.
```python
sc.install_packages(["numpy==1.11.0", "xgboost"])
import numpy as np
import xgboost as xgb
```
## Import .jar from HDFS for use at runtime
Import a .jar at runtime through Azure Data Studio notebook cell configuration.
```python
%%configure -f
{"conf": {"spark.jars": "/jar/mycodeJar.jar"}}
```
### Import .jar at runtime through Azure Data Studio notebook cell configuration