Commit e05069a: Update quickstart-big-data-cluster-deploy-aro.md
(1 parent: 8050ad5)

1 file changed, 4 additions and 0 deletions:
docs/big-data-cluster/quickstart-big-data-cluster-deploy-aro.md
```diff
@@ -20,6 +20,10 @@ In this tutorial, you use a sample python deployment script to deploy [!INCLUDE[
 > [!TIP]
 > ARO is only one option for hosting Kubernetes for your big data cluster. To learn about other deployment options, and how to customize them, see [How to deploy [!INCLUDE[big-data-clusters-2019](../includes/ssbigdataclusters-ss-nover.md)] on Kubernetes](deployment-guidance.md).
 
+
+> [!WARNING]
+> Persistent volumes created with the built-in storage class *managed-premium* have a reclaim policy of *Delete*, so when you delete the SQL Server big data cluster, the persistent volume claims are deleted, and so are the persistent volumes. To retain the data, create custom storage classes that use the azure-disk provisioner with a *Retain* reclaim policy, as described in [Storage classes](/azure/aks/concepts-storage/#storage-classes). The script below uses the *managed-premium* storage class. For more details, see [Data persistence](concept-data-persistence.md).
+
 The default big data cluster deployment used here consists of a SQL Master instance, one compute pool instance, two data pool instances, and two storage pool instances. Data is persisted using Kubernetes persistent volumes that use the ARO default storage classes. The default configuration used in this tutorial is suitable for dev/test environments.
 
 ## Prerequisites
```
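The custom storage class that the added warning recommends could be sketched as the following manifest. This is an illustrative assumption, not part of the commit: the class name `managed-premium-retain` is made up, and the parameter values mirror the typical AKS premium managed-disk configuration.

```yaml
# Sketch of a custom storage class using the azure-disk provisioner with a
# Retain reclaim policy, so persistent volumes survive deletion of the
# big data cluster. Name and parameters are illustrative assumptions.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: managed-premium-retain
provisioner: kubernetes.io/azure-disk
reclaimPolicy: Retain
parameters:
  storageaccounttype: Premium_LRS
  kind: Managed
```

You would apply a manifest like this with `kubectl apply -f <file>` before deploying the cluster, and then reference the class name in the deployment's storage configuration instead of *managed-premium*.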
