| title | Deploy single node kubeadm cluster |
|---|---|
| titleSuffix | SQL Server Big Data Clusters |
| description | Use a bash deployment script to deploy a [!INCLUDE[big-data-clusters-2019](../includes/ssbigdataclusters-ver15.md)] to a single node kubeadm cluster. |
| author | mihaelablendea |
| ms.author | mihaelab |
| ms.reviewer | mikeray |
| ms.metadata | seo-lt-2019 |
| ms.date | 12/13/2019 |
| ms.topic | conceptual |
| ms.prod | sql |
| ms.technology | big-data-cluster |
[!INCLUDEtsql-appliesto-ssver15-xxxx-xxxx-xxx]
In this tutorial, you use a sample bash deployment script to deploy a single node Kubernetes cluster using kubeadm and a SQL Server big data cluster on it.
- A vanilla Ubuntu 18.04 or 16.04 server, virtual or physical machine. All dependencies are set up by the script, and you run the script from within the VM.

   > [!NOTE]
   > Using Azure Linux VMs is not yet supported.
- The VM should have at least 8 CPUs, 64 GB of RAM, and 100 GB of disk space. After all big data cluster Docker images are pulled, about 50 GB remains for data and logs across all components.
- Update existing packages using the commands below to ensure that the OS image is up to date, and then reboot:

   ```bash
   sudo apt update && sudo apt upgrade -y
   sudo systemctl reboot
   ```
- Use static memory configuration for the virtual machine. For example, in Hyper-V installations, do not use dynamic memory allocation; instead, allocate the recommended 64 GB or more.
- Use the checkpoint or snapshot capability in your hypervisor so that you can roll back the virtual machine to a clean state.
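Before you deploy, you can confirm that the VM meets the sizing requirements above (8 CPUs, 64 GB of RAM, 100 GB of disk). The following pre-flight sketch is illustrative and not part of the deployment script; the `check` helper and thresholds are assumptions based on the requirements listed in this article:

```shell
#!/usr/bin/env bash
# Illustrative pre-flight check against the documented minimums:
# 8 CPUs, 64 GB RAM, 100 GB of free disk space on the root filesystem.

cpus=$(nproc)
mem_gb=$(awk '/MemTotal/ {printf "%d", $2 / 1024 / 1024}' /proc/meminfo)
disk_gb=$(df --output=avail -BG / | tail -1 | tr -dc '0-9')

# check <label> <actual> <minimum> - print OK/FAIL for one requirement.
check() {
  if [ "$2" -ge "$3" ]; then
    echo "OK:   $1 ($2 >= $3)"
  else
    echo "FAIL: $1 ($2 < $3)"
  fi
}

check "CPUs"        "$cpus"    8
check "Memory (GB)" "$mem_gb"  64
check "Disk (GB)"   "$disk_gb" 100
```

If any line reports `FAIL`, resize the VM before running the deployment script.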
- Download the script onto the VM you are planning to use for the deployment:

   ```bash
   curl --output setup-bdc.sh https://raw.githubusercontent.com/microsoft/sql-server-samples/master/samples/features/sql-big-data-cluster/deployment/kubeadm/ubuntu-single-node-vm/setup-bdc.sh
   ```
- Make the script executable with the following command:

   ```bash
   chmod +x setup-bdc.sh
   ```
- Run the script, making sure you run it with `sudo`:

   ```bash
   sudo ./setup-bdc.sh
   ```

   When prompted, provide the password to use for the following external endpoints: controller, SQL Server master instance, and gateway. The password should be sufficiently complex based on the existing rules for SQL Server passwords. The controller username defaults to `admin`.
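SQL Server's complexity policy generally requires at least 8 characters drawn from at least three of the four character categories (uppercase letters, lowercase letters, digits, and symbols). As a rough sketch, you can pre-check a candidate password before the prompt; the `is_complex` helper below is illustrative and not part of setup-bdc.sh:

```shell
# Sketch: check a candidate password against SQL Server's complexity policy
# (at least 8 characters, from at least 3 of the 4 character categories).
is_complex() {
  local pw=$1 cats=0
  [ "${#pw}" -ge 8 ] || return 1
  [[ $pw == *[a-z]* ]] && cats=$((cats + 1))   # lowercase letters
  [[ $pw == *[A-Z]* ]] && cats=$((cats + 1))   # uppercase letters
  [[ $pw == *[0-9]* ]] && cats=$((cats + 1))   # digits
  [[ $pw == *[^a-zA-Z0-9]* ]] && cats=$((cats + 1))  # symbols
  [ "$cats" -ge 3 ]
}

is_complex 'Str0ng!Passw0rd' && echo "complex" || echo "too weak"
```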
- Set up the alias for the `azdata` tool in your current session:

   ```bash
   source ~/.bashrc
   ```
- Verify that `azdata` is installed correctly:

   ```bash
   azdata --version
   ```
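If the command is not found, you can check whether `azdata` resolves in your shell before troubleshooting further. This is a generic sketch; the `have` helper is an illustrative wrapper, not part of the tooling:

```shell
# Sketch: check that a command resolves on PATH before using it.
have() { command -v "$1" >/dev/null 2>&1; }

if have azdata; then
  echo "azdata found at: $(command -v azdata)"
else
  echo "azdata not found; try 'source ~/.bashrc' or re-check the setup"
fi
```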
The cleanup-bdc.sh script is provided as a convenience to reset the environment if necessary. However, we recommend that you use a virtual machine for testing purposes and rely on the snapshot capability in your hypervisor to roll back the virtual machine to a clean state.
To get started with using big data clusters, see Tutorial: Load sample data into a SQL Server big data cluster.