Kubernetes the Easy Way with AKS Automatic
This workshop will bring you up to speed with Azure Kubernetes Service (AKS) Automatic. AKS Automatic is a new way to deploy and manage Kubernetes clusters on Azure: a fully managed offering that simplifies the deployment, management, and operations of Kubernetes clusters. With AKS Automatic, you can deploy a Kubernetes cluster with just a few clicks in the Azure Portal. It is designed to be simple and easy to use, so you can focus on building and deploying your applications.
Objectives
After completing this workshop, you will be able to:
- Deploy an application to an AKS Automatic cluster
- Troubleshoot application issues
- Integrate applications with Azure services
- Scale your cluster and applications
- Observe your cluster and applications
Prerequisites
Before you begin, you will need an Azure subscription with Owner permissions and a GitHub account.
In addition, you will need the following tools installed on your local machine:
- Azure CLI
- Visual Studio Code
- Git
- GitHub CLI
- Bash shell (e.g. Windows Terminal with WSL or Azure Cloud Shell)
To keep the focus on AKS-specific features, this workshop requires some Azure preview features to be enabled and some resources to be pre-provisioned. You can use the Azure CLI commands below to register the preview features.
Start by logging in to the Azure CLI.
az login
Register preview features.
az feature register --namespace Microsoft.ContainerService --name AutomaticSKUPreview
az feature register --namespace Microsoft.ContainerService --name AzureMonitorAppMonitoringPreview
Register resource providers.
az provider register --namespace Microsoft.Insights
az provider register --namespace Microsoft.ServiceLinker
Check the status of the feature registration.
az feature show --namespace Microsoft.ContainerService --name AutomaticSKUPreview --query properties.state
Once the feature is registered, run the following command to re-register the Microsoft.ContainerService provider.
az provider register --namespace Microsoft.ContainerService
Once the resource provider and preview features have been registered, jump over to Lab Environment Setup and follow the instructions to provision the required resources and come back here to continue with the workshop.
As noted in the AKS Automatic documentation, AKS Automatic tries to dynamically select a virtual machine size for the system node pool based on the capacity available in the subscription. Make sure your subscription has quota for 16 vCPUs of any of the following sizes in the region you're deploying the cluster to: Standard_D4pds_v5, Standard_D4lds_v5, Standard_D4ads_v5, Standard_D4ds_v5, Standard_D4d_v5, Standard_D4d_v4, Standard_DS3_v2, Standard_DS12_v2. You can view quotas for specific VM-families and submit quota increase requests through the Azure portal.
After you have provisioned the required resources, the last thing you need to do is create an Azure CosmosDB database with a MongoDB API (version 7.0) and a database named test.
You can do that by running the following commands.
# Create an Azure CosmosDB account with a random name and save it for later reference
AZURE_COSMOSDB_NAME=$(az cosmosdb create \
  --name mymongo$(date +%s) \
  --resource-group ${RG_NAME} \
  --kind MongoDB \
  --server-version 7.0 \
  --query name -o tsv)
# Create a MongoDB database and collection
az cosmosdb mongodb collection create \
  --account-name $AZURE_COSMOSDB_NAME \
  --name test \
  --database-name test \
  --resource-group ${RG_NAME}
Make sure to replace ${RG_NAME} with the name of the resource group you created earlier.
Once the resources are deployed, you can proceed with the workshop.
Keep your terminal open as you will need it to run commands throughout the workshop.
Deploy your app to AKS Automatic
With AKS, the Automated Deployments feature creates GitHub Actions workflows that let you start deploying your applications to your AKS cluster with minimal effort, even if you don't already have an AKS cluster. All you need to do is point it at a GitHub repository with your application code.
If you have Dockerfiles or Kubernetes manifests in your repository, that's great: you can simply point to them in the Automated Deployments setup. If you don't have Dockerfiles or Kubernetes manifests in your repository, don't sweat it; Automated Deployments can create them for you.
Fork and clone the sample repository
Open a bash shell and run the following command then follow the instructions printed in the terminal to complete the login process.
gh auth login
Here is an example of the login process with options selected.
$ gh auth login
? Where do you use GitHub? GitHub.com
? What is your preferred protocol for Git operations on this host? HTTPS
? Authenticate Git with your GitHub credentials? Yes
? How would you like to authenticate GitHub CLI? Login with a web browser
! First copy your one-time code: 1234-ABCD
Press Enter to open https://github.com/login/device in your browser...
After you've completed the login process, run the following command to fork the contoso-air repository to your GitHub account.
gh repo fork Azure-Samples/contoso-air --clone
Change into the contoso-air directory.
cd contoso-air
Set the default repository to your forked repository.
gh repo set-default
When prompted, select your fork of the repository and press Enter.
You're now ready to deploy the sample application to your AKS cluster.
Automated Deployments setup
In the Azure portal (https://portal.azure.com) type Kubernetes services in the search box at the top of the page and click the Kubernetes services option from the search results.
In the upper left portion of the screen, click the + Create button to view all the available options for creating a new AKS cluster. Click on the Deploy application (new) option.
In the Basics tab, click on the Deploy your application option, then select your Azure subscription and the resource group you created during the lab environment setup.
In the Repository details section, type contoso-air as your Workflow name.
If you have not already authorized Azure to access your GitHub account, you will be prompted to do so. Click the Authorize access button to continue.
Once your GitHub account is authorized, you will be able to select the repository you forked earlier. Click the Select repository drop down, then select the contoso-air repository you forked earlier and select the main branch.
Click Next.
In the Application tab, complete the Image section with the following details:
- Container configuration: Select Auto-containerize (generate Dockerfile)
- Save files in repository: Click the Select link to open the directory explorer, then navigate to the Root/src directory, select the checkbox next to the web folder, then click Select.
In the Dockerfile configuration section, fill in the following details:
- Application environment: Select JavaScript - Node.js 22
- Application port: Enter 3000
- Dockerfile build context: Enter ./src/web
- Azure Container Registry: Select the Azure Container Registry in the resource group you created earlier
- Azure Container Registry image: Click the Create new link, then enter contoso-air
In the Deployment configuration section, fill in the following details:
- Deployment options: Select Generate application deployment files
- Save files in repository: Click the Select link to open the directory explorer, then select the checkbox next to the Root folder, then click Select.
Click Next.
In the Cluster configuration section, ensure the Create Automatic Kubernetes cluster option is chosen and specify myakscluster as the Kubernetes cluster name.
For Namespace, select Create new and enter dev.
You can leave the remaining fields as their default values.
You will see that the monitoring and logging options have been enabled by default and set to use the Azure resources that are available in your subscription. If you don't have these resources available, AKS Automatic will create them for you. If you want to change the monitoring and logging settings, you can do so by clicking on the Change link and selecting the desired target resources for monitoring and logging.
Click Next.
In the Review tab, you will see a summary of the configuration you have selected and a preview of the Dockerfile and Kubernetes deployment files that will be generated for you.
When ready, click the Deploy button to start the deployment.
This process can take up to 20 minutes to complete. Do not close the browser window or navigate away from the page until the deployment is complete.
Review the pull request
Once the deployment is complete, click the Approve pull request button to be taken to the pull request page in your GitHub repository.
In the pull request review, click on the Files changed tab to view the changes that were made by the Automated Deployments workflow.
Navigate back to the Conversation tab and click on the Merge pull request button to merge the pull request, then click Confirm merge.
With the pull request merged, the changes will be automatically deployed to your AKS cluster. You can view the deployment logs by clicking on the Actions tab in your GitHub repository.
In the Actions tab, you will see the Automated Deployments workflow running. Click on the workflow run to view the logs.
In the workflow run details page, you can view the logs of each job in the workflow by simply clicking on the job.
After a few minutes, the workflow will complete and you will see two green check marks next to the buildImage and deploy jobs. This means that the application has been successfully deployed to your AKS cluster.
If the deploy job fails, it is likely that Node Autoprovisioning (NAP) is still provisioning a new node for the cluster. Try clicking the "Re-run" button at the top of the page to re-run the deploy workflow job.
With AKS Automated Deployments, every time you push application code changes to your GitHub repository, the GitHub Action workflow will automatically build and deploy your application to your AKS cluster. This is a great way to automate the deployment process and ensure that your applications are always up-to-date!
Test the deployed application
Back in the Azure portal, click the Close button to close the Automated Deployments setup.
In the left-hand menu, click on Services and ingresses under the Kubernetes resources section. You should see a new service called contoso-air with a public IP address assigned to it. Click on the IP address to view the deployed application.
Let's test the application functionality by clicking the Login link in the upper right corner of the page.
There is no real authentication provider in this application, so you can simply type in whatever you like for the username and password and click the Log in button.
Click on the Book link in the top navigation bar and fill in the form with your trip details and click the Find flights button.
You will see some available flight options. Scroll to the bottom of the page and click Next to continue.
The application will either redirect you back to the login page or show a connection failure. What happened? π€
Let's find out...
Troubleshoot the application
Navigate back to the Azure portal and select Logs from the Monitoring section in the AKS cluster's left-hand menu. This section allows you to access the logs gathered by the Azure Monitor agent operating on the cluster nodes.
Close the Queries hub pop-up to get to the query editor, type the following query, then click the Run button to view container logs.
ContainerLogV2
| where LogLevel contains "error" and ContainerName == "contoso-air"
If the query editor is in Simple mode, switch to KQL mode by using the drop-down menu in the top-right corner. To make KQL mode the default, select the corresponding radio button in the pop-up and click Save.
Expand some of the logs to view the error messages that were generated by the application.
You should see an error message that says Azure CosmosDB settings not found. Booking functionality not available.
This error occurred because the application is trying to connect to an Azure CosmosDB database to store the booking information, but the connection settings are not configured. We can fix this by adding configuration to the application using the AKS Service Connector!
Integrating apps with Azure services
AKS Service Connector streamlines connecting applications to Azure resources like Azure CosmosDB by automating the configuration of Workload Identity. This feature allows you to assign identities to pods, enabling them to authenticate with Microsoft Entra ID and access Azure services securely without passwords. For a deeper understanding, check out the Workload Identity overview.
Workload Identity is the recommended way to authenticate with Azure services from your applications running on AKS. It is more secure than using service principals and does not require you to manage credentials in your application. To read more about the implementation of Workload Identity for Kubernetes, see this doc.
Service Connector setup
In the left-hand menu, click on Service Connector under Settings then click on the + Create button.
In the Basics tab, enter the following details:
- Kubernetes namespace: Enter dev
- Service type: Select Cosmos DB
- API type: Select MongoDB
- Cosmos DB account: Select the CosmosDB account you created earlier
- MongoDB database: Select test
Click Next: Authentication.
In the Authentication tab, select the Workload Identity option. You should see a user-assigned managed identity that was created during your lab setup. If no managed identities appear in the dropdown, click the Create new link to provision a new one.
Optionally, you can expand the Advanced section to customize the managed identity settings. By default, the DocumentDB Account Contributor role is assigned, granting permissions to read, write, and delete resources in the CosmosDB account. This role enables the workload identity to properly authenticate and interact with your database.
Click Next: Networking then click Next: Review + create and finally click Create.
This process will take a few minutes while Service Connector configures the Workload Identity infrastructure. Behind the scenes, it's:
- Assigning appropriate Azure role permissions to the managed identity for CosmosDB access
- Creating a Federated Credential that establishes trust between your Kubernetes cluster and the managed identity
- Setting up a Kubernetes ServiceAccount linked to the managed identity
- Creating a Kubernetes Secret containing the CosmosDB connection information
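As a concrete illustration of the last two items, the ServiceAccount that Service Connector creates typically follows the Workload Identity pattern sketched below. The name and client ID here are placeholders, not values from your cluster:

```yaml
# Illustrative sketch only -- Service Connector generates the actual
# ServiceAccount; its name and the client ID will differ in your cluster.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: sc-account-00000000-0000-0000-0000-000000000000  # generated name (placeholder)
  namespace: dev
  annotations:
    # Ties the ServiceAccount to the user-assigned managed identity
    azure.workload.identity/client-id: <managed-identity-client-id>
```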
Configure the application for Workload Identity
Once you've successfully set up the Service Connector for your Azure CosmosDB, it's time to configure your application to use these connection details.
In the Service Connector page, select the checkbox next to the CosmosDB connection and click the Yaml snippet button.
In the YAML snippet window, select Kubernetes Workload for Resource type, then select contoso-air for Kubernetes Workload.
You will see the YAML manifest for the contoso-air application with the highlighted edits required to connect to CosmosDB via Workload Identity.
Scroll through the YAML manifest to view the changes highlighted in yellow, then click Apply to apply the changes to the application. This will redeploy the contoso-air application with the new connection details.
This applies the changes directly to the running deployment, but ideally you would commit these changes to your repository so that they are versioned, tracked, and automatically deployed using the Automated Deployments workflow you set up earlier.
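For reference, the highlighted edits generally boil down to a few additions to the deployment's pod template, sketched below. The ServiceAccount and Secret names are placeholders; use the ones Service Connector actually created:

```yaml
# Sketch of the typical Workload Identity edits (names are placeholders)
spec:
  template:
    metadata:
      labels:
        azure.workload.identity/use: "true"   # opt the pod into Workload Identity
    spec:
      serviceAccountName: <service-connector-serviceaccount>  # created by Service Connector
      containers:
      - name: contoso-air
        envFrom:
        - secretRef:
            name: <service-connector-secret>  # holds the CosmosDB connection settings
```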
Wait a minute or two for the new pod to be rolled out then navigate back to the application and attempt to book a flight. Now, you should be able to book a flight without any errors!
Observing your cluster and apps
Monitoring and observability are key components of running applications in production. With AKS Automatic, you get a lot of monitoring and observability features enabled out-of-the-box. You experienced some of these features when you ran queries to look for error logs in the application. Let's take a closer look at how you can monitor and observe your application and cluster.
At the start of the workshop, you set up the AKS Automatic cluster and integrated it with Azure Log Analytics Workspace for logging, Azure Monitor Managed Workspace for metrics collection, and Azure Managed Grafana for data visualization.
Now, you can also enable the Azure Monitor Application Insights for AKS feature to automatically instrument your applications with Azure Application Insights.
Application insights
Azure Monitor Application Insights is an Application Performance Management (APM) solution designed for real-time monitoring and observability of your applications. Leveraging OpenTelemetry (OTel), it collects telemetry data from your applications and streams it to Azure Monitor. This enables you to evaluate application performance, monitor usage trends, pinpoint bottlenecks, and gain actionable insights into application behavior. With AKS, you can enable the AutoInstrumentation feature which allows you to collect telemetry for your applications without requiring any code changes.
At the time of this writing, the AutoInstrumentation feature is in public preview. Please refer to the official documentation for the most up-to-date information.
You can enable the feature on your AKS cluster with the following command.
az aks update \
-g ${RG_NAME} \
-n myakscluster \
--enable-azure-monitor-app-monitoring
This can take a few minutes to complete.
With this feature enabled, you can now deploy a new Instrumentation custom resource to your AKS cluster to automatically instrument your applications without any modifications to the code.
Before proceeding, retrieve the Application Insights connection string from your Azure deployment by running the command below and saving the result to an environment variable.
APPLICATION_INSIGHTS_CONNECTION_STRING=$(az monitor app-insights component show \
  -g ${RG_NAME} \
  --query "[0].connectionString" \
  -o tsv)
If you don't have the app-insights extension available in your Azure CLI, you can install it by running the following command:
az extension add --name application-insights
Connect to the AKS cluster by running the following command.
az aks get-credentials -g ${RG_NAME} -n myakscluster
Now, you can deploy the Instrumentation custom resource to the AKS cluster.
kubectl apply -f - <<EOF
apiVersion: monitor.azure.com/v1
kind: Instrumentation
metadata:
  name: default
  namespace: dev
spec:
  settings:
    autoInstrumentationPlatforms:
    - NodeJs
  destination:
    applicationInsightsConnectionString: $APPLICATION_INSIGHTS_CONNECTION_STRING
EOF
This will deploy the Instrumentation custom resource called default and instrument all Node.js applications running in the dev namespace.
Since the cluster uses Microsoft Entra ID authentication with Azure RBAC, you will be asked to log in to your Azure account.
Now you need to restart the application pods to apply the changes. Run the following command.
kubectl rollout restart deployment contoso-air -n dev
Once the pods have restarted, you will notice an azure-monitor-auto-instrumentation-nodejs init container has been added to the pod. This container automatically instruments the application with Application Insights. You can review the full pod configuration by running the following command.
kubectl describe pods -n dev
This is a simple example of how to instrument applications across an entire namespace. You can also instrument individual deployments by deploying another Instrumentation custom resource with a different name, annotating the targeted deployment with the following annotation: "instrumentation.opentelemetry.io/inject-nodejs": "<name-of-instrumentation-resource>", and restarting the deployment. See the documentation for more details.
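As a hypothetical example, scoping instrumentation to a single deployment would look roughly like the fragment below, with the annotation added to the deployment's pod template (my-instrumentation is a placeholder for the name of your second Instrumentation resource):

```yaml
# Fragment of a Deployment manifest -- the annotation names a specific
# Instrumentation resource instead of relying on the namespace-wide default
spec:
  template:
    metadata:
      annotations:
        instrumentation.opentelemetry.io/inject-nodejs: "my-instrumentation"
```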
Now that the application is instrumented with Application Insights, you can view the application performance and usage metrics in the Azure portal.
Navigate to the Application Insights resource in your resource group.
Click on the Application map in the left-hand menu to view a high-level overview of the application components, their dependencies, and number of calls.
If the MongoDB does not appear in the application map, return to the Contoso Air website and book a flight to generate some data. Then, in the Application Map, click the Refresh button. The map will update in real time and should now display the MongoDB database connected to the application, along with the request latency to the database.
Click on the Live Metrics tab to view the live metrics for the application. Here you can see incoming and outgoing requests, response times, and exceptions in real-time.
Finally, click on the Performance tab to view the performance metrics for the application. Here you can see the average response time, request rate, and failure rate for the application.
Feel free to explore the other features of Application Insights and see how you can use it to monitor and observe your applications.
Container insights
AKS Automatic simplifies monitoring your cluster using Container Insights, which offers a detailed monitoring solution for your containerized applications running on AKS. It gathers and analyzes logs, metrics, and events from your cluster and applications, providing valuable insights into their performance and health.
To access this feature, navigate back to your AKS cluster in the Azure portal. Under the Monitoring section in the left-hand menu, click on Insights to view a high-level summary of your cluster's performance.
The AKS Automatic cluster was also pre-configured with basic CPU utilization and memory utilization alerts. You can also create additional alerts based on the metrics collected by the Prometheus workspace.
Click on the Recommended alerts (Preview) button to view the recommended alerts for the cluster. Expand the Prometheus community alert rules (Preview) section to see the list of Prometheus alert rules that are available. You can enable any of these alerts by clicking on the toggle switch.
Click Save to enable the alerts.
Workbooks and logs
With Container Insights enabled, you can query logs using Kusto Query Language (KQL) and create custom or pre-configured workbooks for data visualization. In the Monitoring section of the AKS cluster menu, click Workbooks to access pre-configured options. The Cluster Optimization workbook is particularly useful for identifying anomalies, detecting probe failures, and optimizing container resource requests and limits. Explore this and other available workbooks to monitor your cluster effectively.
The workbook visuals will include a query button that you can click to view the KQL query that powers the visual. This is a great way to learn how to write your own queries.
Refer back to the earlier step where we troubleshot the Contoso Air app using the Logs section in the left-hand menu. Here, you can create custom KQL queries or use pre-configured ones to analyze logs from your cluster and applications. The Queries hub offers a variety of pre-configured queries; simply navigate to the Container Logs table in the left-hand menu under All Queries, choose a query, and click Run to view the results.
Some of the queries might not have enough data to return results.
Visualizing with Grafana
The Azure Portal provides a great way to view metrics and logs, but if you prefer to visualize the data using Grafana, or execute complex queries using PromQL, you can use the Azure Managed Grafana instance that was created with the AKS Automatic cluster.
In the AKS cluster's left-hand menu, click on Insights under the Monitoring section, then click the View Grafana button at the top of the page. This opens a window for the linked Azure Managed Grafana instance. Click the Browse dashboards link to go to the Grafana instance.
Log into the Grafana instance, then on the Grafana home page click the Dashboards link in the left-hand menu. Here you will see a list of pre-configured dashboards that you can use to visualize the metrics collected by the Prometheus workspace.
In the Dashboards list, expand the Azure Managed Prometheus folder and explore the dashboards available. Each dashboard provides a different view of the metrics collected by the Prometheus workspace with controls to allow you to filter the data.
Click on the Kubernetes / Compute Resources / Workload dashboard.
Filter the namespace to dev, the type to deployment, and the workload to contoso-air. This will show you the metrics for the contoso-air deployment.
Querying metrics with PromQL
If you prefer to write your own queries to visualize the data, you can use the Explore feature in Grafana. In the Grafana home page, click on the Explore link in the left-hand menu, and select the Managed_Prometheus_defaultazuremonitorworkspace data source.
The query editor supports a graphical query builder and a text-based query editor. The graphical query builder is a great way to get started with PromQL. You can select the metric you want to query, the aggregation function, and any filters you want to apply.
There is a lot you can do with Grafana and PromQL, so take some time to explore the features and visualize the metrics collected by the Prometheus workspace.
Scaling your cluster and apps
Now that you have learned how to deploy applications to AKS Automatic and monitor your cluster and applications, let's explore how to scale your cluster and applications to handle the demands of your workloads effectively.
Right now, the application is running a single pod. When the web app is under heavy load, it may not be able to handle the requests. To automatically scale your deployments, you should use Kubernetes Event-driven Autoscaling (KEDA), which allows you to scale your application workloads based on utilization metrics, the number of events in a queue, or a custom schedule using CRON expressions.
But simply implementing KEDA is not enough. KEDA can try to deploy more pods, but if the cluster is out of resources, the pods will not be scheduled and will remain in Pending status.
With AKS Automatic, Node Autoprovisioning (NAP) is enabled and used in place of the traditional cluster autoscaler. NAP detects pods that are pending scheduling and automatically scales the node pool to meet demand. We won't go into the details of working with NAP in this workshop, but you can read more about it in the AKS documentation.
NAP will not only scale out additional nodes to meet demand, it will also find the most efficient VM configuration to host your workloads and scale nodes in when demand is low to save costs.
For the Kubernetes scheduler to efficiently schedule pods on nodes, it is best practice to include resource requests and limits in your pod configuration. The Automated Deployment setup added some default resource requests and limits to the pod configuration, but they may not be optimal. Knowing what to set the request and limit values to can be challenging. This is where the Vertical Pod Autoscaler (VPA) can help.
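For reference, a container-level requests and limits block looks like the following. The numbers are illustrative starting points for a small Node.js web app, not recommendations:

```yaml
# Example values only -- tune these for your actual workload
resources:
  requests:
    cpu: 100m        # the scheduler reserves 0.1 CPU core for this container
    memory: 256Mi
  limits:
    cpu: 500m        # the container is throttled above 0.5 CPU core
    memory: 512Mi    # the container is OOM-killed above 512 MiB
```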
Vertical Pod Autoscaler (VPA) setup
VPA is a Kubernetes resource that allows you to automatically adjust the CPU and memory requests and limits for your pods based on the actual resource utilization of the pods. This can help you optimize the resource utilization of your pods and reduce the risk of running out of resources.
AKS Automatic comes with the VPA controller pre-installed, so you can use the VPA resource immediately by simply deploying a VPA resource manifest to your cluster.
Navigate to the Custom resource section under Kubernetes resources in the AKS cluster left-hand menu. Scroll down to the bottom of the page and click on the Load more button to view all the available custom resources.
Click on the VerticalPodAutoscaler resource to view the VPA resources in the cluster.
Click on the + Create button where you'll see an Add with YAML editor.
Not sure what to add here? No worries! You can lean on Microsoft Copilot in Azure to help generate the VPA manifest.
Click in the text editor or press Alt + I to open the Copilot editor.
In the Draft with Copilot text box, type in the following prompt:
Help me create a vertical pod autoscaler manifest for the contoso-air deployment in the dev namespace and set min and max cpu and memory to something typical for a nodejs app. Please apply the values for both requests and limits.
Press Enter to generate the VPA manifest.
When the VPA manifest is generated, click the Accept all button to accept the changes, then click Add to create the VPA resource.
Microsoft Copilot in Azure may provide different results. If your results are different, simply copy the following VPA manifest and paste it into the Add with YAML editor.
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: contoso-air-vpa
  namespace: dev
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: contoso-air
  updatePolicy:
    updateMode: Auto
  resourcePolicy:
    containerPolicies:
    - containerName: contoso-air
      minAllowed:
        cpu: 100m
        memory: 256Mi
      maxAllowed:
        cpu: 1
        memory: 512Mi
      controlledResources: ["cpu", "memory"]
The VPA resource will only update the CPU and memory requests and limits for the pods in a deployment if the number of replicas is greater than 1. Also, pods are restarted when the VPA updates their configuration, so it is important to create Pod Disruption Budgets (PDBs) to ensure that the pods are not all restarted at once.
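For example, a PDB for the contoso-air deployment could look like the sketch below. This assumes the pods carry an app: contoso-air label; check the labels your deployment actually uses before applying it:

```yaml
# Hypothetical PDB: keep at least one contoso-air pod available during
# voluntary disruptions such as VPA-triggered restarts or node drains
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: contoso-air-pdb
  namespace: dev
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app: contoso-air   # assumed pod label -- verify against your deployment
```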
KEDA scaler setup
AKS Automatic also comes with the KEDA controller pre-installed, so you can use the KEDA resource immediately by simply deploying a KEDA scaler to your cluster.
Navigate to Application scaling under Settings in the AKS cluster left-hand menu, then click on the + Create button.
In the Basics tab, enter the following details:
- Name: Enter contoso-air-so
- Namespace: Select dev
- Target workload: Select contoso-air
- Minimum replicas: Enter 3
- Maximum replicas: Enter 10
- Trigger type: Select CPU
Leave the rest of the fields as their default values and click Next.
In the Review + create tab, click Customize with YAML to view the YAML manifest the AKS portal generated for the ScaledObject resource. Here you can add additional configuration to the ScaledObject if needed.
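The generated manifest should look roughly like the sketch below. The trigger's target utilization value is an assumption here; the portal fills in its own default:

```yaml
# Approximation of the portal-generated ScaledObject (target value assumed)
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: contoso-air-so
  namespace: dev
spec:
  scaleTargetRef:
    name: contoso-air
  minReplicaCount: 3
  maxReplicaCount: 10
  triggers:
  - type: cpu
    metricType: Utilization
    metadata:
      value: "70"   # assumed target average CPU utilization percentage
```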
Click Save and create to create the ScaledObject resource.
Head over to the Workloads section in the left-hand menu under Kubernetes resources. In the Filter by namespace drop down list, select dev. You should see the contoso-air deployment is now running (or starting) 3 replicas.
Now that the number of replicas has been increased, the VPA resource will be able to adjust the CPU and memory requests and limits for the pods in the deployment based on the actual resource utilization of the pods the next time it reconciles.
This was a simple example of using KEDA. The real power of KEDA comes from its ability to scale your application based on external metrics. There are many scalers available for KEDA that you can use to scale your application based on a variety of external metrics.
If you have time, try to run a simple load test to see the scaling in action. You can use the hey tool to generate some traffic to the application.
If you don't have the hey tool installed, check out the installation guide and follow the instructions for your operating system.
Run the following command to generate some traffic to the application:
hey -z 30s -c 100 http://<REPLACE_THIS_WITH_CONTOSO_AIR_SERVICE_IP>:3000
This will generate some traffic to the application for 30 seconds. You should see the number of replicas for the contoso-air deployment increase as the load increases.
Summary
In this workshop, you learned how to create an AKS Automatic cluster and deploy an application to the cluster using Automated Deployments. From there, you learned how to troubleshoot application issues using the Azure portal and how to integrate applications with Azure services using the AKS Service Connector. You also learned how to enable application monitoring with AutoInstrumentation using Azure Monitor Application Insights, which provides deep visibility into your application's performance without requiring any code changes. Additionally, you explored how to configure your applications for resource-specific scaling using the Vertical Pod Autoscaler (VPA) and how to scale your applications with KEDA. Hopefully, you now have a better understanding of how easy it can be to build and deploy applications on AKS Automatic.
To learn more about AKS Automatic, visit the AKS documentation and check out our other AKS Automatic lab in this repo to explore more features of AKS.
In addition to this workshop, you can also explore the following resources:
- Azure Kubernetes Service (AKS) documentation
- Kubernetes: Getting started
- Learning Path: Introduction to Kubernetes on Azure
- Learning Path: Deploy containers by using Azure Kubernetes Service (AKS)
If you have any feedback or suggestions for this workshop, please feel free to open an issue or pull request in the GitHub repository.