# Advanced Setup
This section provides advanced setup instructions for running the application either locally or in a containerized environment using Docker. It also covers how to deploy the application to Azure using the Azure Developer CLI (`azd`) and Bicep infrastructure-as-code configuration.
## Preview the application

To run the application locally, clone the repository and run the preview script. This sets up the necessary environment and starts the application.

We also recommend forking the repository to your own GitHub account so you can make changes and experiment with the code.

**Using HTTPS:**

```sh
git clone https://github.com/YOUR-USERNAME/azure-ai-travel-agents.git
```

**Using SSH:**

```sh
git clone git@github.com:YOUR-USERNAME/azure-ai-travel-agents.git
```

**Using GitHub CLI:**

```sh
gh repo clone YOUR-USERNAME/azure-ai-travel-agents
```

## Using Local LLM Providers

If you want to use a local LLM provider such as Azure Foundry Local, Ollama, or Docker models, set the `LLM_PROVIDER` environment variable in the `./src/api/.env` file to one of the supported providers, and provide the configuration that provider needs. This configures the application to use the specified local LLM provider (a sketch of how this selection might work follows the list below).

The application supports the following local LLM providers:

- **Azure Foundry Local**: runs models locally using Azure's AI Foundry Local service.
- **Ollama**: runs models locally using the Ollama service.
- **Docker Models**: runs models locally using Docker's Model Runner service. Make sure to install Docker Desktop v4.42.0 (195023) or later (Docker Engine 28.2.2 or later) to use this feature.
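As a rough illustration of how this selection can work (a minimal sketch, not the project's actual code; the `createLLM` helper is hypothetical), the API can map `LLM_PROVIDER` to an OpenAI-compatible client pointed at the chosen local endpoint:

```ts
// llm-provider.ts — hypothetical sketch of selecting a local provider
// from the LLM_PROVIDER environment variable described in this guide.
import OpenAI from "openai";

export function createLLM(): OpenAI {
  switch (process.env.LLM_PROVIDER) {
    case "docker-models":
      return new OpenAI({
        // e.g. http://localhost:12434/engines/llama.cpp/v1
        baseURL: process.env.DOCKER_MODEL_ENDPOINT,
        apiKey: "not-needed-for-local-models",
      });
    case "ollama-models":
      return new OpenAI({
        // e.g. http://localhost:11434/v1
        baseURL: process.env.OLLAMA_MODEL_ENDPOINT,
        apiKey: "not-needed-for-local-models",
      });
    // "foundry-local" resolves its endpoint dynamically via the Foundry
    // Local SDK, which is omitted from this sketch.
    default:
      throw new Error(`Unsupported LLM_PROVIDER: ${process.env.LLM_PROVIDER}`);
  }
}
```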
### Using Azure Foundry Local
Before using Azure Foundry Local, ensure you have Azure AI Foundry Local installed and running. You can find a list of available models by running the following command in your terminal:

```sh
foundry model list
```
Then set the following environment variables in your `./src/api/.env` file:

```env
LLM_PROVIDER=foundry-local
AZURE_FOUNDRY_LOCAL_MODEL_ALIAS=phi-4-mini-reasoning
```
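If the model alias hasn't been downloaded yet, you can fetch it and try it out before starting the API (assuming the standard Foundry Local CLI verbs; adjust the alias to the model you chose):

```sh
# Downloads the model on first use and starts an interactive chat session,
# confirming it runs locally before wiring it into the API.
foundry model run phi-4-mini-reasoning
```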
### Using Docker Models
#### Prerequisites

- Git (for cloning the repository)
- Node.js (for the UI and API services)
- Docker Desktop v4.42.0 or later (for the MCP servers)
- The `ai/phi4:14B-Q4_0` model (7.80 GB)
  - This is the model variant from the Phi-4 family that supports function calling, which is required for the application to work.
#### Start the application

- Run the preview script from the root of the project:

  For Linux and macOS users:

  ```sh
  ./preview.sh
  ```

  For Windows users:

  ```powershell
  .\preview.ps1
  ```

- Start the API service by running the following command in a terminal:

  ```sh
  npm start --prefix=src/api
  ```

- Open a new terminal and start the UI service by running the following command:

  ```sh
  npm start --prefix=src/ui
  ```

- Once all services are up and running, you can access the UI at http://localhost:4200.

  You can also view the traces via the Aspire Dashboard at http://localhost:18888.

  - On the **Structured** tab you'll see the logging messages from the `tool-echo-ping` and `api` services.
  - The **Traces** tab shows the traces across the services, such as the call from `api` to `echo-agent`.
Before using Docker Models, ensure you have the Docker Model Runner installed and running. You can find a list of available models by running the following command in your terminal:

```sh
docker model list
```
Then set the following environment variables in your `./src/api/.env` file:

```env
LLM_PROVIDER=docker-models
# DOCKER_MODEL_ENDPOINT=http://model-runner.docker.internal/engines/llama.cpp/v1
# Use the following endpoint if you are running the model runner locally (default port is 12434)
DOCKER_MODEL_ENDPOINT=http://localhost:12434/engines/llama.cpp/v1
DOCKER_MODEL=ai/smollm2
```
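To sanity-check the endpoint before starting the API, you can pull the model and send a plain OpenAI-compatible request (the prompt below is just a placeholder):

```sh
# Pull the model, then test the Model Runner's OpenAI-compatible endpoint.
docker model pull ai/smollm2
curl http://localhost:12434/engines/llama.cpp/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "ai/smollm2", "messages": [{"role": "user", "content": "Hello"}]}'
```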
### Using Ollama Models
Before using Ollama Models, ensure you have Ollama installed and running. You can find a list of available models by running the following command in your terminal:

```sh
ollama list
```
Then set the following environment variables in your `./src/api/.env` file:

```env
LLM_PROVIDER=ollama-models
OLLAMA_MODEL_ENDPOINT=http://localhost:11434/v1
OLLAMA_MODEL=llama3.1
```
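If the model isn't available locally yet, pull it first; you can then verify Ollama's OpenAI-compatible endpoint the same way (placeholder prompt):

```sh
# Download the model, then test Ollama's OpenAI-compatible endpoint.
ollama pull llama3.1
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "llama3.1", "messages": [{"role": "user", "content": "Hello"}]}'
```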
## Using GitHub Codespaces

You can run this project directly in your browser by using GitHub Codespaces, which opens a web-based VS Code instance.
## Using a VS Code dev container

A similar option to Codespaces is VS Code Dev Containers, which opens the project in your local VS Code instance using the Dev Containers extension.

You will also need Docker installed on your machine to run the container.
## Running the MCP servers in a containerized environment
The included MCP servers are built using various technologies, such as Node.js, Python, and .NET. Each service has its own Dockerfile and is configured to run in a containerized environment.
To build and start all MCP server containers (defined in the `src/docker-compose.yml` file), run the following command:

```sh
docker compose -f src/docker-compose.yml up --build -d
```
This command builds and starts all the services defined in the `docker-compose.yml` file, including the UI and API services.
If you want to run only the MCP server containers, you can use the following command:

```sh
docker compose -f src/docker-compose.yml up --build -d --no-deps customer-query destination-recommendation itinerary-planning echo-ping
```
Alternatively, if you're in VS Code you can use the **Run Task** command (Ctrl+Shift+P) and select the **Run AI Travel Agents** task.
> [!IMPORTANT]
> When running the application in a containerized environment, you will not be able to make changes to the code and see them reflected in the running services. This is because the code is copied into the container during the build process; any changes made on your local machine will not appear until you rebuild the containers with `docker compose up --build`.
### Environment Variables setup for containerized services
The application uses environment variables to configure the services. You can set them in a `.env` file in the root directory or directly in your terminal. We recommend the following approach:
- Create a `.env` file for each containerized service under `src/`, and optionally a `.env.docker` file for Docker-specific configurations:
  - `src/ui/.env`
  - `src/ui/.env.docker`
  - `src/api/.env`
  - `src/api/.env.docker`
  - `src/tools/customer-query/.env`
  - `src/tools/customer-query/.env.docker`
  - `src/tools/destination-recommendation/.env`
  - `src/tools/destination-recommendation/.env.docker`
  - `src/tools/itinerary-planning/.env`
  - `src/tools/itinerary-planning/.env.docker`
  - `src/tools/code-evaluation/.env`
  - `src/tools/code-evaluation/.env.docker`
  - `src/tools/model-inference/.env`
  - `src/tools/model-inference/.env.docker`
  - `src/tools/web-search/.env`
  - `src/tools/web-search/.env.docker`
  - `src/tools/echo-ping/.env`
  - `src/tools/echo-ping/.env.docker`
  `.env.docker` files are used to set environment variables for Docker containers. These files should contain the same variables as the `.env` files, but with values specific to the Docker environment. For example:
  ```env
  # src/api/.env
  MCP_CUSTOMER_QUERY_URL=http://localhost:8080

  # src/api/.env.docker
  MCP_CUSTOMER_QUERY_URL=http://tool-customer-query:8080
  ```
- Load the environment variable files in `docker-compose.yml` using the `env_file` directive, in the following order:
  ```yaml
  web-api:
    container_name: web-api
    # ...
    env_file:
      - "./api/.env"
      - "./api/.env.docker" # override .env with .env.docker
  ```
> [!NOTE]
> Adding an `environment:` directive to the `docker-compose.yml` file will override the environment variables set in the `.env.*` files.
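For example, values set under `environment:` take precedence over anything loaded via `env_file` (illustrative snippet):

```yaml
web-api:
  env_file:
    - "./api/.env"
  environment:
    # This value wins over MCP_CUSTOMER_QUERY_URL from ./api/.env
    - MCP_CUSTOMER_QUERY_URL=http://tool-customer-query:8080
```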
## Deploy to Azure
### Prerequisites
Ensure you have the following installed before deploying the application:
### Deploy the application
To deploy the application to Azure, you can use the provided `azd` and Bicep infrastructure-as-code configuration (see the `/infra` folder). The `azd` CLI is a command-line interface for deploying applications to Azure. It simplifies the process of provisioning, deploying, and managing Azure resources.
To deploy the application, follow these steps:

- Open a terminal and navigate to the root directory of the project.
- Run the following command to sign in with the Azure Developer CLI:

  ```sh
  azd auth login
  ```

- Run the following command to deploy the application:

  ```sh
  azd up
  ```

This command provisions the necessary Azure resources and deploys the application to Azure. To troubleshoot any issues, see the Troubleshooting section below.
### Configure environment variables for running services

Configure environment variables for the running services by updating `main.parameters.json`.
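For reference, entries in `infra/main.parameters.json` follow the standard ARM parameters-file format and can pull values from `azd` environment variables; the `location` entry below is illustrative, not the project's full file:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "location": {
      "value": "${AZURE_LOCATION}"
    }
  }
}
```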
### Configure CI/CD pipeline

Run `azd pipeline config` to configure the deployment pipeline to connect securely to Azure:
- Deploying with **GitHub Actions**: Select `GitHub` when prompted for a provider. If your project lacks the `azure-dev.yml` file, accept the prompt to add it and proceed with the pipeline configuration.
- Deploying with **Azure DevOps Pipeline**: Select `Azure DevOps` when prompted for a provider. If your project lacks the `azure-dev.yml` file, accept the prompt to add it and proceed with the pipeline configuration.
### What's included in the infrastructure configuration

#### Infrastructure configuration

To describe the infrastructure and application, an `azure.yaml` file along with Infrastructure-as-Code Bicep files were added with the following directory structure:
```text
- azure.yaml        # azd project configuration
- infra/            # Infrastructure-as-code Bicep files
  - main.bicep      # Subscription level resources
  - resources.bicep # Primary resource group resources
  - modules/        # Library modules
```
The resources declared in `resources.bicep` are provisioned when running `azd up` or `azd provision`. This includes:
- Azure Container App to host the 'api' service.
- Azure Container App to host the 'ui' service.
- Azure Container App to host the 'itinerary-planning' service.
- Azure Container App to host the 'customer-query' service.
- Azure Container App to host the 'destination-recommendation' service.
- Azure Container App to host the 'echo-ping' service.
- Azure OpenAI resource to host the 'model-inference' service.
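For orientation, `main.bicep` runs at subscription scope and typically creates the resource group that `resources.bicep` then populates. A minimal, illustrative sketch (parameter names and default values are placeholders, not the project's actual file):

```bicep
targetScope = 'subscription'

@description('Primary location for all resources')
param location string

@description('Name of the resource group that resources.bicep populates')
param resourceGroupName string = 'rg-ai-travel-agents'

resource rg 'Microsoft.Resources/resourceGroups@2022-09-01' = {
  name: resourceGroupName
  location: location
}
```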
For more information, see the Bicep language documentation.
## Troubleshooting
**Q: I visited the service endpoint listed, and I'm seeing a blank page, a generic welcome page, or an error page.**

A: Your service may have failed to start, or it may be missing some configuration settings. To investigate further:

- Run `azd show`. Click on the link under "View in Azure Portal" to open the resource group in the Azure Portal.
- Navigate to the specific Container App service that is failing to deploy.
- Click on the failing revision under "Revisions with Issues".
- Review "Status details" for more information about the type of failure.
- Observe the log outputs from Console log stream and System log stream to identify any errors.
- If logs are written to disk, use Console in the navigation to connect to a shell within the running container.
**Q: I tried to provision or deploy the application, but it failed with an error.**

Deployment error details:

```text
InvalidTemplateDeployment: The template deployment 'openai' is not valid according to the validation procedure. The tracking id is 'xxxxxxxx-xxx-xxxx-xxxx-xxxxxxxxxxxx'. See inner errors for details.
SpecialFeatureOrQuotaIdRequired: The subscription does not have QuotaId/Feature required by SKU 'S0' from kind 'AIServices' or contains blocked QuotaId/Feature.
```
A: This error indicates that the Azure OpenAI service is not available in your subscription or region. To resolve this, you can either:

- Request access to the Azure OpenAI service by following the instructions in the Azure OpenAI Service documentation.
- Deploy to a region where the required SKU is available, by updating the `location` (or `AZURE_LOCATION`) parameter in the `main.parameters.json` file under the `infra` folder.
- If you are using a free Azure subscription, consider upgrading to a paid subscription that supports the Azure OpenAI service.
**Q: I deployed the application, but the UI is not loading or showing errors.**
A: This could be due to several reasons, such as misconfigured environment variables, network issues, or service dependencies not being available. To troubleshoot:
- Check the logs of the UI service in Azure Portal to see if there are any errors or warnings.
- Ensure that all required environment variables are set correctly in the Azure Portal under the Container App settings.
- Verify that all dependent services (like the API, customer query, etc.) are running and accessible.

For more troubleshooting information, visit the Container Apps troubleshooting documentation.
**Q: Error: `FunctionAgent must have at least one tool`**

A: This error indicates that your MCP servers are not running. Ensure that you have started them using the `docker compose up` command as described in the "Running the MCP servers in a containerized environment" section above. If the services are running, check their logs for any errors or issues that might prevent them from functioning correctly.
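A quick way to check the state of the MCP server containers (using the compose file from earlier in this guide):

```sh
# All MCP server containers should report a "running" status.
docker compose -f src/docker-compose.yml ps
```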
## Additional information

For additional information about setting up your `azd` project, visit the official documentation.