We’re excited to share that Radius now supports deploying your applications to additional container platforms, beginning with Azure Container Instances. Radius enables platform engineers to build internal developer platforms that improve collaboration between enterprise application teams and platform engineering teams. As a CNCF open-source application platform, an important part of the Radius vision is to be platform agnostic, including the underlying compute, so that Radius can deploy the same application across different compute platforms. This integration provides Radius users with a serverless container compute option that enables platform engineers to build developer platforms that are decoupled from specific container runtimes while still benefiting from the Radius application-centric approach and separation of concerns.
To see a demo of this feature, check out the recording of Mark Russinovich’s Inside Azure Innovations session from the Microsoft Build 2025 event.
In this post, we’ll walk through, at a high level, how to deploy your Radius applications to Azure Container Instances, as well as explore specific details behind the integration. For a more detailed guide, check out the How-To: Deploy an Application to Azure Container Instances guide in the Radius documentation.
What is Azure Container Instances?
Azure Container Instances (ACI) is a serverless container platform in Microsoft Azure that allows users to run containerized applications without managing underlying infrastructure, such as virtual machines or complex orchestration systems. It provides a lightweight, unopinionated compute environment where containers can be deployed quickly, starting in seconds, with configurable CPU and memory resources. ACI is well-suited for cloud-native applications, task automation, build jobs, or any scenario where a lightweight, fast-starting container is beneficial without the overhead of managing a full orchestration system. To learn more about ACI, check out the official documentation from Azure.
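For context, and independent of Radius, a single container instance can be launched with one Azure CLI command. The sketch below uses placeholder names and one of Azure’s public sample images:

# Placeholder resource group and container name; the image is a public Azure sample
az container create \
  --resource-group my-resource-group \
  --name hello-aci \
  --image mcr.microsoft.com/azuredocs/aci-helloworld \
  --cpu 1 \
  --memory 1.5 \
  --ports 80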
Why ACI with Radius?
Radius brings an application model to ACI that abstracts away the deployment process, allowing developers to focus on application logic while leaving infrastructure setup and provisioning to platform engineers or IT operators. For developers, the experience is streamlined: they can use the rad CLI to deploy to ACI without needing to understand the underlying infrastructure complexities. Platform teams are able to enforce cost, operations, and security requirements as a part of the development workflow. Since Radius is platform-agnostic, the same application definition can be used to deploy across different compute environments—whether it’s Kubernetes, ACI, or future platforms like ECS—making it easier to evolve the architecture without rewriting deployment logic. This approach reduces vendor lock-in and simplifies multi-cloud or hybrid strategies.
Deploying to ACI using Radius
This section provides a high-level overview of the deployment process. For a step-by-step guide, see the How-To: Deploy an Application to Azure Container Instances guide in the Radius documentation.
To deploy your Radius applications to ACI, you’ll first need to ensure you have an Azure provider configured and registered with your Radius control plane. If you haven’t set up Radius yet, you can install it using the rad CLI and connect it to your Azure subscription. For detailed instructions, refer to the Radius installation and Azure provider guides.
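As a rough sketch of that setup (exact commands and flags can vary by Radius version, so treat this as illustrative and follow the linked guides for the authoritative steps):

# Install the Radius control plane into your current Kubernetes context
rad install kubernetes

# Initialize a workspace and environment interactively
rad init

# Register an Azure service principal credential with the control plane
# (placeholder IDs; see the Azure provider guide for details)
rad credential register azure sp \
  --client-id <client-id> \
  --client-secret <client-secret> \
  --tenant-id <tenant-id>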
Create an ACI environment
Once your Azure provider is set up, you can create a Radius Environment that is configured to use ACI as its underlying compute. This environment will be the deployment target for your applications bound for ACI. Creating the ACI environment is the same as creating any other environment in Radius, except that you specify ACI as the compute platform, along with other required configuration, in the environment definition file, for example:
resource env 'Applications.Core/environments@2023-10-01-preview' = {
  name: 'aci-demo'
  properties: {
    compute: {
      kind: 'aci'
      // Replace value with your resource group ID
      resourceGroup: '/subscriptions/<subscription-id>/resourceGroups/<resource-group>'
      identity: {
        kind: 'userAssigned'
        // Replace value with your managed identity resource ID
        managedIdentity: ['/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<identity-name>']
      }
    }
    providers: {
      azure: {
        // Replace value with your resource group ID
        scope: '/subscriptions/<subscription-id>/resourceGroups/<resource-group>'
      }
    }
  }
}
Note that a managed identity is required for ACI deployments. If you choose to use a user-assigned managed identity, you need to ensure it is assigned the Contributor and Azure Container Instances Contributor roles on the subscription and resource group where the ACI containers will be deployed.
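For example, the role assignments for a user-assigned identity might look like the following with the Azure CLI. The principal ID and scope are placeholders, and you should verify the exact built-in role names in your subscription:

# Placeholder principal ID and scope; verify the exact built-in role names
az role assignment create \
  --assignee <identity-principal-id> \
  --role "Contributor" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"

az role assignment create \
  --assignee <identity-principal-id> \
  --role "Azure Container Instances Contributor" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"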
Then, just like any other Radius environment, you deploy this ACI environment using the rad deploy command. When Radius creates and deploys the environment, it will provision the relevant Azure resources required to host your applications in ACI, including the virtual network, internal load balancer, and network security group.
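For instance, if the environment definition above is saved as env.bicep (a file name chosen here for illustration), the deployment is a single command:

rad deploy ./env.bicep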
Define and deploy your application
With your environment ready, you can proceed to deploy your application to ACI without changing how you define your applications in Radius. Your Radius application definition includes your container specifications, environment variables, and any required connections to other resources. The beauty of Radius is that the application definitions remain consistent regardless of whether you’re targeting Kubernetes or ACI.
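As a minimal sketch of what such a definition can look like, here is a hypothetical frontend container wired to an application. The resource names and image are illustrative, and the Bicep extension declaration and exact schema may differ slightly across Radius versions:

extension radius

// The ID of the target environment (for example, the aci-demo environment above)
param environment string

resource app 'Applications.Core/applications@2023-10-01-preview' = {
  name: 'demo-app'
  properties: {
    environment: environment
  }
}

resource frontend 'Applications.Core/containers@2023-10-01-preview' = {
  name: 'frontend'
  properties: {
    application: app.id
    container: {
      // Illustrative container image
      image: 'ghcr.io/example/frontend:latest'
      ports: {
        web: {
          containerPort: 3000
        }
      }
    }
  }
}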
Once your application is defined, you can deploy it using the rad deploy command, specifying ACI as your target platform.
For example, if you have an application defined in a Bicep file named app.bicep, you can deploy it to your ACI environment like this:
rad deploy ./app.bicep --environment aci-demo
Alternatively, if you have a workspace set up for ACI, you can deploy your application using the workspace flag:
rad deploy ./app.bicep --workspace aci-workspace
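If that workspace doesn’t exist yet, it can be created against the ACI environment first; the command below is a sketch, and exact flags may vary by Radius version:

rad workspace create kubernetes aci-workspace --environment aci-demo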
Once your deployment completes, you can run the rad app graph command in your terminal to view resources that were provisioned for your application:
Displaying application: demo-app
Name: frontend (Applications.Core/containers)
Connections:
gateway (Applications.Core/gateways) -> frontend
frontend -> database (Applications.Datastores/redisCaches)
Resources:
frontend (Microsoft.ContainerInstance/containerGroupProfiles)
frontend (Microsoft.ContainerInstance/nGroups)
frontend (Microsoft.Network/loadBalancers/applications)
frontend (Microsoft.Network/virtualNetworks/subnets)
Name: gateway (Applications.Core/gateways)
Connections:
gateway -> frontend (Applications.Core/containers)
Resources:
gateway (Microsoft.Network/applicationGateways)
gateway-nsg (Microsoft.Network/networkSecurityGroups)
gateway (Microsoft.Network/publicIPAddresses)
gateway (Microsoft.Network/virtualNetworks/subnets)
Name: database (Applications.Datastores/redisCaches)
Connections:
frontend (Applications.Core/containers) -> database
Resources:
cache-vxkt2iou25nht (Microsoft.Cache/redis)
The entire process leverages Radius’s application-centric approach, allowing you to focus on defining what your application needs rather than the underlying infrastructure details specific to ACI.
How it works
Currently, ACI support is hardcoded as imperative Go code in the Radius core codebase, including the API, Recipes, data model, and other components. The Environment and Container resource schemas were updated to include ACI-specific properties. If you’re interested in diving deeper into the implementation details, you can refer to the code changes in PR #9436 from the Radius repo.
ACI NGroups
The Radius integration leverages the ACI NGroups functionality, which lets a single API call create and maintain N container instances from a common template. This orchestration capability is what makes it possible for Radius to deploy application containers and NGroups resources to ACI.
Azure resources provisioned by Radius
Behind the scenes, Radius handles the translation of your application model into the appropriate Azure resources, including container groups and networking components, and provisions them accordingly on your behalf:
- Load balancer: ACI requires an internal load balancer to manage traffic to the container instances. Radius provisions a load balancer that routes traffic to your application containers.
- Virtual Network: ACI requires a virtual network for networking and security. Radius provisions a virtual network and subnet for your ACI deployments.
- Network Security Group: ACI deployments require a network security group to control inbound and outbound traffic. Radius creates a security group with appropriate rules based on your application requirements.
- Container Group Profiles: ACI supports container group profiles, which allow you to define common settings for multiple container groups. Radius sets up these profiles based on your application definitions, enabling consistent configurations across deployments.
- Container NGroups: Radius creates container NGroups to manage multiple instances of your application containers.
- Container Instances: The actual container instances are created based on your application definitions, including the container images, environment variables, and resource requirements.
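If you want to confirm what was created, you can also inspect the target resource group directly with the Azure CLI (placeholder resource group name):

# List everything Radius provisioned in the target resource group
az resource list --resource-group <resource-group> --output table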
What’s Next?
This initial release of ACI support in Radius is just the beginning. The vision is to implement a compute platform extensibility model that allows Radius to support additional container runtimes in a more lightweight, flexible, and declarative way, replacing the current imperative code.
To learn more about or provide feedback on this new compute platform extensibility model, check out the Compute Platform Extensibility design document currently in progress.
Redesigned compute platform extensibility model
Because the ACI integration is currently implemented as imperative code, it is not readily extensible to other platforms. Supporting each new platform requires intimate knowledge of the Radius codebase in order to make the necessary changes. To address this, the Radius maintainers plan to refactor the ACI integration to use Radius Recipes, so that ACI (and other platforms going forward) can be supported in a more declarative and extensible way.
This new design will enable:
- Architectural separation of Radius core logic from platform provisioning code
- Community-provided extensions to support new compute platforms without Radius code changes
- Consistent platform engineering and developer experience across all resource types
Support for platform-specific capabilities
As part of the new extensibility model, the plan is to let Radius users access platform-specific capabilities in their applications. This means that while Radius will continue to provide a consistent application model across different platforms, users will also be able to leverage unique features of each platform when targeting deployments to applicable environments. For example, Radius users should be able to deploy to confidential containers when targeting a deployment to an ACI environment.
Learn more and contribute
- Review and provide feedback on the Radius Compute Platform Extensibility design document
- Join the Radius monthly community meeting to see demos and hear the latest updates
- Join the discussion or ask for help on the Radius Discord server
- Subscribe to the Radius YouTube channel for more demos