Introduction
Ever found yourself wishing your web interface could really talk and listen back to you? With a few clicks (and a bit of code), you can turn your plain Open WebUI into a full-on voice assistant. In this post, you’ll see how to spin up an Azure Speech resource, hook it into your frontend, and watch as user speech transforms into text and your app’s responses leap off the screen in a human-like voice.
By the end of this guide, you’ll have a voice-enabled web UI that actually converses with users, opening the door to hands-free controls, better accessibility, and a genuinely richer user experience. Ready to make your web app speak? Let’s dive in.
Why Azure AI Speech?
We use the Azure AI Speech service in Open WebUI to enable voice interactions directly within web applications. This allows users to:
- Speak commands or input instead of typing, making the interface more accessible and user-friendly.
- Hear responses or information read aloud, which improves usability for people with visual impairments or those who prefer audio.
- Provide a more natural, hands-free experience, especially on devices like smartphones or tablets.
In short, integrating the Azure AI Speech service into Open WebUI makes web apps smarter, more interactive, and easier to use by adding speech recognition and voice output features.
If you haven’t hosted Open WebUI yet, follow my other step-by-step guide to host Ollama WebUI on Azure; if you already have Open WebUI deployed, proceed to the next step. Learn more about Open WebUI here.
Deploy the Azure AI Speech service in Azure
Navigate to the Azure Portal and search for Azure AI Speech in the portal search bar. Create a new Speech service by filling in the fields on the resource creation page, then click “Create” to finalize the setup.
After the resource has been deployed, click the “View resource” button and you should be redirected to the Azure AI Speech service page. The page displays the API keys and endpoints for the service, which you’ll use in Open WebUI.
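Before wiring the key into Open WebUI, you can optionally sanity-check it. The sketch below, in Python, requests a short-lived access token from the Speech service’s standard issueToken endpoint; an HTTP 200 response confirms the key and region are valid. The key and region values are placeholders for your own.

```python
import requests

SPEECH_KEY = "<your-azure-speech-key>"   # placeholder: paste your API key
SPEECH_REGION = "eastus"                 # placeholder: the region you deployed to

# The Speech service exchanges a valid subscription key for a short-lived
# access token; HTTP 200 confirms the key and region are correct.
url = f"https://{SPEECH_REGION}.api.cognitive.microsoft.com/sts/v1.0/issueToken"
response = requests.post(url, headers={"Ocp-Apim-Subscription-Key": SPEECH_KEY})

if response.status_code == 200:
    print("Key is valid; token issued.")
else:
    print(f"Check your key/region: HTTP {response.status_code}")
```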
Setting things up in Open WebUI
Speech-to-Text settings (STT)
Head to the Open WebUI Admin page > Settings > Audio. Paste the API key obtained from the Azure AI Speech service page into the API key field below.
Unless you’re using a different Azure region or want to change the default STT configuration, leave the remaining settings blank.
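If you’d like to confirm STT works with your key and region independently of Open WebUI, here’s a minimal sketch using Azure’s Speech SDK for Python (pip install azure-cognitiveservices-speech); the key and region are placeholders:

```python
import azure.cognitiveservices.speech as speechsdk

# Placeholders: substitute the key and region from your Speech resource
speech_config = speechsdk.SpeechConfig(
    subscription="<your-azure-speech-key>", region="eastus"
)
speech_config.speech_recognition_language = "en-US"

# Recognize a single utterance from the default microphone
recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config)
print("Speak into your microphone...")
result = recognizer.recognize_once()

if result.reason == speechsdk.ResultReason.RecognizedSpeech:
    print(f"Recognized: {result.text}")
else:
    print(f"Recognition did not succeed: {result.reason}")
```

This hits the same Azure service Open WebUI will call, so if the snippet transcribes your speech, the STT settings above should work too.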
Text-to-Speech settings (TTS)
Now, let’s configure the TTS settings in Open WebUI by switching the TTS Engine to the Azure AI Speech option. Again, paste the API key obtained from the Azure AI Speech service page and leave the remaining settings blank.
You can change the TTS voice from the dropdown in the TTS settings, as depicted in the image below:
Click Save to apply the change.
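To hear a voice outside the UI first, a short Speech SDK sketch can synthesize a test sentence to your default speaker. The voice name below is just one example from the same catalog that backs the dropdown; the key and region are placeholders:

```python
import azure.cognitiveservices.speech as speechsdk

# Placeholders: substitute the key and region from your Speech resource
speech_config = speechsdk.SpeechConfig(
    subscription="<your-azure-speech-key>", region="eastus"
)
# Any neural voice from the TTS dropdown works here; this is one example
speech_config.speech_synthesis_voice_name = "en-US-JennyNeural"

# Synthesize a test sentence to the default speaker
synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)
result = synthesizer.speak_text_async("Hello from Azure AI Speech!").get()

if result.reason == speechsdk.ResultReason.SynthesizingAudioCompleted:
    print("Synthesis completed.")
else:
    print(f"Synthesis did not complete: {result.reason}")
```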
Expected Result
Now, let’s test that everything works. Open a new chat (or temporary chat) in Open WebUI and click the Call/Record button.
The STT engine (Azure AI Speech) should transcribe your voice, and the app should respond based on the voice input.
To test the TTS feature, click the Read Aloud (speaker) icon under any response in Open WebUI.
The response should now be read aloud by the Azure AI Speech TTS engine!
Conclusion
And that’s a wrap! You’ve just given your Open WebUI the gift of capturing user speech, turning it into text, and then talking right back with Azure’s neural voices. Along the way you saw how easy it is to spin up a Speech resource in the Azure portal, wire up real-time transcription in the browser, and pipe responses through the TTS engine.
From here, it’s all about experimentation. Try swapping in different neural voices or dialing in new languages. Tweak how you start and stop listening, play with silence detection, or add custom pronunciation tweaks for those tricky product names. Before you know it, your interface will feel less like a web page and more like a conversation partner.
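For instance, custom pronunciations can be scripted with SSML. Here’s a rough sketch using the Speech SDK’s SSML input; the product name and its IPA spelling are purely hypothetical:

```python
import azure.cognitiveservices.speech as speechsdk

# Placeholders: substitute your own key and region
speech_config = speechsdk.SpeechConfig(
    subscription="<your-azure-speech-key>", region="eastus"
)
synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)

# SSML pins down tricky pronunciations; "Contoso" and its IPA string are
# illustrative stand-ins for your own product name
ssml = """
<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
  <voice name="en-US-JennyNeural">
    Welcome to <phoneme alphabet="ipa" ph="kɒnˈtoʊsoʊ">Contoso</phoneme>.
  </voice>
</speak>
"""
synthesizer.speak_ssml_async(ssml).get()
```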