I’m not a developer. I don’t live in the terminal. And until recently, I didn’t even know what a container was. But somehow, I found myself using Podman’s AI Lab—and actually enjoying it.
This article is a beginner’s guide to Podman AI Lab, written from a non-developer’s perspective. It walks through setting up Podman Desktop, installing the AI Lab extension, and launching a simple AI project (a RAG chatbot), and it highlights how easy it is to run AI models locally with privacy, efficiency, and no cloud costs. Along the way, it emphasizes Podman’s open source, secure, and user-friendly design, which makes it a great fit for anyone new to containers or AI.
Podman overview
If this is your first time hearing about Podman, don’t worry. Podman is a container management tool, much like Docker, but with a few important differences that set it apart. Simply put, containers are portable units that package an application along with everything it needs to run, so it behaves the same way no matter which environment it runs in. Podman helps you create and manage these containers, and thanks to its built-in security advantages, it has been gaining in popularity.
Now that we understand what Podman is, you might be wondering: what is Podman AI Lab? Podman AI Lab is an extension you can install after setting up Podman Desktop. The extension is open source and designed to make working with large language models (LLMs) in a local environment easier. This approach is highly beneficial because your data never leaves your system, ensuring greater privacy and security.
It also offers faster response times thanks to low latency and high availability, and it eliminates recurring usage fees, making costs more predictable and easier to control. One of the easiest ways to get started with Podman and Podman AI Lab is to install Podman Desktop on your computer. Podman Desktop offers a user-friendly graphical user interface (GUI) that makes it easy to manage containers and access a range of other features.
Set up Podman Desktop and Podman AI Lab
My first interaction with Podman Desktop started with a simple installation process. It’s completely free, and you can download it from the Podman Desktop website in any browser. Once the application is downloaded to your computer, you're just a few clicks away from getting it up and running. Figure 1 shows the first screen you see when starting the download process.

After launching the app, you’ll be prompted to install the latest version of Podman Desktop. From there, it’s just the usual setup steps: agreeing to the terms and entering your system password for permissions.
Next, if you're not using a Linux operating system, you'll need to create a Podman machine: a lightweight Linux virtual environment that lets you run containers on your system. Creating one involves giving the machine a name and choosing how much CPU, disk space, and memory you'd like it to use. And just like that, you're ready to go.
After setting up Podman Desktop on your computer, installing Podman AI Lab is quick and easy. Simply navigate to the Extensions tab from the main dashboard and click Install next to Podman AI Lab.
A new tab will appear in the dashboard after installation, giving you direct access to Podman AI Lab and its features, as shown in Figure 2.

Discover Podman AI Lab
This is where the real excitement began for me. You're greeted by the AI Lab dashboard, with side tabs that help you navigate the extension. The AI Lab itself is a treasure trove of features, so let’s break down its core components:
- Recipes catalog: This is a great starting point for exploring ready-made AI use cases like chatbots, code generation, and text summarization. Each recipe includes clear explanations and sample apps that can run with different LLMs, making it easy to experiment and find the best fit. Examples range from chatbot assistants and AI agents to audio-to-text conversion and object detection. Each recipe also includes source code that developers can use as a template to learn how to structure and build their own containerized applications.
- Built-in AI models: The platform features a curated collection of open-source AI models and LLMs that you can easily download and use to power apps, services, and experiments—no deep technical skills required. How cool is that?
- Model serving: Once you’ve downloaded a model, you can fire up an inference server for it, enabling you to test the model instantly in a built-in playground or connect it to external apps using a standard chat API that makes integration seamless (see the example after this list).
- Playgrounds: I personally think this is the coolest part. These built-in environments allow you to test models locally with an easy-to-use prompt interface, making it simple to explore their capabilities and find the right fit. Each playground also includes a chat client for direct interaction.
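To give a feel for what connecting an external app looks like, here is a minimal sketch in Python. It is not taken from the Podman AI Lab docs; it assumes you have already started an inference server from the models page, that it listens on localhost port 46000 (a placeholder, yours will differ and AI Lab shows it on the service's detail page), and that the endpoint follows the widely used OpenAI-compatible chat completions format, which is how these local servers are typically exposed.

```python
import requests

# Assumptions: an inference server started from AI Lab, listening on
# localhost:46000 (replace with the port AI Lab assigned to yours), and
# exposing an OpenAI-compatible /v1/chat/completions endpoint.
BASE_URL = "http://localhost:46000/v1"

response = requests.post(
    f"{BASE_URL}/chat/completions",
    json={
        # A single model is loaded locally, so many local servers ignore
        # this name; it is included for compatibility with the API shape.
        "model": "local-model",
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "In one sentence, what is a container?"},
        ],
        "temperature": 0.2,
    },
    timeout=120,
)
response.raise_for_status()

# The reply follows the familiar chat completions shape.
print(response.json()["choices"][0]["message"]["content"])
```

Because the interface mirrors the one used by hosted chat services, any client or library that already speaks that format can usually be pointed at the local server just by changing the base URL.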
A glimpse into the possibilities
While exploring Podman AI Lab and discovering its range of features, I was inspired to try something new. From its diverse recipe catalog, I chose to experiment with a chatbot application. It was simple to put what I learned into practice, such as the retrieval-augmented generation (RAG) chatbot. This chatbot leverages RAG to enhance the language model’s responses by supplying it with relevant information from external sources, making the answers more accurate and context-aware.
Here is what I did to spin up this RAG chatbot in Podman AI Lab:
- Navigate to the recipe catalog and click the install icon located at the top right corner of the RAG chatbot card (Figure 3).

Figure 3: Recipe catalog (RAG chatbot).

- Once the installation is complete, click More details on the same card. This takes you to a page with everything you need to know about the RAG chatbot, including full instructions.
- Next, click the Start button at the top right corner of the page (Figure 4), and you’ll be well on your way to launching your first chatbot with Podman AI Lab.

Figure 4: Main dashboard for the RAG chatbot application.

- After clicking Start, you’ll be prompted to select a model for your RAG chatbot. For this example, I chose ibm-granite/granite-3.3-8B-instruct-GGUF. Then, simply click the Start RAG Chatbot Recipe button and AI Lab kicks off the container build process behind the scenes (Figure 5).

Figure 5: The model used for the RAG chatbot.

- As shown in Figure 6, AI Lab takes care of all the setup for you, and the RAG chatbot is now up and running.

Figure 6: AI Lab starting the chatbot.

- Next, click Open Details, then go to the Actions tab of your model. Click the share icon, select a port to run it on, and your RAG chatbot will be ready to use (Figure 7).

Figure 7: Deploying the model to a local port.
Just like that, with only a few clicks, your RAG chatbot is up and running right in your local browser (Figure 8). From here, feel free to chat with it, fine-tune it, or optimize it to your liking using Podman Desktop and Podman AI Lab.

When I upload a PDF file to enhance the chatbot's results and ask questions, the RAG pipeline retrieves the relevant text from the document and feeds it to the LLM, allowing it to produce an accurate response based on the context provided. Figure 9 shows an example of uploading a PDF document to the chatbot.

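To make that retrieve-then-generate loop concrete, here is a rough conceptual sketch in Python. This is not the recipe's actual source code: the real application chunks the uploaded PDF and indexes it with an embedding model in a vector database, while this dependency-free sketch uses a simple word-overlap score as a stand-in for embedding search and reuses the hypothetical local chat endpoint from the earlier example.

```python
import requests

# Hypothetical local chat endpoint from the earlier example; adjust the
# port to whatever AI Lab assigned to your inference server.
CHAT_URL = "http://localhost:46000/v1/chat/completions"

# Stand-in "document store": in the real recipe these chunks come from the
# uploaded PDF and are indexed with embeddings in a vector database.
chunks = [
    "Podman is a daemonless, rootless container engine.",
    "Podman AI Lab lets you run open source LLMs locally.",
    "RAG supplies the model with retrieved context before it answers.",
]

def retrieve(question: str, k: int = 2) -> list[str]:
    """Naive retrieval: rank chunks by word overlap with the question."""
    q_words = set(question.lower().split())
    ranked = sorted(
        chunks,
        key=lambda c: len(q_words & set(c.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def answer(question: str) -> str:
    """Feed the retrieved context to the LLM and return its reply."""
    context = "\n".join(retrieve(question))
    resp = requests.post(
        CHAT_URL,
        json={
            "model": "local-model",
            "messages": [
                {"role": "system", "content": f"Answer using only this context:\n{context}"},
                {"role": "user", "content": question},
            ],
        },
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

print(answer("What does Podman AI Lab do?"))
```

The point of the sketch is the shape of the flow: retrieve the most relevant passages first, then hand them to the model as context so the answer is grounded in your document rather than in the model's general training data.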
Why choose Podman?
Podman AI Lab has a lot to offer. I could go on and on about the curated catalog of AI models and example recipes, or the seamless way you can interact with models through chat-like playgrounds. But what makes Podman AI Lab truly stand out in its field?
Here’s my take on it:
- Run AI locally, keep your data private: Podman AI Lab runs entirely on your machine, so your data never leaves your system. There’s no need to rely on external servers or cloud platforms. Your privacy stays intact.
- More seamless container integration: Thanks to its Podman foundation, AI Lab uses lightweight, security-focused containers that run rootless, without requiring admin access. That means less hassle and more reliable performance.
- Lightweight and efficient: Without a heavy background daemon, Podman AI Lab’s architecture uses fewer system resources than many cloud-based or heavyweight solutions, making it ideal for running on personal laptops or modest servers.
- Easier to use: As mentioned, Podman is daemonless and rootless, which saves resources and enhances security. Furthermore, Podman Desktop is completely open source, so users can file issues and pull requests to improve it.
- Open source with enterprise backing: Podman is open source, and Red Hat contributes to it, offering both the strength of community-driven development and the reliability of enterprise-level support.
Begin today, build the future
To begin your journey with Podman, start by downloading Podman Desktop to your computer. Once it's installed, you can add the AI Lab extension directly from the Extensions tab, as detailed previously. From there, start exploring Podman by containerizing your own applications or experimenting in the playground environment to get hands-on experience and deepen your understanding.
At Red Hat, we believe the future belongs to you—the developers, the builders, the innovators. That’s why we’re committed to open source, making powerful tools accessible to everyone. We believe developers have the ability to shape what comes next. With open source at the core, you're empowered to build the next generation of AI-enabled applications that are flexible, transparent, and built for the future.