AI meets containers: My first step into Podman AI Lab

July 14, 2025
Saroj Paudel
Related topics: Artificial intelligence, Containers, Developer Tools, Linux, Kubernetes, Open source, Security
Related products: Podman Desktop, Red Hat AI

    I’m not a developer. I don’t live in the terminal. And until recently, I didn’t even know what a container was. But somehow, I found myself using Podman’s AI Lab—and actually enjoying it.

    This article is a beginner’s guide to Podman AI Lab, written from a non-developer’s perspective. It walks through setting up Podman Desktop, installing the AI Lab extension, and launching a simple AI project (a RAG chatbot), highlighting how easy it is to run AI models locally with privacy, efficiency, and no cloud costs. It emphasizes Podman’s open source, secure, and user-friendly design—perfect for those new to containers or AI.

    Podman overview

    If this is your first time hearing about Podman, don't worry. Podman is a container management tool, much like Docker, but with a few important differences that set it apart. Containers are portable units that package an application along with everything it needs to run, so the application behaves the same no matter which environment it runs in. Podman helps you create and manage these containers, and thanks to its built-in security advantages, it has been steadily gaining popularity.
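
    If you are curious what that looks like in practice, here is a minimal sketch (my own addition, not from the article) that asks Podman to run a short-lived container from Python. It assumes Podman is already installed and on your PATH, and it uses Red Hat's Universal Base Image purely as an example; any container image would do.

        import subprocess

        # Run a throwaway container: pull the image if it isn't cached, print a
        # message from inside it, and remove the container afterwards (--rm).
        result = subprocess.run(
            ["podman", "run", "--rm",
             "registry.access.redhat.com/ubi9/ubi",
             "echo", "Hello from inside a container"],
            capture_output=True,
            text=True,
        )
        print(result.stdout)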

    Now that we understand what Podman is, you might be wondering: what is Podman AI Lab? Podman AI Lab is an open source extension for Podman Desktop that you can install once Podman Desktop is set up, and it is designed to make working with large language models (LLMs) in a local environment straightforward. Running locally is highly beneficial because your data never leaves your system, ensuring greater privacy and security.

    It also offers faster response times, since nothing depends on a network round trip or a remote service, and it eliminates recurring usage fees, allowing for more predictable cost management. One of the easiest ways to get started with Podman and Podman AI Lab is by installing Podman Desktop on your computer. Podman Desktop offers a user-friendly graphical user interface (GUI) that makes it easy to manage containers and access a range of other features.

    Set up Podman Desktop and Podman AI Lab 

    My first interaction with Podman Desktop started with a simple installation process. It's completely free, and you can download it from the Podman Desktop website in the browser of your choice. Once you download the application to your computer, you're just a few clicks away from getting it up and running. Figure 1 shows the first screen you see when initiating the download process.

    Webpage that allows you to download Podman Desktop.
    Figure 1: Download Podman Desktop.

    After launching the app, you'll be prompted to install the latest version of Podman. From there, it's just the usual setup steps: agreeing to the terms and entering your system password for permissions.

    Next, if you're not using a Linux operating system, you'll need to create a Podman machine: a lightweight Linux virtual environment that enables you to run containers on your system. You give your machine a name and select how much CPU, disk space, and memory you'd like it to use. And just like that, you're ready to go.
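
    Podman Desktop walks you through all of this in the GUI, but for the curious, here is a rough sketch of the equivalent Podman CLI calls driven from Python. The machine name and resource sizes are made-up examples, and the flags assume a reasonably recent Podman release.

        import subprocess

        # Create a Podman machine (a lightweight Linux VM) with example resources:
        # 2 CPUs, 4096 MB of memory, and a 50 GB disk. "my-machine" is a placeholder.
        subprocess.run(
            ["podman", "machine", "init",
             "--cpus", "2", "--memory", "4096", "--disk-size", "50",
             "my-machine"],
            check=True,
        )

        # Start the machine so containers can run on it.
        subprocess.run(["podman", "machine", "start", "my-machine"], check=True)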

    After setting up Podman Desktop on your computer, installing Podman AI Lab is quick and easy. Simply navigate to the Extensions tab from the main dashboard and click Install next to Podman AI Lab. 

    A new tab will appear in the dashboard after installation, giving you direct access to Podman AI Lab and its features, as shown in Figure 2.

    The page in Podman Desktop where you can install the Podman AI Lab extension.
    Figure 2: Install Podman AI Lab.

    Discover Podman AI Lab 

    This is where the real excitement began for me. Here, you're introduced to the AI Lab dashboard, with side tabs that help you navigate the extension. AI Lab itself is a treasure trove of features, so let's break down its core components:

    • Recipes catalog: This is a great starting point for exploring ready-made AI use cases like chatbots, code generation, and text summarization. Each recipe includes clear explanations and sample apps that can run with different LLMs, making it easy to experiment and find the best fit. There are examples ranging from chatbot assistants and AI agents, to audio-to-text conversion, to object detection. It also includes a source code component that developers can use as a template to learn how to structure and build their own containerized applications.
    • Built-in AI models: The platform features a curated collection of open-source AI models and LLMs that you can easily download and use to power apps, services, and experiments—no deep technical skills required. How cool is that?
    • Model serving: Once you’ve downloaded a model, you can fire up an inference server for it, enabling you to test the model instantly in a built-in playground or connect it to external apps using a standard chat API that makes integration seamless (see the short sketch after this list).
    • Playgrounds: I personally think this is the coolest part. These built-in environments allow you to test models locally with an easy-to-use prompt interface, making it simple to explore their capabilities and find the right fit. Each playground also includes a chat client for direct interaction.
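
    To make the model serving point concrete, here is a minimal sketch of calling a locally served model from Python. It assumes the inference server exposes an OpenAI-compatible chat completions endpoint on your machine; the port (8000 here) and the model name are placeholders, so adjust both to whatever your model service in AI Lab reports.

        import json
        import urllib.request

        # Placeholder address; replace 8000 with the port of your inference server.
        URL = "http://localhost:8000/v1/chat/completions"

        payload = {
            "model": "local-model",  # placeholder; many local servers ignore this
            "messages": [
                {"role": "user", "content": "In one sentence, what is a container?"}
            ],
        }

        request = urllib.request.Request(
            URL,
            data=json.dumps(payload).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )

        # Send the prompt and print the generated reply (OpenAI-style response shape).
        with urllib.request.urlopen(request) as response:
            reply = json.loads(response.read())
        print(reply["choices"][0]["message"]["content"])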

    A glimpse into the possibilities

    While exploring Podman AI Lab and discovering its range of features, I was inspired to try something new. From its diverse recipe catalog, I chose to experiment with a chatbot application: the retrieval-augmented generation (RAG) chatbot. It was a simple way to put what I had learned into practice. This chatbot leverages RAG to enhance the language model's responses by supplying it with relevant information from external sources, making the answers more accurate and context-aware.
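
    Before walking through the recipe, here is a deliberately simplified sketch of the idea behind RAG, just to make the flow concrete. It is not the recipe's actual code: real implementations use embeddings and a vector store, while this toy version scores document chunks by word overlap and pastes the best match into the prompt sent to the LLM.

        # Toy retrieval-augmented generation: retrieve the most relevant chunk,
        # then augment the prompt with it before asking the model.
        documents = [
            "Podman is a daemonless, rootless container engine.",
            "Podman AI Lab lets you run large language models locally.",
            "RAG supplies a model with relevant external context at query time.",
        ]

        def retrieve(question: str, docs: list[str]) -> str:
            """Return the chunk sharing the most words with the question."""
            q_words = set(question.lower().split())
            return max(docs, key=lambda d: len(q_words & set(d.lower().split())))

        def build_prompt(question: str, docs: list[str]) -> str:
            context = retrieve(question, docs)
            return f"Use this context to answer.\nContext: {context}\nQuestion: {question}"

        # The resulting prompt would then be sent to the locally served LLM.
        print(build_prompt("What does Podman AI Lab do?", documents))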

    Here is what I did to spin up this RAG chatbot in Podman AI Lab:    

    1. Navigate to the recipe catalog and click the install icon located at the top right corner of the RAG chatbot card (Figure 3). 

      A section of the Recipe Catalog in Podman Desktop AI Lab, highlighting three available applications: Summarizer, Code Generator, and RAG Chatbot, each ready for installation.
      Figure 3: Recipe Catalog (RAG Chatbot).
    2. Once the installation is complete, click More details on the same card. This will take you to a page that provides you with everything you need to know about the RAG chatbot, including full instructions.
    3. Next, click the Start button at the top right corner of the page (Figure 4), and you’ll be well on your way to launching your first chatbot with Podman AI Lab. 

      The main interface of the RAG (Retrieval-Augmented Generation) Chat Application, which appears after installation and launching the app.
      Figure 4: Main dashboard for RAG Chatbot application.
    4. After clicking Start, you’ll be prompted to select a model for your RAG chatbot. For this example, I chose “ibm-granite/granite-3.3-8B-instruct-GGUF.” Then, simply click the Start RAG Chatbot Recipe button and the AI Lab kicks off the container build process behind the scenes (Figure 5).

      The model selection screen for the RAG Chatbot example, where I chose the new "ibm-granite/granite-3.3-8B-instruct-GGUF" model.
      Figure 5: The model used for the RAG Chatbot.
    5. As shown in Figure 6, AI Lab takes care of all the setup for you, and the RAG chatbot is now up and running.

      A checklist of the tasks that AI Lab handles on its own, such as pulling the RAG Chatbot recipe, checking the repository, and starting the inference server, among others.
      Figure 6: Process of AI Lab starting the chatbot.
    6. Next, click Open Details, then go to the Actions tab of your model. Click the share icon, select a port to run it from, and your RAG chatbot will be ready to use (Figure 7). A quick way to confirm that the port is live follows these steps.

      The currently running models in Podman Desktop. The highlighted Share button allows you to deploy the selected application to your local host.
      Figure 7: Deploying the model to a local port.
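
    As mentioned in step 6, here is a tiny check (my own addition, not part of the recipe) that something is actually listening on the port you selected before you open the browser. The port number is a placeholder; use whichever port you chose when sharing the application.

        import socket

        PORT = 8501  # placeholder; replace with the port you selected

        # Try to open a TCP connection to the chatbot's port on this machine.
        with socket.socket() as s:
            s.settimeout(2)
            reachable = s.connect_ex(("localhost", PORT)) == 0

        print("Chatbot port is open" if reachable else "Nothing is listening yet")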

    Just like that, with only a few clicks, your RAG chatbot is up and running right in your local browser (Figure 8). From here, feel free to chat with it, fine-tune it, or optimize it to your liking using Podman Desktop and Podman AI Lab.

    The main interface of the RAG Chatbot application launched in the local host browser.
    Figure 8: Main dashboard of the RAG Chatbot application.

    When I upload a PDF file to enhance the chatbot's results and then ask questions, the RAG pipeline retrieves the relevant text from the document and feeds it to the LLM, which can then output an accurate response based on the context provided. Figure 9 shows an example of me uploading a PDF document to the RAG chatbot.

    Uploading a PDF to the RAG Chatbot, enabling it to retrieve information from the document and generate more accurate, context-aware responses. I then ask the chatbot questions based on the content of the uploaded PDF.
    Figure 9: Feeding the RAG information.

    Why choose Podman?

    Podman AI Lab has a lot to offer. I could go on and on about the curated catalog of AI models and example recipes, or the seamless way you can interact with models through chat-like playgrounds. But what truly makes Podman AI Lab stand out in its field?

    Here’s my take on it:

    • Run AI locally, keep your data private: Podman AI Lab runs entirely on your machine, so your data never leaves your system. There’s no need to rely on external servers or cloud platforms. Your privacy stays intact.
    • More seamless container integration: Thanks to its Podman foundation, AI Lab uses lightweight, security-focused containers without requiring admin access; in other words, it runs rootless. The result is less hassle and more reliable performance. (A quick way to check the rootless claim yourself appears after this list.)
    • Lightweight and efficient: Without a heavy background daemon, Podman AI Lab’s architecture uses fewer system resources than many cloud-based or heavyweight solutions, making it ideal for running on personal laptops or modest servers.
    • Easier to use: As mentioned, Podman is daemonless and rootless, which saves resources and enhances security. Furthermore, Podman Desktop is completely open source, so users can open issues and pull requests to improve it.
    • Open source with enterprise backing: Podman is open source, and Red Hat contributes to it, offering both the strength of community-driven development and the reliability of enterprise-level support.
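
    As a quick sanity check on the rootless claim, you can ask Podman itself whether it is running without root privileges. This sketch simply parses the JSON output of podman info; it assumes Podman is installed, and the exact field layout can vary between Podman versions.

        import json
        import subprocess

        # Ask Podman for its configuration as JSON.
        info = json.loads(
            subprocess.run(
                ["podman", "info", "--format", "json"],
                capture_output=True, text=True, check=True,
            ).stdout
        )

        # On current releases, the flag lives under host -> security -> rootless.
        print("Running rootless:", info.get("host", {}).get("security", {}).get("rootless"))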

    Begin today, build the future

    To begin your journey with Podman, start by downloading Podman Desktop onto your computer. Once it's installed, you can add the AI Lab extension directly from the Extensions tab, as described earlier. From there, you can start exploring Podman right away, whether by containerizing your own applications or by experimenting in the playground environment to get hands-on experience and deepen your understanding.

    At Red Hat, we believe the future belongs to you—the developers, the builders, the innovators. That’s why we’re committed to open source, making powerful tools accessible to everyone. We believe developers have the ability to shape what comes next. With open source at the core, you're empowered to build the next generation of AI-enabled applications that are flexible, transparent, and built for the future.
