
Volumes using image driver mounted by a systemd unit (quadlet) break other containers using the same volume on exit #27331

Open
Labels: kind/bug
@TobinHall

Description

Issue Description

When a systemd unit such as a quadlet mounts a volume that uses the image driver, other containers started afterwards with the same volume break when the first container is stopped.

The MountCount in podman volume inspect stays above 0, but the volume is not accessible in the second container.

If the first container is restarted, neither container has access to the volume.

I have been able to reproduce this in a Fedora Workstation live environment after upgrading Podman.
I used systemd-run to reproduce it; I believe this is equivalent to a quadlet.
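
For reference, a quadlet setup along the following lines should be roughly equivalent to the systemd-run command in the reproduction steps; the unit file names and the fully qualified image reference are illustrative, not taken from an actual setup:

# ~/.config/containers/systemd/tiv.volume (illustrative)
[Volume]
VolumeName=tiv
Driver=image
Image=docker.io/library/alpine:latest

# ~/.config/containers/systemd/sleeping-owner.container (illustrative)
[Container]
ContainerName=sleeping-owner
Image=docker.io/library/alpine:latest
Network=none
Volume=tiv.volume:/tiv
Exec=sleep 10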

Steps to reproduce the issue

podman pull alpine
podman volume create --ignore --driver=image --opt image=alpine tiv # test image volume
systemd-run --user --unit podman-image-volume-test podman --log-level=debug run --network=none --rm --name sleeping-owner --volume=tiv:/tiv alpine sleep 10 # first container, started via a transient systemd unit
sleep 1 # wait for the systemd unit to mount the volume
podman --log-level=debug run --rm -it --init --name watching-non-owner --network=none --volume=tiv:/tiv alpine sh -c "while ls /tiv; do sleep 1; done" # second container, lists the volume contents every second
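
To observe the breakage from the host, the volume can be inspected while the watcher is running and again after the sleeping-owner unit exits; a minimal sketch, assuming the MountCount and Mountpoint fields of podman volume inspect:

# Check the volume's mount count and mount point from the host (run in a separate shell).
podman volume inspect tiv --format '{{.MountCount}} {{.Mountpoint}}'
# After sleeping-owner exits, MountCount stays above 0 but the mount point is no longer usable:
ls "$(podman volume inspect tiv --format '{{.Mountpoint}}')" # expected to fail once the mount breaks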

Describe the results you received

This is the (non-podman) output of the 'watching-non-owner' container. The files are available until the original container stops; after that, ls fails with a "Socket not connected" error:

sleeping-owner.log
watching-non-owner.log

bin dev etc home lib media mnt opt proc root run sbin srv sys tmp usr var
bin dev etc home lib media mnt opt proc root run sbin srv sys tmp usr var
bin dev etc home lib media mnt opt proc root run sbin srv sys tmp usr var
bin dev etc home lib media mnt opt proc root run sbin srv sys tmp usr var
bin dev etc home lib media mnt opt proc root run sbin srv sys tmp usr var
bin dev etc home lib media mnt opt proc root run sbin srv sys tmp usr var
bin dev etc home lib media mnt opt proc root run sbin srv sys tmp usr var
bin dev etc home lib media mnt opt proc root run sbin srv sys tmp usr var
bin dev etc home lib media mnt opt proc root run sbin srv sys tmp usr var
bin dev etc home lib media mnt opt proc root run sbin srv sys tmp usr var
ls: can't open '/tiv': Socket not connected

Describe the results you expected

I expect the volume to remain mounted and available to other containers.

podman info output

host:
  arch: amd64
  buildahVersion: 1.41.5
  cgroupControllers:
  - cpu
  - io
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon-2.1.13-1.fc42.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.13, commit: '
  cpuUtilization:
    idlePercent: 78.72
    systemPercent: 15.36
    userPercent: 5.92
  cpus: 8
  databaseBackend: sqlite
  distribution:
    distribution: fedora
    variant: workstation
    version: "42"
  eventLogger: journald
  freeLocks: 2047
  hostname: localhost-live
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 524288
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 524288
      size: 65536
  kernel: 6.14.0-63.fc42.x86_64
  linkmode: dynamic
  logDriver: journald
  memFree: 120336384
  memTotal: 4091756544
  networkBackend: netavark
  networkBackendInfo:
    backend: netavark
    dns:
      package: aardvark-dns-1.14.0-1.fc42.x86_64
      path: /usr/libexec/podman/aardvark-dns
      version: aardvark-dns 1.14.0
    package: netavark-1.14.1-1.fc42.x86_64
    path: /usr/libexec/podman/netavark
    version: netavark 1.14.1
  ociRuntime:
    name: crun
    package: crun-1.20-2.fc42.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 1.20
      commit: 9c9a76ac11994701dd666c4f0b869ceffb599a66
      rundir: /run/user/1000/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +LIBKRUN +WASM:wasmedge +YAJL
  os: linux
  pasta:
    executable: /usr/bin/pasta
    package: passt-0^20250320.g32f6212-2.fc42.x86_64
    version: ""
  remoteSocket:
    exists: true
    path: /run/user/1000/podman/podman.sock
  rootlessNetworkCmd: pasta
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: false
  slirp4netns:
    executable: ""
    package: ""
    version: ""
  swapFree: 2406715392
  swapTotal: 4091539456
  uptime: 0h 5m 13.00s
  variant: ""
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  search:
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - docker.io
store:
  configFile: /home/liveuser/.config/containers/storage.conf
  containerStore:
    number: 0
    paused: 0
    running: 0
    stopped: 0
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /home/liveuser/.local/share/containers/storage
  graphRootAllocated: 818352128
  graphRootUsed: 299933696
  graphStatus:
    Backing Filesystem: overlayfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Supports shifting: "true"
    Supports volatile: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 1
  runRoot: /run/user/1000/containers
  transientStore: false
  volumePath: /home/liveuser/.local/share/containers/storage/volumes
version:
  APIVersion: 5.6.2
  BuildOrigin: Fedora Project
  Built: 1759190400
  BuiltTime: Tue Sep 30 00:00:00 2025
  GitCommit: 9dd5e1ed33830612bc200d7a13db00af6ab865a4
  GoVersion: go1.24.7
  Os: linux
  OsArch: linux/amd64
  Version: 5.6.2

Podman in a container

No

Privileged Or Rootless

None

Upstream Latest Release

Yes

Additional environment details

Fedora Workstation live environment in a VM.

Additional information

No response
