Hi everyone! I want to be able to access, from the host, a folder inside the guest that corresponds to a cloud drive mounted inside the guest for security purposes. I have tried setting up a shared filesystem in virt-manager (KVM) with virtiofs (following this tutorial: https://absprog.com/post/qemu-kvm-shared-folder), but as soon as I mount the shared folder inside the guest so it becomes accessible on the host, the cloud drive gets unmounted. I guess a folder cannot have two mounts at the same time. Aliasing the folder using a bind mount and then sharing the aliased folder with the host doesn't work either: the aliased folder is simply empty on the host.

Does anyone have an idea how I might accomplish this? Is KVM the right choice, or would something like Docker or Podman be better suited for this job? Thank you.

Edit: To clarify: the cloud drive is mounted inside a virtual machine for security purposes, as the binary is proprietary and I do not want to mount it on the host (bwrap and the like introduce a whole lot of problems: the drive doesn't sync anymore and I have to re-login each time). I do not use the virtual machine per se; I just start it and leave it be.
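
In case it helps, this is roughly what the bind-mount attempt looked like; from what I've read, a plain --bind does not carry submounts, which may be why the aliased folder came up empty, so --rbind with shared propagation might behave differently. Paths here are placeholders, not my actual setup:

```shell
# Example paths: the cloud client's FUSE mount lives at
# /home/user/CloudDrive; /srv/export is the directory shared via virtiofs.

# A plain bind mount does not include submounts below the source,
# so the exported directory can end up empty. --rbind recurses
# into submounts:
sudo mount --rbind /home/user/CloudDrive /srv/export

# Propagate mounts that appear later (e.g. when the cloud client
# remounts after a restart):
sudo mount --make-rshared /srv/export
```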

  • eldavi@lemmy.ml
    2 days ago

    What do you mean by intermediary? Do you mean syncing the files with the VM and then sharing the synced copy with the host? That wouldn't work, since my drive is smaller than the cloud drive and I need all the files on-demand.

    that’s one way. do you need them all at the same time? are they mostly the same size and type?

    • GathererStuff@lemmy.mlOP
      2 days ago

      do you need them all at the same time?

      I need to access all files conveniently and transparently depending on what I need at work in that particular moment.

      are they mostly the same size and type?

      Hard no.

      • eldavi@lemmy.ml
        2 days ago

        sshfs might work if your fuse drive is mounted with options that will let it be shared and you have sudo access to enable sshfs. also ssh access is a requirement.

        how is it mounted now? it should also be in that same mount printout, usually at the end of the line inside parentheses.
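
        a rough sketch of what i mean, with placeholder hostname and paths:

        ```shell
        # On the host: mount the guest's cloud-drive folder over SSH.
        # 'user@guest' and both paths are placeholders.
        sshfs user@guest:/home/user/CloudDrive /mnt/cloud \
            -o reconnect,ServerAliveInterval=15

        # Note: for a FUSE mount inside the guest to be readable by the
        # SSH session's user, it usually needs the allow_other option,
        # which in turn needs user_allow_other uncommented in
        # /etc/fuse.conf on the guest.
        ```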

          • eldavi@lemmy.ml
            18 hours ago

            user_id=0,group_id=0

            do you have sudo access and are there any rules in /etc/sudo* that match your username or any of your groups? which distribution?
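
            e.g. something along these lines (exact rule syntax varies by distro):

            ```shell
            # List the sudo rules that apply to the current user:
            sudo -l

            # Or inspect the configuration directly (needs root):
            sudo grep -r "$USER" /etc/sudoers /etc/sudoers.d/
            ```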

            • GathererStuff@lemmy.mlOP
              18 hours ago

              Since originally writing the post I have switched to a rootless podman container. Running it how I did before (inside a VM) would simply yield user_id=1000,group_id=1000 I think.
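
              For reference, the rootless setup is roughly this (image name, volume, and paths are placeholders):

              ```shell
              # Rootless Podman container that can create FUSE mounts inside.
              # /dev/fuse must be passed through; SYS_ADMIN is granted only
              # within the container's user namespace, not on the host.
              podman run --rm -it \
                  --device /dev/fuse \
                  --cap-add SYS_ADMIN \
                  -v clouddrive-config:/home/user/.config \
                  clouddrive-image:latest
              ```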

              • eldavi@lemmy.ml
                18 hours ago

                that implies that you’re not using the binary anymore since you’re in a container; is it using an overlay fs?

                • GathererStuff@lemmy.mlOP
                  18 hours ago

                  I am using the binary. Just running it inside a container instead of a VM.

                  overlay fs?

                  Yes.

                  • eldavi@lemmy.ml
                    17 hours ago

                    so the drive isn’t mounted when the container starts, but you run the binary after it has started and then the drive gets mounted?