#lxc


I have finally caved in and dove down the rabbit hole of #Linux Containers (#LXC) on #Proxmox while exploring how to split a GPU across multiple servers and... I totally understand now why some people's Proxmox setups are made up exclusively of LXCs rather than VMs lol - it's just so pleasant to set up and use, and superficially at least, very efficient.

I now have a #Jellyfin and #ErsatzTV setup running on LXCs with working iGPU passthrough of my server's #AMD Ryzen 5600G APU. My #Intel #ArcA380 GPU has also arrived, but I'm prolly gonna hold off on adding that until I decide which node I should add it to and schedule the shutdown, etc. In the future, I might even consider exploring (re)building a #Kubernetes (#RKE2) cluster on LXC nodes instead of VMs - and whether that's viable or perhaps better.
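For reference, the passthrough itself boils down to a handful of lines in the container config on the host - a rough sketch only (VMID, device paths and device numbers here are examples, not necessarily what's in the wiki; adjust to your own host):

    # /etc/pve/lxc/<vmid>.conf - example entries for an iGPU
    # allow the DRM character devices (major 226) inside the container
    lxc.cgroup2.devices.allow: c 226:0 rwm
    lxc.cgroup2.devices.allow: c 226:128 rwm
    # bind-mount the card and render node into the container
    lxc.mount.entry: /dev/dri/card0 dev/dri/card0 none bind,optional,create=file
    lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file

On an unprivileged container, the service user inside (e.g. jellyfin) additionally needs to end up with access to the render node, which is where the uid/gid mapping comes in - more on that in the wiki.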

Anyway, I've updated my #Homelab Wiki with guides pertaining to LXCs, including creating one, passing a GPU through to multiple unprivileged LXCs, and adding an #SMB share for the entire cluster and mounting it, likewise, on unprivileged LXC containers.

🔗 https://github.com/irfanhakim-as/homelab-wiki/blob/master/topics/proxmox.md#linux-containers-lxc


Bruh, I might've wasted my time learning how to pass a GPU through to an #LXC container on #Proxmox (as well as mount an SMB/CIFS share) and writing up a guide (haven't been able to test yet, 'cept for the latter) - all with some seemingly magic #Linux fu involving user/group mappings and custom configs - if it turns out you could actually achieve the same result just as easily, graphically, using a standard wizard in PVE.

It's 4am, I'll prolly try to find time later during the day, or rather evening (open house to attend at noon), to try the wizard and 1) add a device passthrough on an LXC container for my #AMD iGPU (until my #Intel #ArcA380 GPU arrives) and see whether the root user + service user in the container can access it/use it for transcoding in #Jellyfin/#ErsatzTV, and 2) add an SMB/CIFS storage at the Proxmox Datacenter level, tho my #NAS is also just a Proxmox VM in the same cluster (not sure if this is a bad idea?), and see if I could mount that storage to the LXC container that way.
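If the wizard route works, my understanding (untested, so take it as an assumption) is that it ends up being roughly equivalent to this - device path, GID, storage name, server and share below are all placeholders:

    # 1) what I expect the LXC "Device Passthrough" wizard (PVE 8.2+) to write
    #    into the container config for an AMD iGPU render node
    dev0: /dev/dri/renderD128,gid=104
    # 2) adding the SMB/CIFS storage at Datacenter level from the CLI instead
    pvesm add cifs nas-media --server 192.168.1.50 --share media --username smbuser --password 'changeme'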

#Homelab folks who have done this, feel free to give some tips or wtv if you've done this before!

I'm writing a guide on passing a single GPU through to multiple #Proxmox #LXC containers, based on a few resources, including the amazing Jim's Garage video.

Does anyone know the answer to this question of mine, though: why might he have chosen to map a seemingly arbitrary GID of 107 in the LXC container to the Proxmox host's render group GID of 104 - instead of mapping 104 -> 104, as he did with the video group, where he mapped 44 -> 44 (which makes sense to me)?

I've watched his video seemingly a million times, and referred to his incredibly simplified guide on his GitHub that's mostly only meant for copy-pasting, and I still couldn't quite understand why - I'm not sure whether it really is arbitrary and 107 in the LXC container could be anything, including 104 if we wanted... or whether it (i.e. 107) should've been the LXC container's actual render group GID, in which case it should've also been 104 on his Debian LXC container, as it is on mine, rather than 107.
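For context, this is roughly the mapping from his guide that I'm asking about, plus the matching /etc/subgid entries on the host (numbers beyond 44/104/107 are my reading of his guide, so treat them as assumptions); the 107 -> 104 line is the part I don't get:

    # /etc/pve/lxc/<vmid>.conf (host side)
    lxc.idmap: u 0 100000 65536
    lxc.idmap: g 0 100000 44
    lxc.idmap: g 44 44 1          # container video GID 44 -> host video GID 44 (makes sense)
    lxc.idmap: g 45 100045 62
    lxc.idmap: g 107 104 1        # container GID 107 -> host render GID 104 (why 107 and not 104?)
    lxc.idmap: g 108 100108 65428

    # /etc/subgid on the host, so root may delegate those GIDs
    root:44:1
    root:104:1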

Anyway, super excited to test this out once my #Intel #ArcA380 arrives. I could probably already test it by passing through one of my nodes' Ryzen 5 5600G iGPU, but I worry I'd screw something up, seeing that it's the only graphics on board that node.

🔗 https://github.com/JamesTurland/JimsGarage/issues/141


I'm at the absolute end of my comprehension of WireGuard and WG-Easy. I, for the love of anything, CANNOT get my VPN to stay connected for more than 3 minutes. I have tried connecting via the direct public IP, via my domain with certs, PersistentKeepalive, changing/removing UFW/firewall rules, and hosting on bare metal, in an LXC, and in a VM. I am at a complete loss and simply do not understand this anymore. If anyone has any ideas, please send them my way. #proxmox #selfhosting #vpn #lxc #vm #wireguard
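For anyone willing to take a look: the client side is essentially a stock wg-quick config (keys, addresses and endpoint below are placeholders, not my actual values), with keepalive already on:

    [Interface]
    PrivateKey = <client-private-key>
    Address = 10.8.0.2/24

    [Peer]
    PublicKey = <server-public-key>
    Endpoint = vpn.example.com:51820
    AllowedIPs = 0.0.0.0/0
    # supposed to keep the NAT mapping alive; hasn't helped in my case
    PersistentKeepalive = 25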

Hey networking/LXC specialists.

I have NextCloudPi running as an LXC container.

To access it, I set up routing on my Mikrotik router (screenshot).

The problem is that accessing NCP this way is very slow; I need to wait 5-10 seconds for a page to load.

I have Tailscale installed in the container, and accessing NCP using the Tailscale host name is nearly instantaneous.

Testing Open WebUI with Gemma 3 on my Proxmox mini PC in an LXC. My hardware is limited (12th Gen Intel Core i5-12450H), so I'm only using the 1B (28 tokens/s) and 4B (11 tokens/s) versions for now.

Image description is functioning, but it is slow; it takes 30 seconds to generate this text with the 4B version and 16 GB allocated to the LXC.
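For reference, assuming the usual Ollama backend behind Open WebUI (an assumption on my part; adjust tags to whatever backend you run), the models can be pulled and benchmarked from the CLI roughly like this:

    # pull the two Gemma 3 sizes (tags as published in the Ollama library)
    ollama pull gemma3:1b
    ollama pull gemma3:4b
    # --verbose prints the eval rate (tokens/s) after each response
    ollama run gemma3:4b --verbose "Describe this setup in one sentence."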

Next step: trying this on my Mac M1.

#selfhosting

Maybe of limited interest -
A ZFS ZVol is presented as a block device.
On ZFS storage, #Incus and #Lxc use ZVols for VM storage.
When creating a ZVol for a VM, Incus/Lxc will typically use udev to determine the ZVol ID.
Alpine Linux does not include udev by default, which will cause Incus/Lxc VM creation to fail, albeit with different error messages.
The solution is simple: when using ZFS on Alpine, make sure to "apk add zfs-udev".
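In practice, on the Alpine host that means something like the following (pool and image names here are just examples):

    # on the Alpine host running Incus
    apk add zfs zfs-udev
    # VM creation on a ZFS-backed pool should now succeed
    incus launch images:debian/12 test-vm --vm --storage default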

Fed up with ads on the internet? Then you should take a closer look at #pihole. To go with the release of version 6, there's now a new blog post: what to watch out for when updating, installation and configuration for newcomers, and plenty of tips & tricks for Pi-hole: decatec.de/home-server/schluss
#AdBlocker #HomeServer #Ubuntu #Proxmox #LXC
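For the impatient: the installation itself (run inside the Ubuntu server or LXC guest, not on the Proxmox host) is still the official one-liner; the v6 caveats and configuration details are in the blog post:

    curl -sSL https://install.pi-hole.net | bash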


So I have this idea to move at least some of my self-hosted stuff from Docker to LXC.

Correct me if I'm wrong, dear Fedisians, but I feel that LXC is better than Docker for services that are long-lasting and keep internal state, like Jellyfin, Forgejo or Immich? Docker containers are ephemeral by nature, whereas LXC containers are, from what I understand, somewhere between Docker containers and VMs.
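A rough sketch of the difference I mean, using Incus-style commands (container and volume names are made up):

    # Docker: the container itself is disposable; state only survives in the volume
    docker run -d --name jellyfin -v jellyfin-config:/config jellyfin/jellyfin

    # LXC (here via Incus): the container is a long-lived little system with a
    # persistent rootfs that you update and manage like a slim VM
    incus launch images:debian/12 jellyfin
    incus exec jellyfin -- apt update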