norden.social is one of the many independent Mastodon servers you can use to participate in the fediverse.
Moin! This is the Mastodon instance for northern folk, chatterers, and everything in between. Follow the lighthouse.

#arca380

Mika:

I've finally caved in and dived down the rabbit hole of #Linux Containers (#LXC) on #Proxmox while exploring how to split a GPU across multiple servers, and... I now totally understand the people whose Proxmox setups are made up exclusively of LXCs rather than VMs lol - they're just so pleasant to set up and use and, superficially at least, very efficient.

I now have #Jellyfin and #ErsatzTV running on LXCs with working iGPU passthrough of my server's #AMD Ryzen 5600G APU. My #Intel #ArcA380 GPU has also arrived, but I'm prolly gonna hold off on adding it until I've decided which node to put it in and scheduled the shutdown, etc. In the future I might even explore (re)building a #Kubernetes (#RKE2) cluster on LXC nodes instead of VMs - and whether that's viable or perhaps even better.

Anyway, I've updated my #Homelab wiki with guides pertaining to LXCs, including creating one, passing a GPU through to multiple unprivileged LXCs, and adding an #SMB share for the entire cluster and mounting it, likewise, on unprivileged LXC containers.

🔗 https://github.com/irfanhakim-as/homelab-wiki/blob/master/topics/proxmox.md#linux-containers-lxc
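For anyone following along, a minimal sketch of creating an unprivileged LXC container from the Proxmox shell - the container ID, hostname, storage names and template file below are hypothetical placeholders, not values taken from the wiki linked above:

```
# Create an unprivileged container (ID 101, Debian 12 template and
# local-lvm storage are placeholders for illustration only).
pct create 101 local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst \
  --hostname jellyfin \
  --unprivileged 1 \
  --cores 4 --memory 4096 \
  --rootfs local-lvm:16 \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp

# Start it and open a shell inside to install Jellyfin/ErsatzTV.
pct start 101
pct enter 101
```

The `--unprivileged 1` flag is what makes the UID/GID mapping discussed in the later posts necessary in the first place: IDs inside the container are shifted into a high host range (100000+ by default).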
Mika:

Bruh, I might've wasted my time learning how to pass a GPU through to an #LXC container on #Proxmox (as well as mount an SMB/CIFS share) and writing up a guide (which I haven't been able to test yet, except for the latter) - all through some seemingly *magic* #Linux *fu* with user/group mappings and custom configs - if it turns out you can achieve the same result just as easily, graphically, with a standard wizard in PVE.

It's 4am. I'll prolly find time later in the day, or rather the evening (open house to attend at noon), to try the wizard and 1) add a device passthrough on an LXC container for my #AMD iGPU (until my #Intel #ArcA380 GPU arrives) and see whether the root user and the service user in the container can access it and use it for transcoding in #Jellyfin/#ErsatzTV, and 2) add an SMB/CIFS storage on the Proxmox Datacenter - though my #NAS is also just a Proxmox VM in the same cluster (not sure if that's a bad idea?) - and see whether I can mount that storage into the LXC container that way.

#Homelab folks, if you've done this before, feel free to share some tips!
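For reference, the CLI equivalents of the two wizard steps described above might look roughly like this on PVE 8.x - the container ID, storage name, server address and username are made-up placeholders:

```
# 1) Device passthrough to an existing container (the GUI's
#    Resources -> Add -> Device Passthrough writes a devN entry like this).
#    gid=104 assumes the host's render group has GID 104.
pct set 101 --dev0 /dev/dri/renderD128,gid=104

# 2) Add an SMB/CIFS storage to the Datacenter; Proxmox mounts it on each
#    node under /mnt/pve/<storage-id>. A password can be supplied with
#    --password, or the share opened for guest access.
pvesm add cifs nas-media --server 192.168.1.50 --share media --username smbuser

# Bind-mount that path into the container as a mount point.
pct set 101 --mp0 /mnt/pve/nas-media,mp=/mnt/media
```

Note that bind mounts of host paths into a container can only be added by root@pam, and file ownership on the share still has to line up with the container's shifted UIDs/GIDs for an unprivileged container to write to it.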
Mika:

I'm writing a guide on splitting a GPU passthrough across multiple #Proxmox #LXC containers, based on a few resources including the excellent Jim's Garage video.

Does anyone know why he might've chosen to map a seemingly arbitrary GID `107` in the LXC container to the Proxmox host's `render` group GID of `104`, instead of mapping `104 -> 104` as he did with the `video` group, where he mapped `44 -> 44` (which makes sense to me)?

I've watched his video what feels like a million times and gone through his heavily simplified GitHub guide (which is mostly meant for copy-pasting), and I still can't work out why. I'm not sure whether it really is arbitrary and `107` on the LXC container could be anything - including `104` if we wanted - or whether it (i.e. `107`) is supposed to be the LXC container's actual `render` group GID, in which case it should also have been `104` instead of `107` on his Debian LXC container, as it is on mine.

Anyway, I'm super excited to test this out once my #Intel #ArcA380 arrives. I could probably already test it by passing through one of my nodes' Ryzen 5 5600G iGPU, but I worry I'd screw something up, seeing that it's the only graphics on board that node.

🔗 https://github.com/JamesTurland/JimsGarage/issues/141
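Purely to illustrate the mapping under discussion (not to answer the question), here is a sketch of the relevant part of a container config, assuming a hypothetical container ID 101, the usual `/dev/dri` device numbers, and a container-side `render` GID of 104, as on the Debian container mentioned above:

```
# /etc/pve/lxc/101.conf -- container ID and device paths are placeholders.
# Allow and bind-mount the DRI nodes (226 is the drm character device major).
lxc.cgroup2.devices.allow: c 226:0 rwm
lxc.cgroup2.devices.allow: c 226:128 rwm
lxc.mount.entry: /dev/dri/card0 dev/dri/card0 none bind,optional,create=file
lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file

# Map the container's video (44) and render (assumed 104 here) groups onto
# the host's video (44) and render (104) groups; everything else stays in
# the usual 100000+ unprivileged range. Ranges must cover 0-65535 with no gaps.
lxc.idmap: u 0 100000 65536
lxc.idmap: g 0 100000 44
lxc.idmap: g 44 44 1
lxc.idmap: g 45 100045 59
lxc.idmap: g 104 104 1
lxc.idmap: g 105 100105 65431

# The host must also delegate those GIDs to root in /etc/subgid:
#   root:44:1
#   root:104:1
```

If the `107` in the referenced guide is indeed just whatever `getent group render` returned inside that particular container, then on a container where that GID is `104` the mapping would be written as above, with the surrounding ranges adjusted so they still cover 0-65535 without gaps.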