QNAP TS-877 TrueNAS Journal

you mentioned optane, so i found this video where he mentions it’s good as a SLOG device for zfs
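
for reference, adding an optane drive as a SLOG is a one-liner in zfs. a minimal sketch, assuming a pool named tank and the optane showing up as /dev/nvme0n1 (both names hypothetical):

# attach the optane as a dedicated log (SLOG) device; this only helps sync writes
zpool add tank log /dev/nvme0n1
# it should now show up under a "logs" section
zpool status tank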

:thinking:

Guess i was 1 year too late? :smiling_face_with_tear:

99 likes :smiling_face_with_three_hearts: *close enuff

So i did the 10gtek dac test, but unfortunately that didn’t work.

10GBase-CU DAC 2-m, 30AWG, Passive for CSC
brand: 10GTEK

Note: yes, i did check that both ends were inserted with a click, so the issue isn’t that. i even tried rebooting. the switch light was off for the port, so no connection.

connected from truenas to the switch. also tried desktop to switch; neither worked.

will ask 10gtek why that is :thinking:

that said the sfp+ fiber optic transceiver did work.

Well this is my first experience with dacs.

seems there’s a slew of discussions about compatibility issues :smiling_face_with_tear:

https://community.ui.com/questions/DAC-SFP-not-working-in-US-16-XG/89e2c4e6-6fc0-4c94-8e93-9d436f5b1510

May I know if you changed the link speed on XGS1250-12 from auto to DAC? If not, could you change it and test it again?

:face_with_raised_eyebrow:

*nope, checked in the switch. no dac option. tried changing the switch port from auto to 10g; nothing.

*update

i have 2 other 24-port switches with sfp+ and sfp ports, so i tested with those.

At first it didn’t work. But when i changed my switch port to 1g it worked.

So the cable works at least :thinking: I’ll test whether setting 1g makes it work for the desktop too.

*update

no it didn’t. i went to the nic and changed it to 1g (from the auto default), and the switch port is also set to 1g (from auto). still no dice; not really sure why.

the dac works from my main switch to the 2 other switches i tested; it connected fine once i set them all to 1g (the other 2 switches only do 1g on their sfp ports, as far as i am aware).
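
for what it’s worth, on the truenas (linux) side you can check and force link speed from the shell with ethtool. a minimal sketch, assuming the sfp+ interface is named enp1s0 (hypothetical, and not every 10g nic accepts a forced 1g mode):

# show supported/advertised link modes and the current speed
ethtool enp1s0
# try forcing 1g to mirror the switch port setting
ethtool -s enp1s0 speed 1000 duplex full autoneg off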

But the 10gtek pcie sfp+ card in the desktop (the same model as in the truenas) supports 10g. i even have 10g fiber working on it using their own 10gtek transceiver. hm

:cry:

You need to be more clear and specific about what you are asking, please.

A 10GbE SFP+ twinax cable is technically copper. Twinax cables are fairly low-cost and deliver full data rate with a very-low-latency connection, but are limited to about 10-15 meters.

Twinax cables are probably the recommended cabling solution for 10GbE host to switch connections.

The 10-15m distance challenge is the usual obstacle, as many environments have network equipment positioned farther away from the server devices.

In this case, using Fiber becomes the next logical option.

OM3-MMF, OM4-MMF or SMF are all perfectly capable of transporting 10GbE (and higher) for hundreds of meters.

If you use third party optics, you can get your costs per connection down under $100 without significant effort.

RJ45 to SFP+ transceivers are not supposed to exist. They do exist, but they are pushing the limits of the power (wattage) capabilities of the SFP+ sockets they are inserted into.

The SFP+ socket specification was never designed nor intended to deliver enough wattage to drive 10GbE data rates across 100 meters of CAT6.

The third-party, unsupported, unofficial RJ45 to SFP+ transceivers you find out in the wild have distance limitations of around 50 meters or so.

Those transceivers will run hot to the touch, not blistering hot, just hot, which obviously can’t be a great situation for the SFP+ socket.

They should deliver a full 10Gbps line-rate though.

When you order a new server, RJ45 vs. SFP+ LOM or NIC options cost roughly the same at time of order.
(You’re already spending $5k+ on a server; is an extra $100 for SFP+ instead of RJ45 really a big deal?)

When it comes to ordering 10GbE switches, there are dramatically more options to choose from with SFP+ than RJ45 10GbE switch offerings.

And with the increased array of options comes more competition and more products in the refurbished pipelines too. Those are good things for driving costs down.

So this all boils down (IMO) to the cost of cabling within the server farm.

Reality: 10GbE is more expensive than 1GbE was. If your business wants to go fast, they gotta pay.

If you invest in a good fiber plant, you’ll be ready for 25GbE or 40GbE, or more if you go with SMF.

400GbE (yes, four-hundred Gig) is shipping now. More speed is coming. NVMe wants to go faster.

RJ45 is not the path forward.
RJ45 is pretty much at the end of its rope with 10GbE.

CAT7 is not going to change the situation. CAT7 is still a 10Gbps cable.

https://www.reddit.com/r/networking/comments/ewmdmt/10g_sfp_copper_not_really_10gbps/

*update

new test: did a SELF LOOP test, plugging both ends of the DAC into my main switch. It worked.


so the question is: why didn’t 10gtek’s own dac work with their own SFP+ 10g pcie card (installed in a windows 11 desktop pc) to connect to the switch?

:face_with_raised_eyebrow:

I struggled with the same issue on 2 of the X520-DA1 cards and that is what fixed it for me. They now work with both the 10Gtek and FS.com DAC cables that I purchased.

https://www.reddit.com/r/freenas/comments/p2xt76/cant_get_sfp_dac_to_connect_on_x520_but_sfp_to/

I recently had the need to put together a cheap 10Gbit/s lab, including a server with a NIC. I really only needed a single interface on the server, and I preferred it to be SFP+ rather than twisted-pair for flexibility (I figured I could always stuff a TP module into an SFP+ slot if need be). While researching the various PCIe NIC options available, I arrived at the conclusion that although the X520 might be neither the absolute best performing sub-25gig adapter on the market nor the cheapest 10gig adapter on the market (second-hand or otherwise), it seemed to strike the best balance of performance, stability, wide first-party (and ongoing) platform support, and price of any of the ones I looked at.

There was only one potential wrinkle: SFP+ module compatibility. Word on the street was that some of these 82599ES-based cards were known to be “picky”. And not necessarily because there was an actual interop issue with modules that were discovered not to work, but because Intel apparently was (at least as the story goes) merely afraid of having to support untested modules that might prove to have interop issues, so they artificially blocked all but the ones on their tested-and-supported list. (sigh. There always has to be some catch, right? Other old cards of a similar vintage by the likes of Chelsio and Mellanox are long since EOL’d but will supposedly take just about any transceiver you throw at them. With Intel, though, you get widespread platform/OS compatibility with continued vendor support, only to be forced to either play “optics roulette” or go out of your way to source known-compatible modules.) But it was very difficult to ascertain specifically which cards were “problem” cards (all of the 82599s, just the Intel-branded ones, just some of the Intel-branded ones…?).

If you were a Linux user with one of these cards, and discovered that the SFP you wanted to populate your card with was either (seemingly) artificially “blacklisted” or “not whitelisted”, you were lucky: the Linux driver actually has a parameter you can pass to it that will override that check. However, other platforms, such as ESXi and Windows, are apparently not so lucky.
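
for the linux case, a minimal sketch of passing that override to the ixgbe driver (the interface has to come back up after the module reload; the modprobe.d path is the usual convention):

# one-off, for the current boot:
modprobe -r ixgbe && modprobe ixgbe allow_unsupported_sfp=1
# persistent, applied on every module load:
echo "options ixgbe allow_unsupported_sfp=1" > /etc/modprobe.d/ixgbe.conf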

But the mere existence of that override parameter in the Linux driver was interesting, and got me thinking. And after taking delivery of a card, playing with it, and doing some additional research and testing, I think I may have discovered a way to instruct any X520 that rejects non-officially-supported transceivers by default to instead allow them, regardless of which driver you are using on your operating system of choice.

My problem is that right now I lack the time and resources to fully test this theory. So instead, I’m going to document my findings here and explain how (I think) this works, and hopefully others here who would like to be able to use their cards with unsupported optics on non-Linux OSes will stumble upon this, be willing to give this a shot, post their findings, and thus either prove this theory right or lay it to rest.

One of the reasons that I can’t conclusively test this theory is because it turns out that the card I bought, which is a second-hand, Yottamark-verified genuine Intel X520-DA1, has no issues with any SFP+ module I feed it, even when I don’t supply the Linux driver with the allow_unsupported_sfp=1 parameter. So I seemingly lucked out and got one of the “good out-of-box” ones. The thing I was able to accomplish that I feel halfway proves my theory, though, is that I managed to “convert” this unlocked card to a locked version – a card that refused to work with some of my modules until I supplied that parameter to the driver – and then convert it back to an unlocked card again. What I don’t know for sure is if the mechanism described here is applicable to all X520s that are transceiver-restricted or not…apparently some (the X520-SRx and -LRx, which are basically the -DAx pre-populated with an Intel transceiver) are locked to specific Intel-manufactured SFPs while some of the -DAx models are still restricted but will accept a wider range of approved modules? It’s all still a bit unclear to me. (I also haven’t been able to test my card on an OS other than Ubuntu Server LTS, either.)

Anyway, on with the show…

----

The key to “unlocking” an X520 appears to be an undocumented bit within the card EEPROM. Most of the functions of the EEPROM are described in the 82599 datasheet (https://www.intel.com/content/dam/w…asheets/82599-10-gbe-controller-datasheet.pdf), but this one seems to be completely undocumented. The clue came from reading the sources to the Linux ixgbe driver: there is a bitfield in the EEPROM that the driver is checking which the driver source calls IXGBE_DEVICE_CAPS. (DEVICE_CAPS == “device capabilities”) So it would seem that the card uses this bitfield to inform the driver about some of its features (which presumably the OEM of each card that uses an 82599 decides for). There are other preprocessor #defines for the various features that are represented by this bitfield that are all named IXGBE_DEVICE_CAPS_*; one of them is IXGBE_DEVICE_CAPS_ALLOW_ANY_SFP, which is the first/least-significant bit.

So it seems reasonable to assume that if one could permanently flip that bit within the EEPROM of an X520 that rejects non-whitelisted SFP+ modules, then you might be able to permanently unlock that card. One of the reasons for assuming this is that the code in the Linux driver (which Intel themselves wrote large portions of) that checks the IXGBE_DEVICE_CAPS_ALLOW_ANY_SFP bit in the EEPROM long predates the addition of the “allow_unsupported_sfp” option in the driver, which was a relatively late addition (you can see this for yourself by reading through the discussion in the thread at Intel Ethernet Drivers and Utilities / [E1000-devel] [PATCH RFC] ixgbe: Module param “allow_any_sfp” for allowing unsupported SFP+ modules which is interesting reading anyway, if only for people’s reactions to the news after they learned what Intel was doing). And if Intel snuck a check of this undocumented field into the Linux driver, it also seems reasonable to assume that they make similar checks in drivers that they have written for other platforms, so the Intel-written drivers for other platforms will likely honor the “ALLOW_ANY_SFP” bit in a given card’s EEPROM if it is set, even if that driver has no way of allowing the user to override the check.

So what’s the exact offset of the bit in question, and how do we change it?
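
the post goes on to give the recipe via ethtool’s eeprom access. as a hedged sketch of the commonly reported procedure (offset 0x58 and the magic value 0x10fb8086 are values reported in that discussion for an X520-DA1, not verified here; eth0 is a placeholder):

# read back the DEVICE_CAPS byte, reported to live at offset 0x58
ethtool -e eth0 offset 0x58 length 1
# if it reads e.g. 0x5c, set the least-significant (ALLOW_ANY_SFP) bit and write 0x5d back;
# the magic is the card's PCI device/vendor id pair (0x10fb / 0x8086 for an X520-DA1)
ethtool -E eth0 magic 0x10fb8086 offset 0x58 value 0x5d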

If you aren’t planning to use the card in a Linux box, and don’t even have a Linux host to throw your card in temporarily in order to make the modification, I’m not (yet) sure how you would accomplish the same thing in Windows, but if your card is installed in an ESXi host, it appears that the local/remote “Tech Support” ESXi shell also has ‘ethtool’ and that it works exactly the same way as the Linux one does, as far as I have been able to tell. Other open-source *nix-like or Unix-derived OSes such as the *BSDs likely have ‘ethtool’ as well or something similar. (Also, macOS X apparently has ‘ethtool’, too, for what it’s worth.)

----

As far as I can tell, the only kinds of modules that a “locked” X520 blocks are unapproved, active SFP+ modules, while virtually all SFP modules and passive SFP+ DACs from any vendor will work even in a card with DEVICE_CAPS bit 1 set to 0.

One thing I’m hoping to establish through both my experimentation and yours is exactly which Intel parts have this bit flipped “off”, and whether flipping this bit “on” is a complete unlock in all circumstances or whether other things factor in. /u/eruffini posted in this Reddit thread (Intel X520-DA2 10G NIC ESXi 6u2 woes. : vmware) that he’d been told that more recently manufactured X520s are the ones with the problem. On the other hand, we also know that X520-SR1/2 and X520-LR1/2 came with Intel optics and are intended to only be used with those. Is this bit in the EEPROM what is used to lock those models down? Does setting this bit to “on” essentially convert an X520-SR1 to an X520-DA1, for example? Or are there X520-DA1/2 cards that are also locked down from the factory? Is the enforcement to use Intel optics in the SR/LR cards done via the same mechanism or a totally separate one? Are 82599-based cards manufactured by other OEMs known to restrict SFP+s in the same way, and using the same mechanism? Etc.

The other scenario that I’m hoping doesn’t end up being the case – but certainly is within the realm of possibility – is that it seems feasible that Intel could have written and released a driver that ignores this bit in the EEPROM completely and always enforces SFP+ restrictions regardless. If the driver on the host has enough influence over the card’s firmware that the SFP+ check can be overridden despite that bit (e.g. “allow_unsupported_sfp” on Linux), it is certainly plausible that this could cut both ways…

Definitely looking forward to everybody’s feedback and reports, & good luck!

– Nathan

The official Intel drivers do disable (some) non-intel optics. However in OpenBSD we have removed that limitation, because it is stupid.

https://www.reddit.com/r/freenas/comments/p2xt76/cant_get_sfp_dac_to_connect_on_x520_but_sfp_to/

Some vendors lock out SFP+ ports to work only with SFP+ transceivers running their own firmware, as a kind of shitty DRM. fs.com has a compatibility matrix/table posted somewhere, showing which vendors work with which firmware.

in regards to DAC: if one side’s sfp port only supports 1g but the other side is sfp+ 10g, it can still work. what i did was change the 10g port on the switch to 1g (default was auto) to make that work. tested on 3 different switches, same result.

1 Like

that… sounds like my situation right now. struck gold on the sfp+ fiber optic, but landed a dud on the dac :smiling_face_with_tear:

These are the 2 parts i couldn’t get to work together to connect the DAC to my switch:

10Gtek 10Gb PCI-E NIC Network Card, Single SFP+ Port, with Intel 82599EN Controller, PCI Express Ethernet LAN Adapter (seller said this is similar to either the “X520-10G-1S-X8 or X520-DA1” for compatibility)

About this item:
● Ship from worldwide FBA stock
● Controller(s): Intel 82599
● Single SFP+ Port
● Compatible with Intel X520-DA1
● Compatible with Intel CNA
● PCI-E v2.0 (5.0GT/s), X8 Lane
● Connector & Cable Medium: SFP+ Direct Attach Copper/ SFP+ Transceivers
● Equipped with a high-quality original Intel 82599EN controller, which supports I/O virtualization and makes servers more stable.
● Compatible with Windows Server 2003/2008/2012, Windows 7/8/10/Vista, Linux, VMware ESX.
● Single SFP+ port lets you connect a 10 Gigabit SFP+ module/DAC/AOC to meet the demands of data center environments.
● PCI-E X8 lane is suitable for both PCI-E X8 and PCI-E X16 slots.
● Driver CD is included. Comes with a standard-profile bracket and an additional low-profile bracket, making it easy to install the card in a small form factor/low-profile computer case/server.
● What You Get: 10Gtek 10GbE PCI-E X8 Network Card X520-10G-1S (compare to Intel X520-DA1) x1, Driver CD x1, Low-profile Bracket x1. Backed by 10Gtek’s 30-day free returns, 1-year free warranty and lifetime technology support. PS: Due to particularities of the QNAP/Synology systems, QNAP/Synology users please contact customer service before purchase.

10Gtek SFP+ DAC Twinax Cable - 10GBASE-CU Passive Direct Attach Copper SFP Cable, 0.3 M~7 Meters (mine is 2 meters)

Product Description

● Brand: 10Gtek
● Connector Type: SFP+ to SFP+
● Cable Type: Direct Attach Copper Twinax Cable
● Special Feature: Plug and play

About this item:
● 2-pair differential twinax cable, Passive, EEPROM I2C
● SFP+ Cable can connect a switch, router, server, NIC, or other fiber optic equipment with SFP+ ports for Network Attached Storage, Storage Area Network, and High Performance Computing
● Compatible with Cisco, Fortinet, Ubiquiti, Netgear, D-Link, Supermicro, Mikrotik, ZTE, Quanta, Solarflare, PaloAlto, F5, etc. devices
● 10Gtek’s automatic assembly line assures manufacturing consistency through laser cutting, aluminum shielding stripping, isolator stripping, automatic reshaping, automatic soldering and ultraviolet-ray curing. Each DAC cable takes TDR & VNA measurements, guaranteeing it passes the signal integrity test.

Features:
● 10GBase-CU
● SFP+ to SFP+ connector
● 24~30AWG
● Length: 0.5~3 meters
● 2-pair differential twinax cable
● Impedance: 100-ohm
● EEPROM I2C (MCU: customized)
● Compliant to SFP MSA
● Low power consumption
● Excellent EMI performance
● RoHS compliant
● Temp. range (℃): 0~70 (industrial temp.: customized)

Applications:
● 1-8G Fibre Channel
● 1-10G Gigabit Ethernet
● Networking, Storage, Telecommunications
● Hubs, Switches, Routers, Servers, Network Interface Cards (NICs)

this is the dac i ordered from 10gtek

anyway time for a break, gonna enjoy some Taylor Swift :smiling_face_with_three_hearts:

*update

10gtek suggested that

an HPE-compatible (HPE Aruba) 10G DAC might work. why? no idea, but that was what their engineer thought.

But seeing as i already have a working sfp+ fiber transceiver setup, i don’t need it or want to chance it again.

The dac cable i have i can simply use for connecting the switches within my rack for 10g (though a shorter cable would have been better for that, o well).

A list of docker containers i use, if anyone is interested:

  • authentik (authentication for all my docker services. i set it up passwordless, meaning i can simply use the fingerprint sensor on my android smartphone to unlock a docker service’s web ui without entering a username and password)

  • cockroachdb (just one of the dbs i use for one of the other docker containers)

  • filebrowser (a simple web-based file browser to read/write files in the shares. If you ever used File Station on QNAP, it’s something like that)

  • fireflyiii (to track finances e.g. subscription fees, bills etc)

  • glances (monitoring homelab server resource usage. Has integration with dashy)

  • grafana (monitoring)

  • homeassistant (to manage IoT devices, e.g. i use it to control an IoT bulb via android smartphone)

  • immich (the photoprism alternative. i liked it much better)

  • jellyfin (this is like your own self-hosted netflix. Its alternatives are plex and emby, but i liked jellyfin the best. For a lag-free experience, install the android player and the windows jellyfin player app for smooth playback)

  • dupeguru (this checks if you have duplicate files on your NAS. everyone misplaces stuff and can’t keep track of their files. This helps you detect dupes so you can delete them to reduce storage space waste)

  • librespeed (test network speed between NAS and client device of choice)

  • unifi controller (to adopt and manage ubiquiti unifi wireless Access Point)

  • Dashy ( a dashboard with bookmark links to your services. Keeps things organized so you don’t have to remember service urls to access them since they are all in dashy )

  • mongo (another db i use)

  • navidrome (a music server. Works with android smartphones. I use substreamer on android, which supports accessing navidrome)

  • nginx proxy manager (i don’t actively use it, but i mention it because it’s useful if you want an easier setup than traefik. I did try it myself and got it to work, but went back to traefik)

  • pihole (tested it, it’s ok. But i disabled it because i use pfsense pfblocker, so i didn’t need it)

  • portainer (i use this regularly for managing my docker containers. One of my favourite apps)

  • qdirstat (this lists all your files and folders and helps you see which of them take up the most storage space, so you can go through them starting at the biggest hogs and decide if they are still relevant or should be deleted to reclaim space)

  • qflood (basically qbittorrent with a nicer UI. i don’t torrent much, so maybe i should delete it. This would work well with the gluetun docker container for VPN, but unfortunately i didn’t manage to get that to work)

  • stirling pdf (has some features for handling pdfs. its usefulness to you may vary)

  • telegraf (monitoring resource)

  • traefik (very useful app. This adds a reverse proxy in front of all the ports used by the docker containers, so i can simply enter a local domain without appending a service port to access apps. I configured mine for a local lan setup, so i am not exposing this to the internet, though you could if you wanted to go that route. If you are a newbie i recommend you use nginx proxy manager instead, since that is easier to get started with)

  • trivy (to check if docker containers have any security issues or not)

  • uptime kuma ( this is good for keeping track of hardware or app service uptimes. Can also alert you if something goes offline so you are always aware. Very solid app )

  • vaultwarden (the self hosted version of bitwarden)

  • whoogle (a privacy-minded, self-hosted search engine)

  • youtubedl-material (can download youtube to your NAS with it for archival)

For truenas i may just not add the resource-monitoring docker apps, since truenas has netdata already, which is good enough :thinking:

1 Like

Networking

By default the jail will have full access to the host network. No further setup is required. You may download and install additional packages inside the jail. Note that some ports are already occupied by TrueNAS SCALE (e.g. 443 for the web interface), so your jail can’t listen on these ports. This is inconvenient if you want to host some services (e.g. traefik) inside the jail. To work around this issue when using host networking, you may disable DHCP and add several static IP addresses (Aliases) through the TrueNAS web interface. If you set up the TrueNAS web interface to only listen on one of these IP addresses, the ports on the remaining IP addresses remain available for the jail to listen on.

See Advanced Networking for more.

Docker

Using the docker config template is recommended if you want to run docker inside the jail. You may of course manually install docker inside a jail. But keep in mind that you need to add --system-call-filter='add_key keyctl bpf' (or disable seccomp filtering). It is not recommended to use host networking for a jail in which you run docker. Docker needs to manage iptables rules, which it can safely do in its own networking namespace (when using bridge or macvlan networking for the jail).

so apparently if the purpose of the jail is to deploy docker, the default host networking might not be good for that purpose.

so i would most definitely need to do the advanced option of either bridge or macvlan for this.
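
as for the syscall-filter caveat in that quote: a hand-rolled jail (one not using the docker template) would need that flag on its systemd-nspawn invocation. a minimal sketch, with a hypothetical jail name and path:

# only needed when NOT using jailmaker's docker config template
systemd-nspawn --machine=mydockerjail \
    --directory=/mnt/tank/jailmaker/jails/mydockerjail/rootfs \
    --system-call-filter='add_key keyctl bpf' \
    --boot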

Ok so i edited the jailmaker docker config template and placed it in the docker dataset location.

edited the script to use bridge instead of macvlan.

I then followed the steps in this video guide to set up the bridge, but i used br1 instead of his br0, because the script uses br1.
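
for context, the networking change amounts to swapping one systemd-nspawn flag in the config. an illustrative before/after (the config key name is per jailmaker’s template as i understand it, so treat it as a sketch; eno1 is a placeholder nic):

# before (macvlan on the physical nic):
#   systemd_nspawn_user_args=--network-macvlan=eno1
# after (attach to the br1 bridge created in the truenas ui):
#   systemd_nspawn_user_args=--network-bridge=br1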

after i had done that, i then went to truenas shell to run the command

jlmkr create --start --config /mnt/xxxxxx/jailmaker/docker/config docker


then i went to networks to check on that and noticed a vb-docker. what da heck is that? i checked the script; it made no mention of that.

anyway br1 is indicating it works. basically that’s an active bridge set up on the physical nic, which is the sfp+ fiber connection connecting the truenas to my switch.

supposedly this means bridge is working for jailmaker and docker at this point? :face_with_raised_eyebrow:

Anyway so no, i did not have to connect the truenas to a monitor via hdmi and attach a keyboard, since i did the shell through the truenas web ui.

Note: i did notice that, when i was following the youtube step by step to switch over to bridge, it kept resetting back to the default. No idea why. The troubleshooting section said it may do that if you had a vm, but i didn’t. Anyway, i kept repeating the steps, faster, and managed to save the setting permanently, and it stopped trying to roll back. Just thought i would mention this quirk. The jailmaker setup described for this was less risky than i thought it would be, as long as you are super careful when doing it. But even then, i had already backed up my truenas settings before i even attempted it, just in case.

I noticed that in truenas apps, the service was trying to restart.

so something broke when i implemented jailmaker or the network change: in the native truenas apps service, the 2 apps i previously installed are missing, and instead it says services are reinitializing.

So what i did was UNSET the app pool, and then that issue went away. Basically i won’t be using the native kubernetes, since i don’t know how to manage it; i only know docker, which i am running manually through jailmaker.

:face_with_raised_eyebrow:

# docker version
zsh: command not found: docker
# jlmkr list
NAME   RUNNING STARTUP GPU_INTEL GPU_NVIDIA OS     VERSION ADDRESSES    
docker True    False   True      False      debian 12      192.168.0.20…


instead of jailmaker, people suggested setting up docker this other way:
https://www.reddit.com/r/truenas/comments/18wllxd/opinions_on_using_docker_compose_in_truenas_scale/

i’d rather get jailmaker to work, but i’m at a loss here x-x;

all i need right now is to be able to use the docker commands; at that point i know what to do for deploying docker containers. But when i checked from the shell, it said not found. so what now :smiling_face_with_tear:

:thinking:

jlmkr list
NAME   RUNNING STARTUP GPU_INTEL GPU_NVIDIA OS     VERSION ADDRESSES    
docker True    False   True      False      debian 12      192.168.0.20…
root@xxxxxxxxxxxxx[~]# jlmkr log docker
Apr 23 xxxxxxxxxxxxx systemd[1]: Starting jlmkr-docker.service - My nspawn jail docker [created with jailmaker]...
Apr 23 xxxxxxxxxxxxx .ExecStartPre[xxxxxxxxxxxxx]: PRE_START_HOOK
Apr 23 xxxxxxxxxxxxx systemd[1]: Started jlmkr-docker.service - My nspawn jail docker [created with jailmaker].
Apr 23 xxxxxxxxxxxxx systemd-nspawn[xxxxxxxxxxxxx]: systemd 252.22-1~deb12u1 running in system mode (+PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 >
Apr 23 xxxxxxxxxxxxx systemd-nspawn[xxxxxxxxxxxxx]: Detected virtualization systemd-nspawn.
Apr 23 xxxxxxxxxxxxx systemd-nspawn[xxxxxxxxxxxxx]: Detected architecture x86-64.
Apr 23 xxxxxxxxxxxxx systemd-nspawn[xxxxxxxxxxxxx]: Detected first boot.
Apr 23 xxxxxxxxxxxxx systemd-nspawn[xxxxxxxxxxxxx]: 
Apr 23 xxxxxxxxxxxxx systemd-nspawn[xxxxxxxxxxxxx]: Welcome to Debian GNU/Linux 12 (bookworm)!
Apr 23 xxxxxxxxxxxxx systemd-nspawn[xxxxxxxxxxxxx]: 
Apr 23 xxxxxxxxxxxxx systemd-nspawn[xxxxxxxxxxxxx]: Initializing machine ID from container UUID.
Apr 23 xxxxxxxxxxxxx systemd-nspawn[xxxxxxxxxxxxx]: Populated /etc with preset unit settings.
Apr 23 xxxxxxxxxxxxx systemd-nspawn[xxxxxxxxxxxxx]: Queued start job for default target graphical.target.
Apr 23 xxxxxxxxxxxxx systemd-nspawn[xxxxxxxxxxxxx]: [  OK  ] Created slice system-getty.slice - Slice /system/getty.
Apr 23 xxxxxxxxxxxxx systemd-nspawn[xxxxxxxxxxxxx]: [  OK  ] Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 23 xxxxxxxxxxxxx systemd-nspawn[xxxxxxxxxxxxx]: [  OK  ] Created slice user.slice - User and Session Slice.
Apr 23 xxxxxxxxxxxxx systemd-nspawn[xxxxxxxxxxxxx]: [  OK  ] Started systemd-ask-password-consol…quests to Console Directory Watch.
Apr 23 xxxxxxxxxxxxx systemd-nspawn[xxxxxxxxxxxxx]: [  OK  ] Started systemd-ask-password-wall.p… Requests to Wall Directory Watch.
Apr 23 xxxxxxxxxxxxx systemd-nspawn[xxxxxxxxxxxxx]: [  OK  ] Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 23 xxxxxxxxxxxxx systemd-nspawn[xxxxxxxxxxxxx]: [  OK  ] Reached target integritysetup.targe…Local Integrity Protected Volumes.
Apr 23 xxxxxxxxxxxxx systemd-nspawn[xxxxxxxxxxxxx]: [  OK  ] Reached target paths.target - Path Units.
Apr 23 xxxxxxxxxxxxx systemd-nspawn[xxxxxxxxxxxxx]: [  OK  ] Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 23 xxxxxxxxxxxxx systemd-nspawn[xxxxxxxxxxxxx]: [  OK  ] Reached target remote-fs.target - Remote File Systems.
Apr 23 xxxxxxxxxxxxx systemd-nspawn[xxxxxxxxxxxxx]: [  OK  ] Reached target remote-veritysetup.t…- Remote Verity Protected Volumes.
Apr 23 xxxxxxxxxxxxx systemd-nspawn[xxxxxxxxxxxxx]: [  OK  ] Reached target slices.target - Slice Units.
Apr 23 xxxxxxxxxxxxx systemd-nspawn[xxxxxxxxxxxxx]: [  OK  ] Reached target swap.target - Swaps.
Apr 23 xxxxxxxxxxxxx systemd-nspawn[xxxxxxxxxxxxx]: [  OK  ] Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 23 xxxxxxxxxxxxx systemd-nspawn[xxxxxxxxxxxxx]: [  OK  ] Listening on systemd-initctl.socket… initctl Compatibility Named Pipe.
Apr 23 xxxxxxxxxxxxx systemd-nspawn[xxxxxxxxxxxxx]: [  OK  ] Listening on systemd-journald-dev-l…ocket - Journal Socket (/dev/log).
Apr 23 xxxxxxxxxxxxx systemd-nspawn[xxxxxxxxxxxxx]: [  OK  ] Listening on systemd-journald.socket - Journal Socket.
Apr 23 xxxxxxxxxxxxx systemd-nspawn[xxxxxxxxxxxxx]: [  OK  ] Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 23 xxxxxxxxxxxxx systemd-nspawn[xxxxxxxxxxxxx]:          Mounting dev-hugepages.mount - Huge Pages File System...
Apr 23 xxxxxxxxxxxxx systemd-nspawn[xxxxxxxxxxxxx]:          Starting nftables.service - nftables...
Apr 23 xxxxxxxxxxxxx systemd-nspawn[xxxxxxxxxxxxx]:          Starting systemd-journald.service - Journal Service...
Apr 23 xxxxxxxxxxxxx systemd-nspawn[xxxxxxxxxxxxx]:          Starting systemd-network-generator.… units from Kernel command line...
Apr 23 xxxxxxxxxxxxx systemd-nspawn[xxxxxxxxxxxxx]:          Starting systemd-remount-fs.service…nt Root and Kernel File Systems...
Apr 23 xxxxxxxxxxxxx systemd-nspawn[xxxxxxxxxxxxx]:          Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 23 xxxxxxxxxxxxx systemd-nspawn[xxxxxxxxxxxxx]: [  OK  ] Mounted dev-hugepages.mount - Huge Pages File System.
Apr 23 xxxxxxxxxxxxx systemd-nspawn[xxxxxxxxxxxxx]: [  OK  ] Finished systemd-network-generator.…rk units from Kernel command line.
Apr 23 xxxxxxxxxxxxx systemd-nspawn[xxxxxxxxxxxxx]: [  OK  ] Finished systemd-remount-fs.service…ount Root and Kernel File Systems.
Apr 23 xxxxxxxxxxxxx systemd-nspawn[xxxxxxxxxxxxx]:          Starting systemd-firstboot.service - First Boot Wizard...
Apr 23 xxxxxxxxxxxxx systemd-nspawn[xxxxxxxxxxxxx]: [  OK  ] Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 23 xxxxxxxxxxxxx systemd-nspawn[xxxxxxxxxxxxx]: [  OK  ] Started systemd-journald.service - Journal Service.
Apr 23 xxxxxxxxxxxxx systemd-nspawn[xxxxxxxxxxxxx]:          Starting systemd-journal-flush.serv…h Journal to Persistent Storage...
Apr 23 xxxxxxxxxxxxx systemd-nspawn[xxxxxxxxxxxxx]: [  OK  ] Finished systemd-firstboot.service - First Boot Wizard.
Apr 23 xxxxxxxxxxxxx systemd-nspawn[xxxxxxxxxxxxx]: [  OK  ] Reached target first-boot-complete.target - First Boot Complete.
Apr 23 xxxxxxxxxxxxx systemd-nspawn[xxxxxxxxxxxxx]:          Starting systemd-sysusers.service - Create System Users...
Apr 23 xxxxxxxxxxxxx systemd-nspawn[xxxxxxxxxxxxx]: [  OK  ] Finished systemd-sysusers.service - Create System Users.
Apr 23 xxxxxxxxxxxxx systemd-nspawn[xxxxxxxxxxxxx]: [  OK  ] Finished nftables.service - nftables.
Apr 23 xxxxxxxxxxxxx systemd-nspawn[xxxxxxxxxxxxx]: [  OK  ] Reached target network-pre.target - Preparation for Network.
Apr 23 xxxxxxxxxxxxx systemd-nspawn[xxxxxxxxxxxxx]:          Starting systemd-tmpfiles-setup-dev…ate Static Device Nodes in /dev...
Apr 23 xxxxxxxxxxxxx systemd-nspawn[xxxxxxxxxxxxx]: [  OK  ] Finished systemd-journal-flush.serv…ush Journal to Persistent Storage.
Apr 23 xxxxxxxxxxxxx systemd-nspawn[xxxxxxxxxxxxx]: [  OK  ] Finished systemd-tmpfiles-setup-dev…reate Static Device Nodes in /dev.
Apr 23 xxxxxxxxxxxxx systemd-nspawn[xxxxxxxxxxxxx]: [  OK  ] Reached target local-fs-pre.target …reparation for Local File Systems.
Apr 23 xxxxxxxxxxxxx systemd-nspawn[xxxxxxxxxxxxx]: [  OK  ] Reached target local-fs.target - Local File Systems.
Apr 23 xxxxxxxxxxxxx systemd-nspawn[xxxxxxxxxxxxx]:          Starting systemd-machine-id-commit.… a transient machine-id on disk...
Apr 23 xxxxxxxxxxxxx systemd-nspawn[xxxxxxxxxxxxx]:          Starting systemd-tmpfiles-setup.ser… Volatile Files and Directories...
Apr 23 xxxxxxxxxxxxx systemd-nspawn[xxxxxxxxxxxxx]:          Starting systemd-networkd.service - Network Configuration...
Apr 23 xxxxxxxxxxxxx systemd-nspawn[xxxxxxxxxxxxx]: [  OK  ] Finished systemd-tmpfiles-setup.ser…te Volatile Files and Directories.
Apr 23 xxxxxxxxxxxxx systemd-nspawn[xxxxxxxxxxxxx]:          Starting systemd-resolved.service - Network Name Resolution...
Apr 23 xxxxxxxxxxxxx systemd-nspawn[xxxxxxxxxxxxx]:          Starting systemd-update-utmp.servic…rd System Boot/Shutdown in UTMP...
Apr 23 xxxxxxxxxxxxx systemd-nspawn[xxxxxxxxxxxxx]: [  OK  ] Finished systemd-update-utmp.servic…cord System Boot/Shutdown in UTMP.
Apr 23 xxxxxxxxxxxxx systemd-nspawn[xxxxxxxxxxxxx]: [  OK  ] Started systemd-networkd.service - Network Configuration.
Apr 23 xxxxxxxxxxxxx systemd-nspawn[xxxxxxxxxxxxx]:          Starting systemd-networkd-wait-onli…it for Network to be Configured...
Apr 23 xxxxxxxxxxxxx systemd-nspawn[xxxxxxxxxxxxx]: [  OK  ] Started systemd-resolved.service - Network Name Resolution.
Apr 23 xxxxxxxxxxxxx systemd-nspawn[xxxxxxxxxxxxx]: [  OK  ] Reached target network.target - Network.
Apr 23 xxxxxxxxxxxxx systemd-nspawn[xxxxxxxxxxxxx]: [  OK  ] Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 23 xxxxxxxxxxxxx systemd-nspawn[xxxxxxxxxxxxx]: [  OK  ] Reached target sysinit.target - System Initialization.
Apr 23 xxxxxxxxxxxxx systemd-nspawn[xxxxxxxxxxxxx]: [  OK  ] Started apt-daily.timer - Daily apt download activities.
Apr 23 xxxxxxxxxxxxx systemd-nspawn[xxxxxxxxxxxxx]: [  OK  ] Started apt-daily-upgrade.timer - D… apt upgrade and clean activities.
Apr 23 xxxxxxxxxxxxx systemd-nspawn[xxxxxxxxxxxxx]: [  OK  ] Started dpkg-db-backup.timer - Daily dpkg database backup timer.
Apr 23 xxxxxxxxxxxxx systemd-nspawn[xxxxxxxxxxxxx]: [  OK  ] Started e2scrub_all.timer - Periodi…etadata Check for All Filesystems.
Apr 23 xxxxxxxxxxxxx systemd-nspawn[xxxxxxxxxxxxx]: [  OK  ] Started systemd-tmpfiles-clean.time… Cleanup of Temporary Directories.
Apr 23 xxxxxxxxxxxxx systemd-nspawn[xxxxxxxxxxxxx]: [  OK  ] Reached target timers.target - Timer Units.
Apr 23 xxxxxxxxxxxxx systemd-nspawn[xxxxxxxxxxxxx]: [  OK  ] Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 23 xxxxxxxxxxxxx systemd-nspawn[xxxxxxxxxxxxx]:          Starting docker.socket - Docker Socket for the API...
Apr 23 xxxxxxxxxxxxx systemd-nspawn[xxxxxxxxxxxxx]: [  OK  ] Listening on docker.socket - Docker Socket for the API.
Apr 23 xxxxxxxxxxxxx systemd-nspawn[xxxxxxxxxxxxx]: [  OK  ] Reached target sockets.target - Socket Units.
Apr 23 xxxxxxxxxxxxx systemd-nspawn[xxxxxxxxxxxxx]: [  OK  ] Reached target basic.target - Basic System.
Apr 23 xxxxxxxxxxxxx systemd-nspawn[xxxxxxxxxxxxx]:          Starting containerd.service - containerd container runtime...
Apr 23 xxxxxxxxxxxxx systemd-nspawn[xxxxxxxxxxxxx]:          Starting dbus.service - D-Bus System Message Bus...
Apr 23 xxxxxxxxxxxxx systemd-nspawn[xxxxxxxxxxxxx]:          Starting systemd-logind.service - User Login Management...
Apr 23 xxxxxxxxxxxxx systemd-nspawn[xxxxxxxxxxxxx]:          Starting systemd-user-sessions.service - Permit User Sessions...
Apr 23 xxxxxxxxxxxxx  systemd-nspawn[xxxxxxxxxxxxx]: [  OK  ] Finished systemd-user-sessions.service - Permit User Sessions.
Apr 23 xxxxxxxxxxxxx systemd-nspawn[xxxxxxxxxxxxx]: [  OK  ] Started dbus.service - D-Bus System Message Bus.
Apr 23 xxxxxxxxxxxxx systemd-nspawn[xxxxxxxxxxxxx]: [  OK  ] Started console-getty.service - Console Getty.
Apr 23 xxxxxxxxxxxxx systemd-nspawn[xxxxxxxxxxxxx]: [  OK  ] Reached target getty.target - Login Prompts.
Apr 23 xxxxxxxxxxxxx systemd-nspawn[xxxxxxxxxxxxx]: [  OK  ] Started systemd-logind.service - User Login Management.
Apr 23 xxxxxxxxxxxxx systemd-nspawn[xxxxxxxxxxxxx]: [  OK  ] Started containerd.service - containerd container runtime.
Apr 23 xxxxxxxxxxxxx systemd-nspawn[xxxxxxxxxxxxx]: 
Apr 23 xxxxxxxxxxxxx systemd-nspawn[xxxxxxxxxxxxx]: Debian GNU/Linux 12 docker pts/0
Apr 23 xxxxxxxxxxxxx systemd-nspawn[xxxxxxxxxxxxx]: 
lines 52-92/92 (END)

Docker vs Containerd: A Detailed Comparison.

ok now we are talking

root@xxxxxxxx[~]# jlmkr exec docker bash -c 'echo test; echo $RANDOM;'

test
1735
root@xxxxxxxx[~]# jlmkr exec docker bash -c docker

Usage: docker [OPTIONS] COMMAND

A self-sufficient runtime for containers

Common Commands:
run Create and run a new container from an image
exec Execute a command in a running container
ps List containers
build Build an image from a Dockerfile
pull Download an image from a registry
push Upload an image to a registry
images List images
login Log in to a registry
logout Log out from a registry
search Search Docker Hub for images
version Show the Docker version information
info Display system-wide information

Management Commands:
builder Manage builds
buildx* Docker Buildx
compose* Docker Compose
container Manage containers
context Manage contexts
image Manage images
manifest Manage Docker image manifests and manifest lists
network Manage networks
plugin Manage plugins
system Manage Docker
trust Manage trust on Docker images
volume Manage volumes

Swarm Commands:
swarm Manage Swarm

Commands:
attach Attach local standard input, output, and error streams to a running container
commit Create a new image from a container’s changes
cp Copy files/folders between a container and the local filesystem
create Create a new container
diff Inspect changes to files or directories on a container’s filesystem
events Get real time events from the server
export Export a container’s filesystem as a tar archive
history Show the history of an image
import Import the contents from a tarball to create a filesystem image
inspect Return low-level information on Docker objects
kill Kill one or more running containers
load Load an image from a tar archive or STDIN
logs Fetch the logs of a container
pause Pause all processes within one or more containers
port List port mappings or a specific mapping for the container
rename Rename a container
restart Restart one or more containers
rm Remove one or more containers
rmi Remove one or more images
save Save one or more images to a tar archive (streamed to STDOUT by default)
start Start one or more stopped containers
stats Display a live stream of container(s) resource usage statistics
stop Stop one or more running containers
tag Create a tag TARGET_IMAGE that refers to SOURCE_IMAGE
top Display the running processes of a container
unpause Unpause all processes within one or more containers
update Update configuration of one or more containers
wait Block until one or more containers stop, then print their exit codes

Global Options:
--config string Location of client config files (default "/root/.docker")
-c, --context string Name of the context to use to connect to the daemon (overrides DOCKER_HOST env var and default context set with "docker context use")
-D, --debug Enable debug mode
-H, --host list Daemon socket to connect to
-l, --log-level string Set the logging level ("debug", "info", "warn", "error", "fatal") (default "info")
--tls Use TLS; implied by --tlsverify
--tlscacert string Trust certs signed only by this CA (default "/root/.docker/ca.pem")
--tlscert string Path to TLS certificate file (default "/root/.docker/cert.pem")
--tlskey string Path to TLS key file (default "/root/.docker/key.pem")
--tlsverify Use TLS and verify the remote
-v, --version Print version information and quit

Run 'docker COMMAND --help' for more information on a command.

For more help on how to use Docker, head to Docker Docs
root@xxxxxxxx[~]#


:rofl:

If you’ve encountered the error “docker-compose command not found” on or about April 2, 2024, it means you’re using the v1 Docker Compose command.

GitHub deprecated v1, and you need to change the command from, e.g., docker-compose build to docker compose build (remove the dash)

There are other changes too, and your workflows and scripts might need migrating to v2. Take a look at the official Docker Compose V2 Migration Guide for guidance.
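
concretely, the change is just dropping the dash, e.g.:

# v1 (standalone binary, deprecated):
docker-compose up -d
# v2 (compose plugin, note the space):
docker compose up -d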

Change docker-compose to docker compose and it works.

Are you running 23.10? Because Docker has been removed in 23.10, due to k3s switching upstream from docker to containerd as the runtime.

If you want to run native docker you have two options:

  1. run a linux vm and run docker inside the vm
  2. the jailmaker script from Jip-Hop.

Option 1 is fully supported by truenas; Option 2, i believe, is not officially supported and is a more “hacky” way.

Try this:
find / -name "docker.sock"

:thinking:

Example

A use of this is making files available in a jail for it to use or serve, such as media files in Plex/Jellyfin. Example: --bind='/mnt/tank/content/:/media' will make any files inside the content dataset of the tank pool available inside the jail’s /media folder. To visualise or test this you can copy some files to /mnt/tank/content/, such as media1.mp4, media2.mkv and photo.jpg. Then change directory to that folder inside the jail (cd /media) and list the files in that directory (ls -l); those files should appear.

Warning

Do not bind your TrueNAS system directories (/root /mnt /dev /bin /etc /home /var /usr or anything else in the root directory) to your jail, as this can cause TrueNAS to lose permissions and render your TrueNAS system unusable. Best practice is to create a dataset in a pool, which also allows zfs, raidz, permissions, and backups to function. E.g. creating a websites dataset in a pool named tank, then binding --bind='/mnt/tank/websites/websitename/:/var/www/websitename/'


so the solution was

jlmkr shell docker

at this point you can do all your docker magic e.g.

docker version

Anyway yes it’s using the latest docker

more roadblocks

https://www.reddit.com/r/selfhosted/comments/1bm9vbj/fyi_docker_version_26_breaks_portainer/

new docker version broke portainer? what?!

# find / -name "docker.sock"
/run/docker.sock

stuck here, any help? :pray:

just need to reach the dataset where i have my docker compose yamls so i can then do a docker compose up.

but i can’t do that, because when i ls or cd, i cannot find or change directory to where my docker compose yamls are located. really no idea at this point.
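
presumably the fix (per the --bind example quoted earlier) is to bind the dataset holding the compose yamls into the jail. a minimal sketch, with hypothetical paths and the config key as i understand it:

# on the truenas host, add a bind to the jail's config, e.g.
#   systemd_nspawn_user_args=--bind='/mnt/tank/docker/compose/:/opt/compose'
# then restart the jail and run compose from inside it:
jlmkr stop docker
jlmkr start docker
jlmkr shell docker
cd /opt/compose/myservice
docker compose up -d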

back in qts, all this was already set up in advance, so you can just begin using it right away.

but in truenas i had to do everything from scratch. it’s not exactly easy for a newbie like me :smiling_face_with_tear:

Docker runs its containers on another bridge (172.x.x.x). I think vb-docker is that.

1 Like