Mixed SSD/HDD setup

Just set special_small_blocks equal to the recordsize of the dataset. Thus, all subsequent writes to this dataset will go to the special VDEV.
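For illustration, a minimal sketch of that setting with the zfs CLI (the pool/dataset name tank/vms and the 128K recordsize are placeholder assumptions; the pool must of course already contain a special VDEV):

```sh
# Check the dataset's recordsize (placeholder names, default 128K assumed)
zfs get recordsize tank/vms

# Route all blocks of this dataset to the special vdev by making the
# small-block cutoff equal to the recordsize
zfs set special_small_blocks=128K tank/vms

# Note: only data written from now on is affected; blocks already on the
# HDDs stay there until they are rewritten.
```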

I suspect the reason is that ZFS random IOPS scale with the number of vdevs, and a RAIDZ vdev delivers roughly the IOPS of a single drive, so three mirrors would have about 3 times the IOPS of a single RAIDZ3.


It’s worth pointing out that I had a specific question about a GIVEN SETUP; I didn’t ask for advice on why my setup is wrong, should be different, etc.

Now, while I truly appreciate any input, I do resent it when the discussion is reframed as though I had asked about things I didn’t ask about, and then argued on that basis.

I was asking about a mixed SSD/HDD pool, not about the merits of a 5-bay system, RAIDZ1 with 26 TB drives, etc. Again, I appreciate the input, but none of that was anything I asked about. I stated my intended setup as context for the actual question, not as a subject for discussion. The input is welcome, but don’t blame me for the discussion that ensued from such unsolicited input. Peace!

FYI: Proxmox Vs TrueNAS Scale: VM Performance Showdown - Tech Addressed

I thought it was kinda obvious I was talking from the usability perspective.

And you were answered (by me) in post number 3. Moving the entire dataset to the special VDEV (as a “workaround”) was mentioned at least 2 more times since then. But that’s about it. TrueNAS doesn’t have “true” auto-tiering storage.

I have been running TN CORE with all other applications either in jails or in VMs and the experience was great.

That excludes the firewall appliance, the switch and the WiFi, of course.

If CE cannot live up to that, something is seriously wrong with the TN roadmap.

Is there a roadmap? If so, it’s strictly classified…

The catch is you’re a trained sysadmin. TrueNAS the system is not suitable for non-technical users, and TrueNAS the company is unwilling to commit sufficient resources to make it more user-friendly.

Did you migrate your VMs? That’s mostly a rhetorical question. How many “conversions” have you already made? How much time have you spent on those over the years? Do you really think that needing a “conversion” every few years (because of changes to the plans! We must be AGILE!!) is a great experience? I would accept “not that bad”, but “great” is a little too much.

Of course not. As long as CORE was supported I just updated/upgraded the host system. From FreeNAS 8 all the way to TrueNAS 13. Admittedly I only got serious with VMs from TN Corral (10) on.

Or do you mean if I moved VMs between systems? Yes, sure, trivial.

zfs send/receive over ssh of the underlying zvol, create VM in the UI, boot VM, done.
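A rough sketch of that workflow, with placeholder pool, zvol, and host names:

```sh
# On the old box: snapshot the zvol that backs the VM disk
zfs snapshot tank/vm-disks/myvm@migrate

# Stream it over ssh and receive it into the target pool on the new box
zfs send tank/vm-disks/myvm@migrate | ssh newhost zfs receive newpool/vm-disks/myvm

# Then create the VM in the UI, attach the received zvol as its disk, and boot.
```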

No, I meant migrating to incus VMs. The whole idea that you must take manual action to migrate your apps/VMs during an OS update is unsound.

  • If you run services in jails/apps/etc., it will take effort on your part to maintain them every time iX/TrueNAS changes its long-term (every few years) strategy for apps (see the link above).
  • If you run services in VMs, then why not run them on a proper hypervisor? That way you would only deal with one issue (compared to the TrueNAS VM approach): virtualizing TrueNAS itself.

Just a reminder: all of this applies to a single home-server scenario.

I stated that my experience with VMs on TN CORE is great, so if it is not for most people on TN SCALE/CE, then something is apparently very wrong.

I have been running TN as my do-it-all home server for years, and I consider it to be the best platform for that. I seriously don’t know of any alternative. Proxmox does not do storage and sharing, for example. Unraid does not use ZFS.

My point is hopefully obvious now: TN CE is not an improvement over CORE.


You do you, but this best platform won’t receive any updates.

Yeah, that is the only issue.
I never used Core, but Scale’s VMs are just subpar: starting with basic functionality, like not being able to install from an ISO without sorting out permissions first (when you are already logged in as an admin, damnit), and ending with no “production-ready” VMs at all for a while, just because incus is fancier.

I took some time before answering, but I definitely wanted to, because there are some conflicting thoughts in my head about this situation.

In the past, specifically on the old forum, I argued (sometimes quite bluntly) that this is just the state of the art and people ought to get used to it.

Specifically:

So you want to run Nextcloud? Congratulations! You are now

  • a Nextcloud administrator
  • a web server administrator
  • a database administrator

so better get used to needing these skills.

I still think this is independent of the underlying container technology. With the amount of effort that went into CE over all its iterations (features delivered, pulled, reimplemented, …), we could have had a near-perfect jail-based plugin experience. None of the above goes away just because you use Docker now.

I did of course notice the short-lived “Postgres upgrade” containers etc. Well done. The app system is smoother than plugins ever were.

Yet still things will fail and users without any clue what is going on under the hood will continue to have a hard time.

Which left me pondering the last few days: why can’t we as an industry deliver an experience like starting SETUP.EXE on Windows?

There are complex multi-tier applications running on Windows servers, too. And somehow they manage to install

  • IIS if not already present
  • MS SQL if not already present
  • .NET something runtimes if not already present
  • finally the application the customer/user really wants

Including creating the necessary databases and credentials, and so on and so forth.

And a year later an upgrade of all of these components is another SETUP.EXE.

Why did we fail users so much?

I am not arguing that you shouldn’t still know networking and your firewalls if you want to expose that IIS-hosted web application to the Internet, but most open-source products completely fail to assist the user in getting them running locally in the first place.

And none of the “app” subsystems for Linux (Snap, Flathub, …) seems to have really caught on. Docker Compose is currently the best we have. At least an uninitiated user can reach out to some community and will in most cases get some YAML to copy & paste that will work.
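As a rough illustration of that copy & paste workflow (the service name, image, and ports below are made-up placeholders, not any project’s official compose file):

```sh
# Put a minimal compose file into a project directory (hypothetical app)
mkdir -p ~/someapp && cd ~/someapp
cat > compose.yaml <<'EOF'
services:
  someapp:
    image: example/someapp:latest   # placeholder image
    ports:
      - "8080:8080"
    volumes:
      - ./data:/data
    restart: unless-stopped
EOF

# Bring it up and check that the container is running
docker compose up -d
docker compose ps
```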

Nextcloud, Paperless-NGX, etc. … read those installation instructions. I used to argue that they are perfectly clear and easy to follow.

Your short remark made me think that in reality they should not be necessary.

Something’s rotten in the state of software :wink:

Take care,
Patrick

I wouldn’t say that for the entire industry. For one, enterprise software does sometimes offer that kind of experience.

To answer your question: IMO, there are two necessary preconditions, sufficient resources and determination. Determination to support things long-term and to keep improving the UX. Windows (hated by many) will launch an app that was compiled for Win98 in maybe 90% of cases, and in maybe 98% of cases with the proper compatibility flag.
Linux, OTOH, can’t frigging ship fixed Realtek 2.5G drivers for YEARS (“year of the Linux desktop”, my ass)! And I think that having a budget of a few hundred million a year is enough not to bring up the “not enough minerals resources” thing.


Regarding the resources, I think there is potential for some kind of Kickstarter for open source. Something like: resolving this issue / implementing this feature will cost us X money; we are starting a campaign! Donate for the win of your “favourite” issue!

I know. They have an entire (large) department ensuring that some Quicken-something from 20 years ago that does the banking of entire businesses will not be broken by a Windows update. Mind-blowing.

Apple, OTOH: we told you we were going to remove that feature 2 years ago. The beta has been available for a year. Fix your software.