I am 1000% aware they (iX) changed the default. I was speaking of the USER changing the ARC settings, not iX. My ARC has been set to 70% for a very long time without issue (I'd say 6 months, on Cobia though), and many testers of Dragonfish did not have any issue. There are competing data points: we have people who had trouble and a fresh install fixed it, and we have people where no fresh install but an ARC change seemed to fix it. Those do not support the same conclusion. I personally run 4 VMs and 19 apps now, with the system busy virtually 24/7, only 64 GB RAM, 70% ARC, and no issues, but on Cobia. A decent number of users report the same. The difference there is that I chose the 70% based on my specific workload, with a human (me) looking at the data, versus a system trying to adjust it automatically, on Linux, without (shall we say) the world's leading memory management. Systems running OpenZFS often just default to the 50% limit, so it's up to YOU to change it to what you want based on your skill as a system admin.
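For anyone who wants to do the same on a plain Linux/OpenZFS box, the knob is the `zfs_arc_max` module parameter (in bytes). A minimal sketch, assuming the usual sysfs and modprobe locations; the 70% figure is just my workload's number, not a recommendation, and on a TrueNAS appliance you would normally set this through the UI/init scripts rather than by hand:

```shell
# Sketch: compute N% of total RAM and use it as the ARC ceiling.
# zfs_arc_max is the standard OpenZFS module parameter (bytes);
# writing it requires root.
pct=70  # pick this from YOUR workload, not from mine
total_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
arc_bytes=$(( total_kb * 1024 * pct / 100 ))
echo "zfs_arc_max -> ${arc_bytes} bytes (${pct}% of RAM)"
# Apply at runtime (uncomment on a real system, as root):
# echo "${arc_bytes}" > /sys/module/zfs/parameters/zfs_arc_max
# Persist across reboots via modprobe config:
# echo "options zfs zfs_arc_max=${arc_bytes}" > /etc/modprobe.d/zfs.conf
```

The actual writes are left commented out so the snippet is safe to run as-is; it only prints the computed value.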
The reason for the 50% as it was before isn't iX; they just use OpenZFS, and OpenZFS decided on the 50% limit for Linux for good reasons long ago. Since that time, though, many things have improved, and the question today is whether 50% is really still important, or whether the issues that originally caused the problem have been mostly or entirely resolved. It's not a static decision; just because it was bad many years ago doesn't mean it still is (despite the very valid original reasons). It would appear thus far that there are systems where it may indeed still matter, but it's also not all systems.
The fact that a fresh install did not work for some doesn't mean it doesn't solve it for others. The fact that some didn't change the ARC but re-installed and have no issues also doesn't mean it didn't work for them.
The problem for iX to solve is whether they have a way to determine what causes the issue on some systems but not others. I'm not sure they can, but I'm hoping so. In another thread, there is a guy with 1 TB of RAM who has the issue, and his memory wasn't even remotely close to full; what about him? And there is also the bug report of a memory leak in the middleware that causes the issue, etc. Tough problem.
Then there is the question of how much memory a system has. It could generally be said that for small-RAM systems 50% is possibly too high, and for huge-memory systems it's likely way too low. On an 8 GB RAM system, is 50% (4 GB) really appropriate? It's possible, I guess, but it seems high. But on a 1 TB RAM system, do you really want, say, 400 GB of memory sitting idle and not being used at all? And there are outliers too. Man, it's 4 AM and I seem to be making all sorts of typos; hopefully I got all the facts right.
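Purely as arithmetic, here is what a flat 50% cap works out to at both extremes (a straight halving, ignoring what the OS itself needs; illustrative numbers, not tuning advice):

```shell
# A flat 50% ARC cap at the RAM extremes (illustrative only).
for gb in 8 1024; do
  cap=$(( gb / 2 ))
  rest=$(( gb - cap ))
  echo "${gb} GB system: ARC capped at ${cap} GB, ${rest} GB left for everything else"
done
```

On the small box the cap may already be tight; on the big one, half a terabyte sits outside the cache no matter how idle the rest of the system is.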
I'm not sure I've ever read of someone on Cobia who increased the ARC limit having issues; I don't recall one, but maybe there was, as I certainly don't read every TrueNAS post (I primarily follow OpenZFS). I believe a bunch of people tried raising it (like me) and it worked, so with testing going mostly well (although the TrueCharts doc guy had this issue with the Beta), iX went for it (my take). I had long run ZFS ARC above 50% pre-TrueNAS on Debian without issue, on many systems at many companies. It's a tough call really: they want to have no limit, there were many, many complaints about the 50% limit, and plenty of people don't want to change it themselves.