This thread has turned into a bit of a mess, but I thought I might add my thoughts, maybe just to get the ball rolling.
What if there were 4 NVMe slots, 2.5 Gbps networking, and 32 GB of RAM?
I had to recreate this account, so first post and I’m not going to drop a link. Please look up FriendlyELEC Rockchip 3588 NAS with your favorite search engine. I’m seeing the kit for around $300 on Amazon USA plus the cost of drives.
No, this is not a “real” production level NAS choice by a long shot. But it will get the ball rolling for under $1000.
Why do I even care? Because my lab is currently drawing a bit over 400 watts at idle (four HP DL360 Gen8s with more RAM than I’ll ever need), and I want to slim it down. Needs are SMB, NFS, and iSCSI, and hopefully about 25 watts or less at idle. Yes, I’ll be sacrificing those 10 Gbps connections, but for my lab I’m not sure I care.
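For context, a quick back-of-the-envelope calculation of what that idle draw adds up to over a year; the electricity rate below is just an assumed example figure, so plug in your own:

```python
# Rough annual cost of idle power draw (illustrative only).
# The electricity rate is an assumed example figure; substitute your own.
IDLE_NOW_W = 400        # current lab idle draw, watts
IDLE_TARGET_W = 25      # hoped-for idle draw, watts
RATE_PER_KWH = 0.15     # assumed rate, $/kWh

HOURS_PER_YEAR = 24 * 365
for watts in (IDLE_NOW_W, IDLE_TARGET_W):
    kwh = watts * HOURS_PER_YEAR / 1000
    print(f"{watts:>4} W idle ~ {kwh:6.0f} kWh/yr ~ ${kwh * RATE_PER_KWH:7.2f}/yr")
```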
The lab is currently a single TrueNAS SCALE (ElectricEel) box, three XCP-ng 8.3 hosts, and two HP T740s running VMware 8. I’m aiming for all mini PCs in the future, which needs to include a mini NAS, and I’m just not finding what I want in mini-ITX-sized x86 boards.
I know the answer: compile it myself. Will you help answer my stupid questions if I try it? You already know it isn’t a simple cross-compile, and I’m not a hardcore Linux person, so even “simple” would be a struggle. The dependencies mentioned in a post above certainly make me pause, but can these be overcome?
You get far more expandability, flexibility, IPMI, etc. at the price and watt point you desire, without any of the compromises and pain associated with trying to get SCALE to compile on ARM (plus 20 native SATA ports, SFP+, two PCIe 3.0 x8 slots, and IPMI built into the motherboard). Is your time really worth so little that you want to spend hours on a quest to get a product working on an inferior, untested platform vs. buying something you KNOW will work?
Look, I spend all sorts of time on crazy paper mache projects because I find it fun, and if getting SCALE to compile on ARM is your source of happiness, go for it! I just want my NAS to work reliably and to present as little work as possible outside the stuff I consider fun.
This is not a price issue, and this kit will NOT get anything rolling.
It is a customer demand issue. PAYING customers’ demand.
iX will dedicate resources to porting to ARM when enough customers demand TrueNAS Enterprise on ARM, and when doing so would not cannibalise their x86 sales. If ARM keeps increasing its market share in servers, this will happen, someday.
For now, downsize the lab. Consolidate servers on newer hardware. Use fewer drives, but larger ones.
What are your needs?
For storage, an X10SDV-xx-7TPnF, as suggested by @Constantin, can do a lot, plus extra apps if you go higher than two cores. But it’s not mini-ITX.
If you want mini-ITX, A2SDi-H boards have 12 SATA ports, and 10G as well. They sip power.
Open a thread…
OK, maybe you guys are right. While I still think TrueNAS on ARM is important, maybe it needs to happen when more real ARM servers are available. Same goes for RISC-V servers. I’d still like to see an alpha that might work with that Rockchip 3588, though.
I just ran a test on a VMware system with a Server 2022 guest using the local NVMe disk in one of my HP T740s, and the speed is impressive to me. I’m not sure what I could get that would be “better” with a simple 4-drive array over a 10 Gbps connection.
I’m thinking that maybe I just need to buy another cheap T740, stuff a bunch of RAM and a 10 Gbps card in it along with a single “large” NVMe drive of around 4 TB (I only have a little over 2 TB in my older server now), and just live on the edge. It’s a lab; how bad can it be to have the single disk fail?
I may swap one of my other VMware hosts to TrueNAS and give this a test to see what I can get before buying more stuff, but it looks like I should be able to build a decent small storage unit that draws around 20 watts most of the time. The only things I would lose are fault tolerance and iLO, and I hardly use iLO, so that’s not a huge thing. Maybe I’ll look into some of the N100-powered NAS boxes that are out and see if I can find one with a 10G SFP+ port. My “main” switch is a Mikrotik CRS309, which is mostly SFP+, and I have lots of DAC cables ready.
Serious ARM servers are out there. I posted a link to a reasonably priced motherboard above (for “home builds”, if one runs Xeon Scalable or EPYC 7000-class hardware at home…). The market share and customer demand are just not high enough yet.
There will be a TrueNAS image that runs on a Rockchip 3588 when iXsystems sells ARM systems with an ARM version of TrueNAS Enterprise and releases a “community” edition of it for forumers like us to be testers. Not before, and in this order.
I’m not sure whether server-grade RISC-V is a thing.
Seconding @etorix: Supermicro’s A2SDi series so rocks. Power consumption is great, you can get as many cores as your budget allows, you can get up to 12 SATA ports… The only weak point is single-thread peak CPU performance, so e.g. a Windows VM will not feel that great.
But everything else that is essentially a web application - whether it be in a FreeBSD jail or a container on SCALE … will work just splendidly.
Yes, Raspberry Pis and clones are cheap, but they are in no way a suitable platform for a NAS. And real ARM based servers are no cheaper than their Intel/AMD counterparts. Which leaves us with power considerations - well, there’s Atom and Xeon-D/E for that.
My experience, and the current state of what I run and recommend:
I’m sure this has changed with the RK3588, which is an impressive and mature SoC now (it was announced in December 2021). After three years it is a stable and well-supported SoC with quite good I/O for a smaller NAS: up to 32 GB RAM, 2.5G Ethernet, 4x PCIe 3.0 lanes (with bifurcation), one native SATA port, and eMMC 5.1. That is not much, but it should be OK for an entry-level NAS. You could get a bare 32 GB RK3588 board last year for about $175.
How to get the most from the RK3588 for now? With a cheap bifurcation adapter you can split the lanes into 2x PCIe 3.0 x2: one link for a 10G NIC (there are already one or two 2.5G ports on board) and the second for a 6-port SATA ASM1166 M.2 card. There is also the native SATA port and eMMC if needed.
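To put rough numbers on that split, here is a quick sketch assuming roughly 985 MB/s of usable bandwidth per PCIe 3.0 lane and typical drive speeds; the exact figures are assumptions and real protocol overhead will shave off a bit more:

```python
# Back-of-the-envelope bandwidth check for an RK3588 x2+x2 bifurcation split:
# one x2 link for a 10G NIC, one x2 link for a 6-port ASM1166 SATA card.
PCIE3_LANE_MBS = 8e9 * 128 / 130 / 8 / 1e6    # ~985 MB/s usable per PCIe 3.0 lane

nic_link = 2 * PCIE3_LANE_MBS                 # x2 link feeding the 10G NIC
nic_need = 10e9 / 8 / 1e6                     # 10GbE line rate ~ 1250 MB/s

sata_link = 2 * PCIE3_LANE_MBS                # x2 link feeding the ASM1166
hdd_6x = 6 * 250                              # six HDDs streaming at ~250 MB/s each
ssd_6x = 6 * 550                              # six SATA SSDs near the interface limit

print(f"x2 link usable    ~{nic_link:.0f} MB/s")
print(f"10GbE needs       ~{nic_need:.0f} MB/s -> fits on x2")
print(f"6x HDD streaming  ~{hdd_6x} MB/s -> fits on x2 (~{sata_link:.0f} MB/s)")
print(f"6x SSD streaming  ~{ssd_6x} MB/s -> exceeds x2, SSDs get throttled")
```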
There is also already some news about the upcoming RK3688: we know it will get more AI capability, ARM 9.3 cores, and a new UFS 4.0 storage option.
Radxa is now working on an ARM 9.2 board; all we know is that it will be sub-$300, we should see it before December, it will probably be based on the Cix P1, and it should have 16 PCIe 4.0 lanes available.
For now, all we have in the ARM world is the really good RK3588 SoC, but more are coming. Those should have more resources and fill the gap between the best Rockchip parts and Ampere.
Yet again I’m sure that RK3588 has that already.
SATA ports are not that compact today; for small SBCs most designs go with M.2 now, simply because SATA requires additional 5 V and 12 V power. There is one native SATA port on the RK3588; you can get to it with an M.2 A+E card on the Radxa ROCK 5B. There are also cheap 2x SATA3 adapters for M.2 A+E slots; those will be a bit limited, but they are an option too.
For multiple NVMe drives there are only 4x PCIe 3.0 lanes. But you can easily split them into x1+x1+x1+x1 (like the FriendlyElec RK3588 NAS) or x2+x2 (like the Radxa ROCK 5B+). The same is possible with a cheap passive adapter on other boards. It is also possible to use those slots for M.2 SATA adapters and network cards.
All of them come with at least one 2.5G Ethernet port; some boards have two.
5 Gbit USB Ethernet dongles are now available. This may be an option here (I really don’t like the idea, but on the other hand my high-end, heavy ThinkPad P16 Gen 2 with an i9 came without any Ethernet port, just such a dongle).
And of course, you can still sacrifice some PCIe lanes for a 10G M.2 card.
As you can see, the limits are clear. I have already seen some cards with a PLX switch, 2x 10G and 2x M.2 ports, all over 4x PCIe 3.0; not that cheap, but they should get the most out of those lanes automatically.
eMMC is for sure much better and faster than SD cards, and it should also be more durable. The same eMMC chips (from Foresee) come in an M.2 2230 form factor for the Steam Deck.
Using the native SATA port is still an option.
It supports up to 32 GB of RAM. There are some boards with 24 GB as well.
Sure, I think that with the RK3588 we are already there in terms of I/O capable of a really good NAS. There is no ECC option, among other missing features, but we can expect an update in the near future.
Those are really good server boards with a ton of I/O,
They also need far more power to run all of that. ARM boards usually start at a fraction of that power, and this is the main reason to consider a different architecture now.
There will probably not be many enterprise customers for this particular SoC.
The Raspberry Pi is really not there yet. It’s expensive and really limited in terms of I/O; I would not think of those as good ARM examples. Maybe Jeff records some videos about it, but I’m also sure he can afford something better (his Ampere build?) for all tasks.
For now, in this category some people consider the Intel N100 as an alternative. It’s a good, low-power device with roughly the same performance and I/O as the RK3588. With upcoming ARM chips there will probably be something comparable to the Core Ultra series, hopefully at around 30% lower prices.
The essence of a NAS is storage. And HDDs are still the medium of choice for capacity storage. One SATA port does not cut it, and then all of the few PCIe lanes have to go to an HBA.
Except that the Rockchip 3588 is a single chip. Well, there are variations. But some of my points still stand. That chip and related ones may have reasonable I/O, but are there ANY REAL NAS boards out today? Or soon?
Using 1x PCIe 3.0 lane (8 Gbit/s) for 6x SATA is quite limiting, especially with ZFS scrubs, which can read all disks at the same time. Even 2x PCIe 3.0 lanes (16 Gbit/s) would really only give full SATA III speed to 3 disks (which is more or less needed with modern SSDs).
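A rough sketch of that math, assuming about 985 MB/s of usable bandwidth per PCIe 3.0 lane and six disks being read simultaneously during a scrub (illustrative figures, real overhead is slightly worse):

```python
# How much bandwidth does each of six disks get during a ZFS scrub
# (all disks read at once) behind a narrow PCIe uplink?
PCIE3_LANE_MBS = 985          # assumed usable MB/s per PCIe 3.0 lane
SATA3_MBS = 550               # practical SATA III ceiling for an SSD
DISKS = 6

for lanes in (1, 2):
    per_disk = lanes * PCIE3_LANE_MBS / DISKS
    print(f"x{lanes} uplink: ~{per_disk:.0f} MB/s per disk "
          f"({per_disk / SATA3_MBS:.0%} of SATA III)")
```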
Yes, having 2 built-in Ethernet ports (even 2.5 Gbit/s) is good. But you mention 5 Gbit/s over USB! I’ve stated that for reasonable reliability and performance, USB is not suitable for TrueNAS. Okay, it may work for some people for years, decades even. But have more than 1,000 people do Ethernet via USB dongles and you will likely have some people with problems.
As for eMMC being more reliable and faster than SD cards, sure, I will agree to that. But do you KNOW what the eMMC cell lifetime is? Does it include error detection and recovery? How about spare blocks? A single bad block on the boot device may cause a TrueNAS server to crash and not boot. Recovery is easy, but the sole reason USB flash is no longer recommended is its unreliability. So how does eMMC stand up?
As I have said before, ARM will change but today is not that day in my opinion.
Yes, there are a few versions of the RK3588, but only this one has that I/O. Rockchip added chips like the RK3582, which is a stripped-down RK3588S variant (fewer CPU cores and GPU). Everything except the “full” RK3588 lacks those five PCIe lanes.
As for good boards, there is the ROCK 5 ITX as mentioned earlier, the ROCK 5B+ with two M.2 slots (both PCIe 3.0 x2), or the CM3588 NAS board. All come with at least one 2.5G port and SATA (or SATA via an M.2 option). As I said, I think that with this chip we are already at the point of considering an ARM NAS build.
Those limits are clear, but you also don’t need to use everything at max speed all the time. This applies to all components: PCIe cards, Ethernet, and SATA ports (as well as others: CPU, RAM, etc.). Also, using HDDs is fine, since they never saturate SATA3.
For 6x SATA3 ports on an ASM1166 via 2x PCIe 3.0 lanes: yes, it will limit six SSDs, but you are probably already limited by Ethernet anyway. You may also use just some of the ports for HDDs, or leave some unconnected. There are many cards with 4, 5, or 6 SATA ports, some with 8 (two SAS connectors, one via a port multiplier); all of those can be useful, maybe for some second-line backup. It’s really not a problem to use the available bandwidth at 100%; it’s just a bit harder to choose hardware wisely so that one port is not limited too much and another only partially.
I also don’t trust USB that much; it was made for different things. But the world has changed, and today a single USB cord can carry all the I/O we used to have in laptops. With 8th-gen Intel I got my first USB-C docking station, capable of 3x 4K displays, some USB ports, Ethernet, etc. It works well, and in recent ThinkPads they have removed Ethernet entirely in favour of such a dongle. In the next few years we will probably have no option other than USB-C Ethernet on all mobile computers.
As I said, the first 5 Gbit dongles have appeared, and such a controller should also work on PCIe 2.1, with transfers better than 2.5 Gbit (but not the full 5 Gbit). There are already M.2 B-key cards.
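A rough sanity check of that claim, assuming PCIe 2.x at 5 GT/s with 8b/10b encoding and ignoring further protocol overhead:

```python
# Can a 5GbE controller on a single PCIe 2.x lane deliver full line rate?
PCIE2_LANE_MBS = 5e9 * 8 / 10 / 8 / 1e6   # ~500 MB/s payload per PCIe 2.x lane
GBE_25_MBS = 2.5e9 / 8 / 1e6              # ~312 MB/s
GBE_5_MBS = 5e9 / 8 / 1e6                 # ~625 MB/s

print(f"PCIe 2.x x1 payload ~{PCIE2_LANE_MBS:.0f} MB/s")
print(f"2.5GbE line rate    ~{GBE_25_MBS:.0f} MB/s -> fits")
print(f"5GbE line rate      ~{GBE_5_MBS:.0f} MB/s -> does not fit; "
      f"expect roughly 4 Gbit/s at best before protocol overhead")
```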
Sure, it has all that; you can even query the device to ask how much of its rated life has already been used. eMMC was designed mainly for cellphones and Android/iOS, which write quite a lot. It needs to survive at least a few years of extensive usage because it cannot be easily replaced. Of course some chips may break sooner or later, probably not far off from other types of flash drives.
In real life it’s just yet another storage option: about 300 MB/s and quite durable. Much better than small SD cards, but not as good as the upcoming UFS standard (there are some boards with that already).
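As a minimal sketch of querying that wear information on Linux: newer kernels expose the eMMC life-time estimate and pre-EOL status via sysfs, though the exact path and availability vary by board and kernel, so treat this as illustrative rather than guaranteed:

```python
# Read the eMMC wear estimates that newer kernels expose via sysfs
# (life_time = estimated wear in 10% steps, pre_eol_info = reserved-block status).
# The device path is an assumption; adjust it for your board.
from pathlib import Path

dev = Path("/sys/class/block/mmcblk0/device")   # assumed eMMC device node

for attr in ("life_time", "pre_eol_info"):
    f = dev / attr
    if f.exists():
        print(f"{attr}: {f.read_text().strip()}")
    else:
        print(f"{attr}: not exposed by this kernel/board")
```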
For me, small NAS builds are already here with popular consumer hardware, and some enterprise solutions with Ampere, but there are also many less-known vendors that use Rockchips and build huge server racks with those SoCs. Search for Mixtile or T-Firefly and you will find complete server racks with multiple RK3588 boards.
ARM is a real thing today; more interesting hardware is coming and more software is being optimized for this architecture, and yet there is still no good NAS software option for it.
Sorry, but these cards have failed, or even worse caused corrupted pools, over and over again with ZFS.
Why would you build a NAS with parts that have a proven record of being unsuitable for ZFS?
If you just wanna try ZFS on ARM, you can do it already today with an RPi and, say, Ubuntu.
Why should iX be interested in porting Truenas to ARM, just to have the forums flooded with ppl losing 5 years worth of family pictures by using some obscure arm board from aliexpress?
I can’t speak for everyone, but so far I cannot confirm that. What firmware version are you talking about?
There were a few, as well as at least five board revisions. The first releases suffered from mechanical damage and weak solder joints, but those issues are already fixed. The ASM1166 in recent firmware supports low-power modes as well as SATA hotplug.
There is also the 5-port JMS585 with the same 2-lane PCIe 3.0 upstream link. The extra port on the ASM1166 is a bit beyond the available bandwidth anyway, so it’s not a big deal not to have it. As far as I remember, the JMS585 doesn’t support ASPM.
I’m already running a few builds with ZFS on ARM, and I can assure you that the Raspberry Pi is the worst option in terms of I/O; even the older RK3399 was a far better option than the current, and overall expensive, Pi 5.
So far I have not had any big issues with ZFS, but I know that recent kernel work fixed many problems there. I don’t keep my data in a single copy, so that’s no excuse not to try it.
All I wanted to say here is that on consumer SBCs SATA ports are now deprecated simply because SATA is much bigger than an M.2 NVMe card and requires separate data and power cables. That makes the build much bigger than an SBC with NVMe or just eMMC.
That does not mean there are no SATA HATs and extensions to bring it back. They just make the build much bigger and require a different power adapter than simple USB-C PD. You can still do a build with many SATA ports if you want.
I just would not expect SATA on any small SBC out of the box, unless it’s something like the ROCK 5 ITX that goes into a regular PC case.
@dominik - Those boards do look good for a small NAS.
As for a TrueNAS version on ARM64, I can’t see iX doing the work for free. The x64 CORE and SCALE versions are available as Enterprise offerings, which pays for the development. We simply get the use of the software, partially as early testers.
That said, I remember one of the iX forum members stating that if there was a general-purpose bug / feature request that helped an ARM64 version of TrueNAS, iX would seriously consider mainlining the bug / feature. That does not mean iX will write the code.
Now, on the bad side, those SoC / embedded boards tend not to have reasonable expansion. Yes, we are getting more and more M.2 PCIe expansion cards, however that form factor was not really designed for SATA or SAS connectors. Meaning the mechanical pressure of installing a cable (once per port, 5 or 6 ports) has to be applied carefully. Push too hard and the M.2 SATA board may not work properly. Or worse, appear to work fine until it heats up and a board trace loses contact.
Another nitpick is these are embedded boards, no memory expansion.
Last nitpick, if I recall correctly, most ARM boards need custom boot code.
However, keep up the research. While iX may not want to do ARM64 soon, there may come a time when they want to make their own ARM64 NAS for small office / home office use. Then it would make sense for iX to expand TrueNAS to ARM64.
The 2-core version of that series clocks in at something like 25W while it’s running. No ARM board has the built-in ports, I/O, etc. at that price point, and it would likely use a similar amount of power once you add an HBA and an SFP+ port.
I take your point that not everyone needs 20 SATA ports. For that type of comparison, I’d look to the mini-ITX X570 ASRock Rack board that has 8 OCuLink ports which can run either NVMe or SATA. It also features a built-in copper 10GbE interface. Pair it with a low-power G or GE Ryzen CPU and you’re back in the same range re: power.
Bottom line, everyone wants a low-power system, but if it’s supposed to be performant, reliable, etc., then ARM is presently not an option. What ARM would have to bring to the table (besides support for ECC RAM at a minimum) is much better power efficiency and/or performance. The building blocks are there, but it has yet to be executed in the B2C market.
That doesn’t detract from the tinkerers who get Raspberry Pis and similar SBCs to do amazing things. I simply prefer hardware that is performant, proven, and reliable when it comes to long-term data storage.
And this is the point,
While ARM can scale from small devices up to big enterprise hardware, I’m rather thinking about home usage with compact devices.
This is something that we can’t change.
It would just be great to see some effort to launch that initiative, to allow others to test and contribute to the idea. Someone from iX needs to show the path, and maybe some day it will pay off. Sometimes it’s just about being ready at the right time. Of course we are not talking about a whole rewrite, just about adapting what is already done on top of Debian.
I know that; it’s rather a feature of the most compact designs. All you have there are ultra-compact connectors, sometimes way too small (like the one on the OPi Max).
Still, there is the really interesting idea of SoMs and carrier boards, where you can get connectors fitted and as big as they need to be.
It’s also a feature.
You simply replace the whole SBC/SoM to get those extended.
Sometimes it’s possible to swap the SoM between manufacturers like Rockchip, Pi, and NVIDIA.
This is a bit complicated; indeed it’s not as easy as on x86, but on the other hand all SBCs come with some form of Debian, ready to use. There are also frameworks to compile the right kernel and build a system image for many different vendors and SoCs.
In short, the hard part today is burning the right image, booting it from the board-specific location, and getting the board set up, but the process could look different. OMV just provides a few steps to install it on Debian: add an apt repo, run a compatibility check, then install.
I’ve read several comments that it should be fairly easy to get it up and working on current ARM64 systems, so I check from time to time whether this has already started. I would not use it for anything important for now, but I’m willing to help with a few ARM boards to get it up and running, and maybe someday reliable.
Completely agree,
But with most consumer SBCs you probably won’t add an HBA or SFP+; there simply isn’t much hardware that small and power-efficient. For instance, I would love a PCIe 3.0 SFP+ adapter for an M.2 slot; I could not find anything like that, only Marvell AQC M.2 cards (Ethernet).
Same for HBA boards: none for M.2.
For home you may not need 10G yet, or SAS (still expensive), but building something out of consumer hardware is an idea. It usually draws less power and produces less heat and noise than pro-grade hardware. And the parts simply need to match each other, because it’s strange to pair a small SBC, through an M.2 riser, with a big, power-hungry PCIe HBA.
I’m thinking rather of 2-5 disk builds. In the future maybe all-SSD builds, like the one from FriendlyElec. If we get a few more PCIe lanes in next-gen SoCs, then 10G will also be easily possible.
For small ARM builds and power consumption it’s also a bit complicated. My very old four-bay ARM NAS draws about 8 W at idle. It has 2x hardware RAID and only 1 Gbit Ethernet; it’s stable and has never failed (for a few years now). It’s really small, made for fun, but it proved the idea. I have yet another, a bit newer, all-SSD version, but I have not yet measured its power consumption (probably less, at 2.5G).
Ampere has ECC support; maybe there is now something in the middle of the range with that too.
As I keep repeating, the Pi 5 is not yet there for building something reasonable, but the RK3588 is already interesting to me, and we can expect more chips coming soon with interesting specs and I/O for some interesting builds.
Therein lies the rub, i.e. ARM SoCs getting a lot more PCIe lanes to run all that flash and allow fast interfaces, yet keeping power consumption low. Those may be somewhat mutually exclusive goals. Intel invested a lot of money into power-efficient server CPUs due to customer pressure in data centers; there may be less to gain than you think by going to ARM (aside from the usual "smaller circuits = less power" evolution).
I went back and read the specs on that CM3588 board. While it has four NVMe sockets, the way it divides the lanes is kind of odd. In the end you get four Gen3 x1 links; there was a mix of different Gen2 and Gen3 configurations, but the summary is that you get four Gen3 x1 if you populate all four sockets. Probably fine for the 2.5 Gbps connection it has, and I’m not totally going to rule one of these out, but not right now.
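A quick sanity check of that "probably fine" conclusion, using approximate usable bandwidth figures (the per-lane number is an assumption, before protocol overhead):

```python
# Is one PCIe 3.0 lane per NVMe drive actually a problem behind a 2.5GbE link?
PCIE3_X1_MBS = 985            # assumed usable MB/s for a Gen3 x1 link
GBE_25_MBS = 2.5e9 / 8 / 1e6  # ~312 MB/s on the wire

print(f"Per-drive ceiling (Gen3 x1): ~{PCIE3_X1_MBS} MB/s")
print(f"2.5GbE ceiling:              ~{GBE_25_MBS:.0f} MB/s")
print(f"Network is the bottleneck by ~{PCIE3_X1_MBS / GBE_25_MBS:.1f}x")
```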