I know it’s not really my place to say but I’m a little uncomfortable with my level of involvement in this thread (even if it was very minor). I had no idea this script was destructive in nature and if I did know that, I wouldn’t have participated in this thread. Sorry. I don’t like that script and I wouldn’t run it.
No idea why that would be–it’s a disk burn-in script, so it’s to be expected that it would be destructive, and its docs clearly identify it as such. And, of course, data on any disks will be destroyed when you add them to your pool anyway.
For my part, I find it quite useful, albeit not frequently so (because I’m just not buying new disks that frequently). It doesn’t do anything that I wouldn’t or couldn’t do manually, but like any good utility, it automates the boring stuff.
I really don’t want to comment so I’ll keep it vague.
I spent about 2 minutes googling “disc burn in” and I honestly don’t see the value (that’s what backups are for). But that aside, the script is obviously not a professionally written script (no shame or insult in that) and I just have a huge tendency to distrust.
I really mean no disrespect in my ‘not professional’ comment so I’ll try and demonstrate that I don’t intend to offend.
No makefile to install (git clone to some odd directory is not an installer).
No documentation (man page) to reference.
The few conditionals I saw (at least the ones on top) were using “odd” syntax.
Example (and this gets into complicated rules too, so grain of salt): 'single' quotes indicate that no substitution is desired; "double" quotes indicate that substitution is required/tolerated.
[ 0 -eq 0 ]     # is for integers
[ "a" = "a" ]   # is for chars
and sometimes quotes and braces are not even “necessary”:
[ $_var = "abc..." ]
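To make those rules concrete, here is a tiny sketch of the conventions being described (the variable names are my own placeholders, not taken from the script):

#!/bin/sh
count=0
name="abc"
# -eq compares integers; quoting "$count" still substitutes the value but guards against empty values
if [ "$count" -eq 0 ]; then echo "count is zero"; fi
# = compares strings; double quotes allow substitution of $name while keeping it a single word
if [ "$name" = "abc" ]; then echo "name matches"; fi
# single quotes suppress substitution entirely
echo '$name prints literally here'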
So, it was just a “feel” I got when I glanced at the code (distrust first, ask questions later).
The purpose of burn-in is to weed out infant-mortality failures in drives. This is rare, but it happens.
Same as running MemTest on any incoming RAM module you purchase.
And if you do HDD burn-in in the actual NAS, it incidentally provides a test for cooling under the worst possible workload.
I guess a backup would mitigate the harm of a brand-new disk failing in the pool, but do you really think they serve the same purpose? Is the benefit of testing hardware before you put it into production not obvious?
As to your other points–so what? What need is there for a Makefile? What benefit to “installing” the script? There is indeed documentation; why do you care if it’s in the form of a manpage?
burn:
I’m not saying anyone is dumb/bad/evil for ‘burn-in’/testing/or whatever you say it is for. I have never done it and therefore do not know anything about it. If you–or everyone else on this planet–likes to use it, great! I am not telling anyone not to but chances are I will not unless I’m convinced–or need to–otherwise. Also, I don’t remember iX offering a ‘burn-in’ feature for my discs when I bought my machine from them.
The cooling concept makes perfect sense, and I can see how that would help. But I’ve never run a memtest on RAM… then again, I’m not in IT and have never had to do this professionally, so I’m useless in these types of procedures.
install:
You install because that’s how you get things in your $PATH. Same as on any other OS (Windows, Linux, or BSD).
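For example, a bare-bones user-level “install” (a generic sketch of the idea, not something this particular script ships with) is just:

mkdir -p ~/bin
cp disk-burnin.sh ~/bin/ && chmod +x ~/bin/disk-burnin.sh
echo 'export PATH="$HOME/bin:$PATH"' >> ~/.profile   # pick up ~/bin in future shells
. ~/.profile
disk-burnin.sh -L    # now callable from any directory (-L lists drive specifiers)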
docs:
I work very hard on my tools; I provide a method to install/uninstall and documentation etc. and, unfortunately, more often than not, that is for tools that no one besides me will ever see and use. Documentation is just as important as the tool itself (I said manpage because, if you are calling this tool from the command line it makes sense the documentation is there too).
I expect quality in everything/whatever (even in the t-shirt and sweatpants I am wearing right now) and, so, I strive to produce quality myself (actual smart people may laugh at the level of quality I offer but at least I’m trying). When I took time off from managing my server(s) myself I sought out quality; I spent the money and had iX build and setup my NAS server. Now that I don’t feel the same level of quality, I’m back to doing it myself (researching, discovering, creating and documenting). But that’s me. I’m not saying you or anyone else has to do that (but I won’t be using your stuff if you don’t deliver up to my expectations–it’s the same thing as everything else you buy).
Why do I answer your questions, and you don’t answer mine?
…and why does a tool you’re likely only going to use once need to be in your $PATH?
I mean, I guess it doesn’t hurt for you to create ~/bin, add it to $PATH, and put the script there, but I just don’t see any reason to do it for something that’s going to be used only rarely. Put the scripts somewhere (I create a dataset for them), change to that directory, and run. There’s nothing to “install,” and no reason to do it.
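In other words, the whole “workflow” amounts to something like this (the paths and repository URL are placeholders; use whatever location and source you actually have):

cd /mnt/tank/scripts                      # hypothetical dataset set aside for scripts
git clone https://github.com/<author>/disk-burnin-and-testing.git   # placeholder URL
cd disk-burnin-and-testing                # the docs and the script live side by side here
./disk-burnin.sh -L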
…and it’s provided for the script in question, both on its web page and in the directory that houses it. The docs aren’t in the form of a manpage, which you couldn’t use with TrueNAS anyway–the manpages are stored on a read-only filesystem, so you can’t add to them without enabling dev mode. But they’re definitely available from the command line.
I don’t see questions from you (unless it’s “why not put it in ~/bin,” in which case I’ve answered above). But in about 15 years of following this and the prior forum, this is the first time I’ve seen anyone complain that the many community-provided scripts lack a Makefile or a manpage. If you consider those essential indicators of “quality” for a script, I guess that’s up to you. But it honestly appears to me that you’re applying an inappropriate standard to these tools.
Why can it NOT? You said “you wouldn’t create a directory in the user’s $PATH (like ~/bin)”; you started this discussion. I asked you “why not?”, so I’m asking why I am wrong. The “~/bin” concept is very old, and typically along with “~/bin” there is a slew of directories like share, lib, man, and so on to promote concepts like shared libraries and permissions (and even different architectures). You add ~/bin to your path once and you’re done (and the above standard link even recommends that directory being automatically added to users’ path).
You don’t know MAN; it is a fascinating tool/story. For example, man pages have a structure (not so much in the file itself, but look at a few man pages and you will see a common format/theme). This is by design; developers all follow the basic formula, and this makes learning/using/reading easier, i.e., if you know where to find the information, it is easier for you (e.g., examples are always listed at the bottom, flags are listed at the top).
I am most certainly–and absolutely–NOT complaining! You asked for clarification, and I am saying I do things a certain way based on *my* standards; you or anyone can do what *you* want. How is a personal standard or an expectation inappropriate?! I didn’t do anything! I said I wouldn’t use something (and apologized); I was asked why; I gave a reason (good/bad/otherwise). How is that inappropriate?
Having been offline for a couple of hours, I just read all new posts in here.
As a Newbie as far as TrueNAS software matters I am really thankful for all contributions given.
I was asking for help with some technical questions, as I am not so familiar with Linux, especially the use of scripts and related practical questions like where to store them, how to run them, what to obey, and so on.
However, the last posts deal with fundamental issues. I read and respect the different opinions posted, but let me also share some thoughts.
For years now, I have been operating four different NAS: three four-bay Synology units (DS918 being the latest one) as well as a two-bay self-made NAS running OpenMediaVault, without ever having executed a burn-in for any of the hard disks installed in these systems. All run in RAID 1 or RAID 5 mode, and I have had a faulty hard disk only once, without data loss.
I didn’t find any information on Synology’s website saying that a burn-in is required or even recommended before using hard disks. The first time I read about this was in this forum and, yes, I personally find it useful but not mandatory. Otherwise, manufacturers like Synology or QNAP would publish plenty of recommendations or, put more strongly, requirements to do this before installing and using hard disks.
I am of the opinion that everyone should decide for themselves whether they want to take this measure or trust that the brand new hard disks have no defects.
Coming back to the original topic: as soon as I have managed to get the script running and the burn-in procedure has finished, I will come back with the results. This may help other Newbies who have the same problems and are willing to do burn-in procedures.
I didn’t, and don’t, say that you’re wrong in doing this. What I said is that I wouldn’t do it, the main reason being, as I already said, that I just don’t see any reason to do it. A secondary reason is that putting the scripts in ~/bin separates them from their documentation, and sometimes from files that are required in order to execute the script (for example, my Nextcloud script expects a config file to be in the same directory as the script when the script is run; Joeschmuck’s reporting script expects something similar).
So, why not do it?
It isn’t necessary for any of these kinds of tools
It will separate them from their documentation
It may break some of them
Yes, putting stuff in ~/bin is a pretty common Unix-y thing to do. But though TrueNAS is built on FreeBSD/Linux, many common Unix-y things either aren’t available, or are done differently.
I know it sufficiently for purposes of this conversation.
As I said, you’re free to apply whatever standard you like to what you run on your systems. But if that standard is that you’ll only run software you deem “professional” (which you’ve suggested it is), and further that “professional” includes that it contains a means of installing itself to your $PATH and has a manpage, I think that’s an inappropriate standard[1], particularly to suggest to others.
[1] not least because it means that literally no third-party/community-created tools are acceptable, as it’s impossible for them to contain, or at least to install, a manpage
Note: The HDs are identified as sdx, but not ada0 and so on.
2. I downloaded the script to my NAS using the git clone instruction.
3. Prior to running the complete script, I made the following test by typing at the prompt: sudo smartctl -t short /dev/ada0
I got the following message:
smartctl 7.4 2023-08-01 r5530 [x86_64-linux-6.6.32-production+truenas] (local build)
Copyright (C) 2002-23, Bruce Allen, Christian Franke, www.smartmontools.org
/dev/ada0: Unable to detect device type
Please specify device type with the -d option.
Maybe I had to use the ' sign as in dak180’s original instruction ./disk-burnin.sh -tm 'ada0 da0 ada1 da0', but when typing
sudo smartctl -t short /dev/sda,
I was successful in running smartctl, as the message
=== START OF OFFLINE IMMEDIATE AND SELF-TEST SECTION ===
Sending command: "Execute SMART Short self-test routine immediately in off-line mode".
Drive command "Execute SMART Short self-test routine immediately in off-line mode" successful.
Testing has begun.
Please wait 1 minutes for test to complete.
Test will complete after Sun Nov 17 14:22:14 2024 CET
was shown.
Obviously, I can address the correct devices by using /dev/sda, /dev/sdb and so on, but not when addressing ada0 etc…
Is it the missing ' sign?
And what is the purpose of 'ada0 da0 ada1 da0' and what is the difference to /dev/sda etc.?
./disk-burnin.sh -L would be the correct way to get the valid drive specifiers for your system.
While the script uses the smartctl command, smartctl and the script are not the same thing and are not invoked in the same way.
ada1 and da0, much like sda, are drive specifiers, and the pattern used is system-specific. The examples are just that, examples; they may or may not line up with your system. Use the output of ./disk-burnin.sh -L to guide you for your system.
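In other words, the sequence on your system would look roughly like this (the sdX names below are only an illustration; substitute whatever -L actually reports, and run with root privileges, e.g. via sudo, since smartctl requires them):

./disk-burnin.sh -L                            # list the valid drive specifiers for this system
sudo ./disk-burnin.sh -tm 'sda sdb sdc sdd'    # same pattern as the earlier example, with your own specifiers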
Since it seems like you are unfamiliar with UNIX command line norms, I would suggest reading The UNIX Command Line, A Beginner’s Guide before you continue, so that you have a better idea of what you are doing.
At first, I thought TrueNAS, as the successor of FreeNAS, would be based on Linux, but after reading a bit more about its history, I understand that it is Unix-based. So, I will follow your recommendation to become more familiar with Unix commands and their syntax in order to manage my first and second steps with TrueNAS.
0: 1 windows (created Mon Nov 18 10:44:12 2024)
1: 1 windows (created Mon Nov 18 10:44:12 2024)
2: 1 windows (created Mon Nov 18 10:44:12 2024)
3: 1 windows (created Mon Nov 18 10:44:12 2024)
4: 1 windows (created Mon Nov 18 11:22:39 2024)
However, how do I get information on the four processes I have started, using tmux, e.g. switching windows etc.?
Addendum: Disconnecting and reconnecting PuTTY leads to the following:
I typed tmux attach at the prompt and got the message no sessions. !?
Typing tmux ls, I got the message no server running on /tmp/tmux-950/default.
It looks like there is no tmux-session running. How do I get information on the started processes now?
So, the four processes seem to be running, but I can’t get access to them.
Looking at the dashboard, there is no write activity on any of the four disks. But there should be!?
See https://tmuxcheatsheet.com but keep in mind that since the sessions were started in the root context you must access them via the same context.
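Concretely, if the sessions were started via sudo (i.e. as root), something along these lines:

sudo tmux ls              # list root's tmux sessions instead of your own user's
sudo tmux attach -t 0     # attach to session 0; Ctrl-b then d detaches again without stopping it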
Or you could look at the current output of the associated log files.
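For example (the exact log file names depend on the script version, so check the directory you ran it from; the path here is a placeholder):

cd /mnt/tank/scripts/disk-burnin-and-testing   # hypothetical location of the cloned script
ls *.log
tail -f *.log              # follow the logs as the script appends to them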
During the SMART tests (which, depending on disk size, could last 24 hours or more) there will be no detectable disk activity, since the testing happens at the controller level, which the system cannot see.
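If you want reassurance that a drive is actually doing something during that phase, you can ask the drive itself, for example:

sudo smartctl -l selftest /dev/sda   # self-test log: completed tests and any test in progress
sudo smartctl -a /dev/sda | grep -A1 'Self-test execution status'   # shows percentage remaining on most drives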
FreeNAS was based on FreeBSD, which is a form of Unix. TrueNAS CORE is likewise based on FreeBSD. TrueNAS SCALE is based on Linux. This difference isn’t particularly significant in terms of learning how to use the command line, though.