Last call for feedback on upcoming UI Improvements

Headers should be clickable to sort columns. They should have up/down arrows to indicate and reverse the sort direction, or simply reverse the order with each click.

Like in CORE.

Not being able to sort the disk list is a serious PITA on large systems…

Here is an annotated/redacted screenshot from a small system with DragonFish RC.1 on it… I don’t have any CORE systems left to show how it used to work/look.

The red headers for the columns should be clickable to sort…

Ideally, I’d like the last sort order saved as some sort of preference… this was always an issue in CORE.

Here’s the feature request sitting in no-man’s land from a few weeks ago: Issue navigator - iXsystems TrueNAS Jira


Is there a discussion on what datapoints will be available for widget integration?

We were hoping to get those types of suggestions in these feedback threads. Excitement over the GPU widget designs seems to have distracted folks from the widget groups designs. In any case, I would invite everyone to make datapoint suggestions here on this thread or, for better long-term visibility, via Jira tickets. We’re very interested in what datapoints you all would like to have available.

Please note that we’re open to any datapoint suggestions, but we will prioritize datapoints that already have corresponding APIs. You can view the existing APIs from your TrueNAS system at https://your-truenas-url/api/docs
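For anyone who hasn’t browsed those docs yet, the same endpoints can be hit directly from a shell. A minimal sketch using an API key generated in the web UI; the pool endpoint is just an illustrative pick, so substitute whatever you want to query:

# List pool names via the REST API; -k skips certificate checks on self-signed HTTPS.
# $TRUENAS_API_KEY is a placeholder for an API key created in the web UI.
curl -sk -H "Authorization: Bearer $TRUENAS_API_KEY" \
  https://your-truenas-url/api/v2.0/pool | jq '.[].name'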

The new widget groups architecture will open the door for many possibilities. We really think users will like this feature since it will allow the dashboard to be populated with the exact datapoints desired.

Having looked at reporting.netdata_get_data and browsed http://TRUENAS-URL/netdata/ to see what is available there, I can say:
There is good information under ZFS Cache in netdata worth making widgets for. It’s already included in the TrueNAS Reporting tab, but it would find a nice home on the dashboard. As an example:
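A minimal sketch of pulling that ARC data through the middleware from a shell, for anyone who wants to experiment; the “arcsize” graph name and the query shape are assumptions carried over from the older reporting.get_data call, so check https://your-truenas-url/api/docs for the exact schema on your release:

# Fetch an hour of ARC size samples via the middleware client.
# "arcsize" and the {"unit": "HOUR"} query are assumptions; adjust to whatever
# graph names reporting.netdata_get_data accepts on your version.
midclt call reporting.netdata_get_data '[{"name": "arcsize"}]' '{"unit": "HOUR"}' | jq .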

As someone who uses virtual machines on SCALE, these would be handy:
vm.get_vmemory_in_use, so that you have a quick glance at how much memory, in aggregate, your VMs are actually using.
vm.query should be able to provide a list of the virtual machines and report whether or not each one is running.
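Both methods can already be poked at from a shell today, which gives a preview of what such a widget could show. A rough sketch; the exact shape of vm.query’s output (e.g. a status.state field) is an assumption and may differ by release:

# Aggregate memory currently claimed by VMs, as reported by the middleware.
midclt call vm.get_vmemory_in_use

# List each VM with its running state; the .status.state path is an assumption.
midclt call vm.query | jq '.[] | {name: .name, state: .status.state}'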

Other things I’d like to see don’t have an API associated with them, AFAIK. zpool iostat has a lot more of the story to tell than netdata does, and as far as I can tell there is no way to get aggregate usage (bandwidth/IOPS/latency) of all of the drives summed together.

I think we should care very much about pool-level aggregate performance, in most cases much more than about individual drive performance, even though the latter is important for different troubleshooting reasons.

Abridged here for simplicity, but each of these columns has a story to tell. From a widget perspective, operations, bandwidth, and total_wait would instantly give a sysadmin visibility into what’s going on with their storage system.

root@prod[~]# zpool iostat -vvylq 120
                                            capacity     operations     bandwidth    total_wait     disk_wait    syncq_wait    asyncq_wait  scrub   trim  rebuild   syncq_read    syncq_write   asyncq_read  asyncq_write   scrubq_read   trimq_write  rebuildq_write
pool                                      alloc   free   read  write   read  write   read  write   read  write   read  write   read  write   wait   wait   wait   pend  activ   pend  activ   pend  activ   pend  activ   pend  activ   pend  activ   pend  activ
----------------------------------------  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----
ice                                        141T   114T     65      0   414K      0    9ms      -    9ms      -  341us      -    1ms      -      -      -      -      0      0      0      0      0      0      0      0      0      0      0      0      0      0
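Until something like that lands in the API, one rough workaround for the aggregate numbers is to script zpool iostat’s parseable output. A minimal sketch that sums IOPS and bandwidth across all pools (it doesn’t cover the latency columns, and the field positions assume the default seven-column output):

# -H: scripted, tab-separated output; -p: exact numbers instead of human-readable units.
# Columns are: pool, alloc, free, read ops, write ops, read bytes, write bytes.
# Note: with no interval given, zpool iostat reports averages since boot; for a
# "right now" view, sample an interval and keep only the last block of lines.
zpool iostat -Hp | awk -F'\t' '
    { rops += $4; wops += $5; rbw += $6; wbw += $7 }
    END { printf "aggregate: read %d ops/s %.1f MB/s, write %d ops/s %.1f MB/s\n",
          rops, rbw/1e6, wops, wbw/1e6 }'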