Excessive disk space taken by update

Hello all

In a previous post I was asked to open a separate issue (and a possible bug report)

I have updated to ver 23.10.1.13 of truenas scale

I see that this version is taking up 58 GiB of disk space; this is way too much in my opinion.

How do I fix this?

Thank you


How did you determine this to be the case?

Can you share a screenshot of where it says 58GiB?

And, just as a shot in the dark, have you recently enabled FTP? And started putting data on the server that way?


Yeah, that seems way wrong… For what it’s worth, here is the space consumed by my boot devices with that version and RC.1.


You can get this data from zfs list -r boot-pool

Hello all

I looked at the boot screen as suggested in a similar post, and it was recommended that I open a separate thread and suggest that this is a bug (?)

FTP at initial set up = No

I had to rebuild the server on Proxmox from scratch (removed the VM from Proxmox and re-created one)

I then downloaded a fresh image to create the new VM and imported the existing pool.

Once this was done I saw that there was an update and installed it

Below is the current boot screen

Thank you all for your input

And here is (what I think is) the content of this drive

I didn’t ask whether it was part of your initial set up; I asked whether you were using it now. Because the bottom line is that you’ve put about 56 GB worth of “something” on your boot device, and FTP is the most common way I’ve seen for that to happen. But really, nothing to do but step through your filesystem to find it. What’s the output of du -shx /* ?
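As a throwaway illustration of what du -shx /* reports, here is a sketch run against a scratch directory instead of /, so every path and file name below is made up:

```shell
# Scratch-directory sketch of how du -shx summarizes disk usage per entry.
# Nothing here touches the real system; the demo directory stands in for /.
demo=$(mktemp -d)
mkdir -p "$demo/docs" "$demo/home"
dd if=/dev/zero of="$demo/docs/big.bin" bs=1024 count=2048 2>/dev/null   # ~2 MiB file
du -shx "$demo"/*    # one human-readable total per top-level entry; -x stays on one filesystem
rm -rf "$demo"
```

On the real system the directory with the runaway total (here, docs) is the one to descend into next.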


Hello, thank you for taking the time to help

I did input the command above


It gave me pages of output, as in the screenshot above.

BTW, my second server (also on the same version of TrueNAS SCALE) shows 118 GiB for the same reason

Thank you once again

Stupid “non-root admin” decisions by iX…

OK, first, stop using the “shell” in the web UI and use SSH instead. Then you’d have a scrollback buffer to see where the space is being used.

Or second, pipe the output through more so it will stop at every page: sudo du -shx /* | more.

Or third, grep for anything with a G for gigabytes: sudo du -shx /* | grep G


Thank you for your patience

First time doing SSH
I did sudo du -shx /* | grep G and I got

then I did sudo du -shx /* | more and I got


and

I hope I did what you expected

Thank you

OK, so you have 56G in your /root directory. Change there (cd /root) and do sudo du -sh *. (Note that sudo cd /root won’t work, since cd is a shell builtin; if you hit permission errors, run the du itself with sudo as shown.) Iterate down until you find what’s using that space.

du -h | sort -h is my preferred method for this sort of task; it makes things a lot more straightforward.
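A sandbox sketch of the du -h | sort -h approach, using throwaway directories as a stand-in for /root (all paths below are invented for the demo):

```shell
# Demo: sort -h orders du's human-readable sizes numerically, so the largest
# directories land at the bottom of the listing, easy to spot.
demo=$(mktemp -d)
mkdir -p "$demo/small" "$demo/large"
echo "tiny" > "$demo/small/note.txt"
dd if=/dev/zero of="$demo/large/blob.bin" bs=1024 count=4096 2>/dev/null   # ~4 MiB
( cd "$demo" && du -h | sort -h )   # biggest usage ends up last
rm -rf "$demo"
```

On the NAS, running the same pipeline from inside /root would put the 56G offender at the end of the output.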


Hello all

I did a sudo du -sh

16G
Then I did du -h | sort -h

I recognise all the crap that is listed here; this represents part of my Documents dataset

How do I fix this?

Should I just reinstall?

Well, for some reason your documents have been stored in root’s home folder.

How did you transfer them to the NAS? Via SFTP?

Heavens no, just move them to somewhere on your data pool. But the question remains how they got there.
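For reference, a move like that could look something like the sketch below. The real source would be under /root and the destination a dataset mount such as /mnt/media/Documents (both assumptions in this thread, run with sudo on the NAS), so the demo uses throwaway directories instead of touching anything real:

```shell
# Scratch-directory sketch of relocating a stray folder to the data pool.
src=$(mktemp -d)    # stands in for the stray folder in /root
dst=$(mktemp -d)    # stands in for a dataset mount on the data pool
touch "$src/report.pdf"
mv "$src" "$dst/recovered-from-root"   # mv keeps the files; rm -rf would delete them instead
ls "$dst/recovered-from-root"          # report.pdf is now on the "pool" side
rm -rf "$dst"
```

mv copies across filesystems and then removes the source, so the same command works even though /root and the pool are on different devices.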


Hello all

I have no idea

They ended up there because of my stupidity probably

They are on a disk called sda

Since I cannot move them via the GUI or via Windows, I will need to move them from the SSH connection (now that I know how to do this) or from the shell.

I do not need to move them; I would rather just delete them.

Can I please ask you for additional patience?

How do i achieve this?

Thank you for your patience

sudo zpool status and sudo lsblk would give us an overview of the situation.

How did you end up with data in the boot pool? :open_mouth:
Before you start deleting data I would suggest finding an answer to this question.

What are your shares like?

Do you really have no idea how you put those files on your NAS? On two NASs, really, because you said you’re seeing something very similar on another one. Strongly recommend you think harder about this–if you have no idea what you did, the odds are good you’ll do it again.

SSH to the server, then run sudo mc. That will start the Midnight Commander, which is probably going to be the easiest way for you to navigate and manage your files. Docs at the link above, and a tutorial here:


I’m not sure those would add much (at least, much useful) to our understanding of what’s happening. We already know OP has 56 GB of data in /root/ that doesn’t belong there (and that’s almost certainly where the 100+ GB is on the other system he mentions). In due course, OP will probably tell us that this happened using either FTP as I suggested (and OP still hasn’t addressed), or SFTP as @Stux suggested–those are the only ways that much data would get there.


@Davvo

Hi, thank you for your patience.

I truly do not know how this happened.

As mentioned, I have 2 TrueNAS SCALE instances.
One, called Truenas,
is a VM on a Proxmox server.

On the Proxmox server there are 2 SSD drives dedicated to Proxmox and the VM.
Truenas has a 64 GiB allocation on this SSD.
The Truenas pool comprises 6 disks passed through from Proxmox, exclusive to Truenas.
The pool name is media and the datasets are Apps, Computers, Backups, Documents, Movies, Music and TV Shows.

This is all identically replicated on the second server

The second server, called freenas2,

is on a Supermicro server with an SSD for the boot drive, 6 disks for the pool, and 1 disk as cache (?)
How a portion of one of the datasets ended up there… I do not know.

Here are the results of the 2 cmds
zpool status

lsblk

I know NOTHING (as Sgt. Schultz used to say) about FTP or, for that matter, SFTP

While I was learning replication and snapshots (new to me) I must have caused this issue… Not proud at all

Thank you for taking time from your weekend to help me