I think saying “TeraCopy is doing it wrong” misses the whole point of TeraCopy. One of its main features is that it performs checksumming on the source and the destination. So, in general, it probably intentionally disables any and all sneaky copy techniques that might pull the copy operation out of TeraCopy’s control and leave it unable to guarantee the copy is complete (e.g. ZFS async writes). This is a bit of an edge case, since usually the purpose of TeraCopy is to move files from one system to another.
To checksum, you have to read the whole file. So you might as well read the whole file as part of the copy operation, letting the CPU generate the checksums while the target is busy writing. Speed isn’t the primary goal of a proper archiving application; safety is. (As a side note, TeraCopy would presumably also have protected people from the block cloning corruption bug.)
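For anyone curious, the read-hash-write-then-verify pattern is easy to sketch. Here’s a minimal Python version; the 1 MiB chunk size and SHA-256 are my own assumptions, not TeraCopy’s actual engine:

```python
import hashlib
import os

CHUNK = 1 << 20  # 1 MiB per read; purely an assumption

def copy_with_checksums(src_path: str, dst_path: str) -> bool:
    """Copy src to dst, hashing the source stream while the target writes."""
    src_hash = hashlib.sha256()
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        while chunk := src.read(CHUNK):
            src_hash.update(chunk)   # CPU hashes while the target is busy writing
            dst.write(chunk)
        dst.flush()
        os.fsync(dst.fileno())       # force the data out of the write cache
    # Re-read the destination so the verify hits the target device, not RAM;
    # that is the whole point of a post-copy checksum.
    dst_hash = hashlib.sha256()
    with open(dst_path, "rb") as dst:
        while chunk := dst.read(CHUNK):
            dst_hash.update(chunk)
    return src_hash.digest() == dst_hash.digest()
```

Note that any clone/reflink shortcut would make the verify pass without ever exercising independent blocks on the target, which is exactly why a tool built around verification might refuse to use them.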
TeraCopy seamlessly replaces the copy and move functions in Explorer, allowing you to work with files as usual.
Hmm… that doesn’t sound like a nice feature. So it’s basically breaking File Explorer’s ability to do server-side copies. There are a lot of tools out there for file backups / validation (for instance robocopy, native on Windows) that don’t break core OS features.
Then again, I lived through the fiasco of university networks and Windows 98 / XP malware back in the day, which probably colored my viewpoint on this kind of software.
Fair rebuttal that I hadn’t considered. Even so, verification is optional. There’s no reason something like server-side copy and similar traffic-reduction techniques couldn’t also become optional in the software.
Maybe I’ll go through the effort of creating a feedback account with Code Sector and updating the earlier feedback request with your notes.
They could probably be made optional. SyncBack has something like six different copy-engine options. But the other argument is that using TeraCopy is itself inherently optional.
By deliberately breaking server-side optimization and running the copy through custom software. The result would be brand-new blocks of data, not a redirect. The app would deliberately break any cloning because the read and the write are separate operations. I know there are apps I use in a professional capacity that do this for optimization as well as safety: they can write to three backup drives simultaneously as a single operation, by fanning the data out from memory instead of running three separate copy operations, as sketched below.
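A rough Python sketch of that read-once, write-many pattern (the paths are hypothetical, and a real tool would add error handling and per-drive worker threads):

```python
import hashlib
import os

CHUNK = 1 << 20  # 1 MiB per read; an assumption, not any vendor's value

def fan_out_copy(src_path: str, dst_paths: list[str]) -> str:
    """Read the source once, write every chunk to all destinations."""
    digest = hashlib.sha256()
    dsts = [open(p, "wb") for p in dst_paths]
    try:
        with open(src_path, "rb") as src:
            while chunk := src.read(CHUNK):
                digest.update(chunk)   # checksum comes free with the single read
                for d in dsts:         # same in-memory chunk feeds every drive
                    d.write(chunk)
        for d in dsts:
            d.flush()
            os.fsync(d.fileno())
    finally:
        for d in dsts:
            d.close()
    return digest.hexdigest()

# Hypothetical: one read of backup.img, three simultaneous writes.
fan_out_copy("backup.img", ["/mnt/a/backup.img", "/mnt/b/backup.img", "/mnt/c/backup.img"])
```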
You could say it’s worse. You could say it’s better. I would say it’s just different. If you just need to move files from one folder to another on the same drive, an application like TeraCopy is going to be completely the wrong tool.
When it comes to ZFS, there are still tools, software, and behaviors stuck in the pre-ZFS days. This is the problem.
Take, for instance, most compression and archiving applications. Even a small change in a large archive creates a brand-“new” file rather than modifying the file “in place”. Because of this, any snapshot that contains the old .zip file holds onto a lot of wasted space.
Programs still do their own “copy-on-write”, which makes sense on non-CoW filesystems. On ZFS, however, this becomes a “cost” rather than a “benefit”.
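The pattern I mean is the classic safe-save: write a complete new file, then rename it over the original. A minimal Python sketch (the function and its arguments are illustrative, not taken from any particular application):

```python
import os
import tempfile

def safe_save(path: str, new_bytes: bytes) -> None:
    """Application-level 'copy-on-write': never touch the original in place."""
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=directory)  # temp file on the same filesystem
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(new_bytes)
            f.flush()
            os.fsync(f.fileno())    # new content is durable before the swap
        os.replace(tmp, path)       # atomic rename over the original
    except BaseException:
        os.unlink(tmp)
        raise
```

On ext4 or NTFS this is pure benefit: a crash leaves either the old file or the new one. On ZFS the old file’s blocks were already safe, and a snapshot now pins the entire old file instead of just the changed records.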
The same is true for copying. How is it “better” to write and consume an additional 10 GiB of space for a media file than to simply add new pointers to the blocks that already exist (and are already protected by ZFS’s checksumming and redundancy)?
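For what it’s worth, on a Linux box with OpenZFS 2.2+ and block cloning enabled, that “new pointer” copy is one ioctl away. A sketch, assuming the FICLONE value from linux/fs.h and made-up file names:

```python
import fcntl

FICLONE = 0x40049409  # _IOW(0x94, 9, int) from linux/fs.h: reflink ioctl

def reflink_copy(src_path: str, dst_path: str) -> None:
    """Ask the filesystem to share the source's blocks instead of rewriting them."""
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        fcntl.ioctl(dst.fileno(), FICLONE, src.fileno())

# Hypothetical 10 GiB media file: near-instant, and ~zero extra space used.
reflink_copy("movie.mkv", "movie-copy.mkv")
```

(`cp --reflink=auto` does the same thing from the shell; the filesystem rejects the ioctl if it can’t clone.)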
there are still tools, software, and behaviors stuck in the pre-ZFS days. This is the problem.
ZFS is still a niche. On Windows we’re still in the “pre-ZFS days” and may be forever. On Linux it’s still persona non grata. So saying that all tools should be ZFS-aware is a little presumptuous.
And even then, to what extreme should archival tools be ZFS-ready? Should a tool use zfs send instead of SMB entirely before it counts as a proper ZFS archival tool? Moving things around on a single dataset isn’t what tools like TeraCopy are made for. They’re made for moving between volumes, which will require a copy/write anyway. So it’s sensible that they optimize for read/write patterns, assuming there will be a read and at least one write regardless of the destination filesystem.
I think it’s even more problematic in this case. With the SMB protocol, all the client does is ask the server to copy chunks of file A to an offset in file B. If you can’t trust your server to do that correctly, I can’t imagine trusting it with anything else.
If the kernel client operation fails for some reason (like a severe network disruption), Windows will delete the target file and redo the copy.
Nice to see my old issue posted here. I had the exact same problem as the OP a few months ago. I’ve been (and still am) using TeraCopy for years and years. It’s just plain faster in many scenarios for local copies than the native Windows file-copy functionality, and also much more versatile.
That’s why I, just like OP, completely forgot to mention using it when I wrote my own thread about the same problem - TeraCopy was just “the normal way” of copying files for me (at least on Windows).
It would be great if you guys could give my feedback to Code Sector a push - this would be very useful functionality, with the option to turn it off, of course.
I saw you want to copy files within the NAS via SMB.
Another protocol supports server-side copy: try NFS 4.2. Not sure if Windows supports it, but since M$ develops the kernel, everything is possible.
In the case of NFS, it is protocol-version dependent. The original question was about making a second copy of a file within the server, and I answered for that. NFS 4.1 did not support this, and it was less stable (more sensitive to network outages) than NFS 4.2.
Some details, from Wikipedia for example:
" NFS version 4.2 (RFC 7862) was published in November 2016[[9]] with new features including: server-side clone and copy, application I/O advise, sparse files, space reservation, application data block (ADB), labeled NFS with sec_label that accommodates any MAC security system, and two new operations for pNFS (LAYOUTERROR and LAYOUTSTATS)."
But no one is discussing NFS here, and even if TeraCopy supported NFS it would probably still avoid performing server-side copies, because it is not designed to do that.
Thanks for your reply. It’s not something I need to do frequently; I was just creating a batch of large files to check my network transfer speed, which has been greatly improved by swapping my 100 Mb switch for a new 1 Gb device. So my question was more of a technical query than a problem needing a solution, but I see it has generated quite some interest.