I am using robocopy to replicate data from multiple local storage devices to my home Synology DS218+ NAS. I noticed that throughput for large (multi-GB) files was significantly slower than what Windows Explorer achieves. With Explorer, reads and writes of large files would usually sustain around 113 MB/s (~900 Mbps) for the duration of the transfer. With robocopy, based on what I could see in Windows Performance Monitor and verified in the NAS management console, the same large-file transfers would usually run at only 40-50 MB/s (~300-400 Mbps).

As expected, the /MT option had no impact on throughput, since multithreading generally improves performance for many smaller transfers rather than a single large one. Neither did the /J (unbuffered I/O) switch make a difference.

As the scripts and scheduled task I had put in place were working well, I was about ready to give up on the performance discrepancy and say "Ah, it works. Good enough." But I just had to figure it out. After multiple trial-and-error runs, it turns out the culprit is the /B switch. If I simply remove /B (which has the added benefit of not requiring administrator privileges), performance is unleashed back to the full 900 Mbps+, even a little faster than Explorer.

Any ideas why backup mode would hurt performance on large file transfers so much? For additional info, this is effectively the command line I'm using, though not exactly, because I use job (.rcj) files to keep things tidy:
robocopy "E:\" "\\NAS\backup\E-Drive" /MIR /A-:SH /FFT /SL /XJD /R:1 /W:3 /LOG+:output.log /NDL /TEE /NP
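In case it helps anyone reproduce the comparison, this is roughly how I timed the two cases. It is just a sketch, not my actual script: `bigfile.bin` is a placeholder name for a single multi-GB test file, the paths are the same ones from the command above, and the /B run must be done from an elevated prompt:

```shell
:: Rough A/B timing of one large file, with and without backup mode (/B).
:: bigfile.bin is a hypothetical placeholder; adjust paths to your environment.
:: First run (with /B) requires an Administrator command prompt.
powershell -Command "Measure-Command { robocopy 'E:\' '\\NAS\backup\E-Drive' bigfile.bin /B /NP } | Format-List TotalSeconds"
:: Second run (no /B) works from a standard, non-elevated prompt.
powershell -Command "Measure-Command { robocopy 'E:\' '\\NAS\backup\E-Drive' bigfile.bin /NP } | Format-List TotalSeconds"
```

Dividing the file size by TotalSeconds gives the effective throughput; in my case the /B run lands around 40-50 MB/s and the run without it saturates the gigabit link.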