<div dir="ltr"><div dir="ltr"><br></div><br><div class="gmail_quote gmail_quote_container"><div dir="ltr" class="gmail_attr">On Fri, Sep 5, 2025 at 2:27 PM William Park via kwlug-disc <<a href="mailto:kwlug-disc@kwlug.org">kwlug-disc@kwlug.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><br>
<br>It seems 1M-8M is the optimum, and also the shortest to type.<br>
<br><br>
</blockquote></div><div><br clear="all"></div><div>My experience with dd goes back to SCO and HP Unix and includes slow devices like 60MB tape (which took several hours to write). Lately, the slow devices are SD cards on Linux. Back then, I found that block sizes larger than the default worked better; the default was something like 512 bytes. A server back then might have had only 8MB of RAM, so I doubt I used 1M, more likely a multiple of the block size, like 64K. But on Linux I've been going with 1M because it's easy to type and I have a lot more than that in total RAM.<br><br>In the olden times I figured efficiency meant making sure the device controller had enough data that it could write efficiently: keeping a tape streaming (instead of stopping, repositioning, then writing), or supplying a disk controller with enough data to write as much of a track as possible. Today, with no mechanical or analog media limitations, I wonder where the efficiency gains actually come from. Fewer interrupts? Fewer system calls?<br><br>I'm glad to see my gut feeling and laziness in typing worked for me.</div><span class="gmail_signature_prefix">-- </span><br><div dir="ltr" class="gmail_signature"><div dir="ltr"><div>John Van Ostrand<br></div><div>At large on sabbatical<br></div><br></div></div></div>
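For anyone who wants to test this on their own hardware, something like the loop below is a rough sketch (`/tmp/ddtest` is just an assumed throwaway scratch file; substitute your actual SD card device, carefully, to measure real media):

```shell
# Rough benchmark sketch: write the same 16 MiB with several block sizes
# and let dd report throughput on its final status line.
# /tmp/ddtest is an assumed scratch path, not a real device.
total=$((16 * 1024 * 1024))
for bs in 512 4096 65536 1048576; do
    count=$((total / bs))
    echo "bs=$bs"
    # conv=fsync flushes to the medium so the page cache doesn't hide
    # the difference; dd's throughput summary is the last stderr line.
    dd if=/dev/zero of=/tmp/ddtest bs="$bs" count="$count" conv=fsync 2>&1 |
        tail -n 1
done
rm -f /tmp/ddtest
```

On a modern machine the differences tend to show up as CPU time and system-call counts rather than raw throughput; running the same dd under `strace -c` makes the per-block-size syscall overhead visible.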