[arm-allstar] faster SD card write

Fred Moore fred at fmeco.com
Sun Nov 22 04:35:33 EST 2015


While I certainly agree with the general sentiment that most people don't
write enough cards for the time to make much difference, some of us insist
on being tech-weenies and want to understand everything that is going
on.  General settings are good starting points and will certainly get
you there, and I don't have a problem with using them.

David, I want to point out that the script you found is not a good test
for what we do.  It makes a bad assumption: it creates its test file
using the same block size it is trying to measure.  That hides any
buffering effects whenever the file being read is smaller than the disk,
memory, USB channel, or SD card write buffers.  In every case I can
imagine, the effective block size on the read side (if=) is actually
governed by the OS buffer size and the drive's internal buffer size,
and manufacturers are typically huge liars about the latter.  Look at how
much disk space is used up when we write a 100-byte file: in most cases
the allocation block is 4096 bytes, depending on the drive and filesystem.
I only point this out because it needs to be taken into account.
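A quick way to see that allocation granularity for yourself (the temp
directory and file name here are just stand-ins):

```shell
# A 100-byte file still occupies at least one full filesystem block
# (commonly 4096 bytes) on most drives and filesystems.
tmpdir=$(mktemp -d)
head -c 100 /dev/urandom > "$tmpdir/tiny"
stat -c 'apparent size: %s bytes' "$tmpdir/tiny"
du -h "$tmpdir/tiny"    # typically reports 4.0K, not 100 bytes
rm -r "$tmpdir"
```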

In all cases, when writing an image we are using a fixed file size, and
we are not writing blocks of zeros; our data is effectively random.  When
that file got to the disk drive, the drive most likely decided how to
block it (also an issue between drive manufacturers and OSes, and it will
vary between them).  The peak you published could be from hitting a size
limitation, i.e., we suddenly started buffering and re-blocking somewhere
between disk, memory, the USB channel, or the device's internal write
buffers, anywhere in the system.  We haven't even mentioned what happens
when we force writes across block boundaries, which is a huge OS hit.

As others have pointed out, the USB device you are plugged into and the
card you are writing to are also involved, and should be considered part
of the overall system you are testing.

Anyway, to get this thread back to normal: I only brought this whole
thing up because the generalities that have been stated COULD be poor
choices depending on the hardware, the OS, what you are doing, and other
factors; they certainly were in my case.  If anyone is having problems,
do some testing.  In my case I found that bs=128k, writing to the USB
drive un-buffered, gave my fastest times by a huge margin.  How much
time?  I can now write an image in 2+ minutes, versus the 12+ minutes I
started out at using the generalities stated on this list the first time
I created an AllStar image.
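As a sketch of the kind of timing test worth running yourself: the
scratch file below is a stand-in for the card, and the count/size are
arbitrary.  On real hardware you would point of= at the (unmounted)
/dev/sdX, where oflag=direct is the usual way to get un-buffered writes
(it is not supported on every filesystem, which is why the sketch uses
conv=fsync instead to keep the page cache from flattering the number):

```shell
# Time a 16 MiB copy at bs=128k with an fsync at the end, so the
# reported rate reflects data actually flushed to the target.
SCRATCH=$(mktemp)
dd if=/dev/zero of="$SCRATCH" bs=128k count=128 conv=fsync 2>&1 | tail -n 1
rm "$SCRATCH"
```

The last line of dd's output is the bytes-copied summary with the
effective transfer rate.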

My perspective is different from most people's here: I back up some very
critical files from my servers to the house, and then as a final archive
I write them to an SD card; they are also backed up to the cloud.  I am
paranoid about data integrity, so checksums are generated at each point
in the transfer, and transfers over the network use zfs send/receive so
I can be sure I have 100% good data at every step.  Did I say I hate
disk drives?  They are everyone's enemy.  I only run ZFS filesystems
because I am paranoid about bad data.  So I do much more than just write
AllStar images with dd.  In my case the testing was worth the effort,
but again, I needed to understand what was going on, so I did research
and testing.  I suggest the same to anyone who thinks they are having a
problem.
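A minimal sketch of that kind of checksum verification after writing an
image, with scratch files standing in for both the image and the card
(on real hardware the target would be /dev/sdX, and since the card is
larger than the image, only the image's length is read back):

```shell
# Write a stand-in "image" to a stand-in target, then read back
# exactly the image's length and compare SHA-256 checksums.
IMG=$(mktemp); TARGET=$(mktemp)
head -c 1048576 /dev/urandom > "$IMG"        # 1 MiB stand-in image
dd if="$IMG" of="$TARGET" bs=128k conv=fsync 2>/dev/null
SIZE=$(stat -c %s "$IMG")
SUM_SRC=$(sha256sum < "$IMG" | cut -d' ' -f1)
SUM_DST=$(head -c "$SIZE" "$TARGET" | sha256sum | cut -d' ' -f1)
[ "$SUM_SRC" = "$SUM_DST" ] && echo "checksums match" || echo "MISMATCH"
rm "$IMG" "$TARGET"
```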

regards.. Fred


On 11/21/15 9:13 PM, David McGough wrote:
> Hi Guys,
>
> I found this thread interesting, too. Over the last few years, I've
> written firmware images to hundreds or thousands of flash cards. And,
> years ago, I just increased the buffer sizes in dd (probably following
> advice from some website). The increased size made sense to me; it was
> much faster, and I didn't study it any further. Typically, the
> "recommended" command I mention to people is very similar to:
>
> dd if=<input file> of=/dev/sdX bs=1M
>
> or
>
> zcat <gzipped input image file> | dd of=/dev/sdX bs=1M
>
> ...where /dev/sdX is the linux SD card block interface.
>
> So, since there is some interest and I'm wondering about the optimal
> settings, too, I did some googling and testing. I found this webpage that
> has some scripts: http://blog.tdg5.com/tuning-dd-block-size/ ...Please
> find the test script I used attached to bottom of this message (below).  
> For this test, I transferred a 64MB file of zeros from /dev/zero.
>
>
> I tried 3 test setups, all Linux:
>
> First, my primary setup is a roaring 4-core high-speed 3.6GHz 64-bit linux
> PC running Debian Wheezy, with a cheapo 8GB Sandisk SD card attached via a
> good performing external USB 2.0 hub. Here are the results:
>
> david-vb:/home/mcgough/ham/RasPi2# ./dd_obs_test.sh /dev/sdf
>
> block size : transfer rate
>      512 : 2.2 MB/s
>     1024 : 2.2 MB/s
>     2048 : 2.2 MB/s
>     4096 : 5.8 MB/s
>     8192 : 5.8 MB/s
>    16384 : 5.8 MB/s
>    32768 : 5.8 MB/s
>    65536 : 5.8 MB/s
>   131072 : 5.8 MB/s
>   262144 : 5.8 MB/s
>   524288 : 5.8 MB/s
>  1048576 : 5.4 MB/s
>  2097152 : 5.8 MB/s
>  4194304 : 5.4 MB/s
>  8388608 : 5.8 MB/s
> 16777216 : 5.4 MB/s
> 33554432 : 5.4 MB/s
> 67108864 : 5.4 MB/s
>
> Second, the same setup as above, except the SD card reader is plugged 
> directly into a USB 2.0 port on the PC, rather than the external 
> hub...Virtually the same speed:
>
> david-vb:/home/mcgough/ham/RasPi2# ./dd_obs_test.sh /dev/sdf
> block size : transfer rate
>      512 : 2.4 MB/s
>     1024 : 2.2 MB/s
>     2048 : 2.2 MB/s
>     4096 : 5.8 MB/s
>     8192 : 5.8 MB/s
>    16384 : 5.8 MB/s
>    32768 : 5.8 MB/s
>    65536 : 5.8 MB/s
>   131072 : 5.8 MB/s
>   262144 : 5.8 MB/s
>   524288 : 5.8 MB/s
>  1048576 : 5.8 MB/s
>  2097152 : 5.4 MB/s
>  4194304 : 5.8 MB/s
>  8388608 : 5.8 MB/s
> 16777216 : 5.4 MB/s
> 33554432 : 5.3 MB/s
> 67108864 : 5.4 MB/s
>
> Finally, the same test, using the same SD card reader, but now plugged 
> into a RPi2 USB port. Note that the RPi2 is running our latest & greatest 
> beta firmware with a 4.1.13 kernel. Also, the SD card block device is 
> now /dev/sda (it was /dev/sdf on the PC):
>
>
> [root at RPi2-dev2 ~]# ./dd_obs_test.sh /dev/sda
> block size : transfer rate
>      512 : 2.3 MB/s
>     1024 : 2.3 MB/s
>     2048 : 2.4 MB/s
>     4096 : 5.6 MB/s
>     8192 : 5.6 MB/s
>    16384 : 5.6 MB/s
>    32768 : 5.7 MB/s
>    65536 : 5.6 MB/s
>   131072 : 5.6 MB/s
>   262144 : 5.6 MB/s
>   524288 : 5.6 MB/s
>  1048576 : 5.6 MB/s
>  2097152 : 5.6 MB/s
>  4194304 : 5.6 MB/s
>  8388608 : 5.6 MB/s
> 16777216 : 5.6 MB/s
> 33554432 : 5.6 MB/s
> 67108864 : 5.5 MB/s
>
> Again, using the RPi2 also yielded the results I expected.
>
> So, on the positive side, using a 1 megabyte buffer (like I typically 
> tell people) is perfectly acceptable....But, a smaller buffer is fine 
> too--ANY buffer size of at least 4K bytes seems to yield optimal throughput. 
>
> NOTE that another useful test would be to monitor CPU use while performing 
> each test and report this as well...I expect the larger buffer sizes will 
> win here, slightly, with quickly diminishing returns once past perhaps 
> 128K byte sizes....
>
> As with everything, your mileage may vary. I recommend trying this script 
> on -YOUR- system, too. And, please report the results!
>
> Here is the script:
>
> #!/bin/bash
> #------------------------------------------------------------------------------------
>
> # Since we're dealing with dd, abort if any errors occur
> set -e
>
> TEST_FILE=${1:-dd_obs_testfile}
> # Note whether the target already exists (a bare failing test would abort under set -e)
> if [ -e "$TEST_FILE" ]; then TEST_FILE_EXISTS=0; else TEST_FILE_EXISTS=1; fi
> ###TEST_FILE_SIZE=134217728
> TEST_FILE_SIZE=67108864
>
> # Header
> PRINTF_FORMAT="%8s : %s\n"
> printf "$PRINTF_FORMAT" 'block size' 'transfer rate'
>
> # Block sizes of 512b 1K 2K 4K 8K 16K 32K 64K 128K 256K 512K 1M 2M 4M 8M 16M 32M 64M
> for BLOCK_SIZE in 512 1024 2048 4096 8192 16384 32768 65536 131072 262144 \
>                  524288 1048576 2097152 4194304 8388608 16777216 33554432 67108864
> do
>   # Calculate number of segments required to copy
>   COUNT=$(($TEST_FILE_SIZE / $BLOCK_SIZE))
>
>   if [ $COUNT -le 0 ]; then
>     echo "Block size of $BLOCK_SIZE estimated to require $COUNT blocks, aborting further tests."
>     break
>   fi
>
>   # Create a test file with the specified block size
>   DD_RESULT=$(dd if=/dev/zero of=$TEST_FILE bs=$BLOCK_SIZE count=$COUNT 2>&1 1>/dev/null)
>
>   # Extract the transfer rate from dd's STDERR output
>   TRANSFER_RATE=$(echo $DD_RESULT | \grep --only-matching -E '[0-9.]+ ([MGk]?B|bytes)/s(ec)?')
>
>   # Clean up the test file if we created one
>   [ $TEST_FILE_EXISTS -ne 0 ] && rm $TEST_FILE
>
>   # Output the result
>   printf "$PRINTF_FORMAT" "$BLOCK_SIZE" "$TRANSFER_RATE"
> done
> #------------------------------------------------------------------------------------
>
>
> Have fun!
>
> 73, David KB4FXC
>
> _______________________________________________
>
> arm-allstar mailing list
> arm-allstar at hamvoip.org
> http://lists.hamvoip.org/cgi-bin/mailman/listinfo/arm-allstar
>
> Visit the BBB and RPi2 web page - http://hamvoip.org
>

-- 
Fred Moore
email: fred at fmeco.com
       fred at safes.com
phone:  321-217-8699



