If Googlebot visits your website, it caches the whole HTML page. This can be abused to share an arbitrary file: the downloaders never need to access your own server, which saves bandwidth and money. I tried this successfully with this picture using this HTML page. Note that this is anything but efficient; it is only a proof of concept. The general procedure is as follows:

1. Convert the binary file you want to share with uuencode to get a nice-looking ASCII file.
2. Generate an HTML page. I wrote a small bash script for this:
#!/bin/bash
# Usage: googlesave.sh <uuencoded-file>
hash="$(/usr/bin/sha1sum "$1" | cut -d' ' -f1)"
count=0
echo "<html>
<head><title>$1 Googlesave</title></head>
<body>
<h1>$1 Googlesave</h1> <br>
generated $(date)<br><br>"
while read line
        do echo "$hash number$count $line<br>"
        let count=count+1
done < "$1"
echo "END!<br>
Produced by Hanno Rein's Googlesave $(date)
</body></html>"
This generates an HTML page with the hash of the file at the beginning of each line, followed by a counter and the actual data.

3. Place a link to this HTML page somewhere Googlebot can find it and wait until it has been indexed.

4. Distribute the file's hash and its length in lines (the last counter value appearing in the generated page) to your downloaders.
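To make this step concrete, here is one way to compute the two values; `sample.uu` below is a made-up stand-in for the real uuencoded file:

```shell
# Stand-in for the uuencoded file produced in step 1.
printf 'line one\nline two\nline three\n' > sample.uu
# SHA-1 hash of the file (sha1sum prints "<hash>  <filename>").
hash=$(sha1sum sample.uu | cut -d' ' -f1)
# Number of data lines the downloaders will have to fetch.
lines=$(wc -l < sample.uu)
echo "$hash $lines"
```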

5. They then search Google for the hash followed by a line number. The snippet in Google's results includes the search term, but also the characters that follow it (the actual data). This can be extracted and the file reconstructed. The following bash script does this:

# Usage: googleload.sh <hash> <number-of-lines>
count=0
countmax=$2
while [ $count -lt $countmax ]; do
        data=$(lynx -dump "http://www.google.com/search?q=$1+number$count" | grep '\.\.\.' | sed 's/[ .]//g')
        echo $data
        let count=count+1
done
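The `grep`/`sed` pipeline above can be tried offline on a fake snippet line (the hash and data below are invented). Note that it simply strips all spaces and dots, so the hash and the counter stay glued to the data and have to be trimmed off afterwards:

```shell
# A made-up Google result snippet: "... <hash> number<n> <data> ..."
snippet='... da39a3ee5e6b4b0d3255bfef95601890afd80709 number3 M961T82QI;F4 ...'
# Strip spaces and literal dots, exactly as the download script does.
echo "$snippet" | grep '\.\.\.' | sed 's/[ .]//g'
# prints da39a3ee5e6b4b0d3255bfef95601890afd80709number3M961T82QI;F4
```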

6. Now use uudecode to get the binary file back.