If Googlebot visits your website, it caches the whole HTML page.
This can be used to share an arbitrary file.
The downloaders do not need to access your own server.
This saves bandwidth and money.
I tried this successfully with this picture using this HTML page.
Please note that this is anything but efficient; it is only a proof of concept.
However, the general procedure is as follows:
Convert the binary file you want to share with uuencode to get a nice looking ASCII file.
Generate an HTML page. I wrote a small bash script for this:
# Usage: googlesave.sh file.uue > file.html (md5sum is assumed as the hash here)
hash=$(md5sum "$1" | cut -d' ' -f1)
count=0
echo "<h1>$1 Googlesave</h1> <br>"
while read line
do echo "$hash number$count $line"; count=$((count+1))
done < "$1"
echo "Produced by Hanno Rein's Googlesave $(date)"
This generates an HTML page in which each line starts with a hashcode of the file, followed by an incrementing counter and the data.
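The per-line format can be sketched as follows (the hash value and the two uuencoded data lines are illustrative):

```shell
# Illustrative hash and data; sketches the format the generator emits.
hash=0a1b2c3d
count=0
while read line; do
    echo "$hash number$count $line"
    count=$((count+1))
done <<'EOF'
M9F]O8F%R
M8F%Z<74`
EOF
# prints:
# 0a1b2c3d number0 M9F]O8F%R
# 0a1b2c3d number1 M8F%Z<74`
```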
Place a link to this HTML page somewhere Googlebot can find it and wait until it appears in Google's index.
Now you need to distribute the file's hashcode and its length (the last counter value in the file generated above) to your downloaders.
They now search Google for the hashcode. The result snippet on Google includes the hashcode we searched for, but also the characters that follow it (the actual data). These can be extracted and the file reconstructed. The following bash script does this:
count=0; countmax=$2    # $1 is the hashcode, $2 the length of the file
while [ $count -lt $countmax ]; do
    data=$(lynx -dump "http://www.google.com/search?q=$1+number$count" | grep '\.\.\.' | sed 's/[ .]//g')
    echo "$data"        # the hashcode/number prefix still needs stripping before uudecode
    count=$((count+1))
done
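As a small illustration of the extraction step (the snippet text below is made up), the sed call in the script simply strips all spaces and dots from a result line:

```shell
# Made-up snippet text as it might appear in a Google result preview.
snippet='... 0a1b2c3d number0 M9F]O8F%R ...'
data=$(echo "$snippet" | sed 's/[ .]//g')
echo "$data"   # 0a1b2c3dnumber0M9F]O8F%R
```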
Now use uudecode to get back a binary file.