OVERVIEW:
S3Cmd is a Python script (plus supporting libraries) that gives you access to
Amazon S3 from the Shell. It also provides support for CloudFront and S3 Websites.
Most of the online storage and backup services out there use proprietary
Windows/OSX clients or require using a browser to upload data.
Since Amazon provides an API to connect directly to their servers, I thought it
would be nice if the Amiga, too, could have transparent access to their cloud
storage service.
After exploring the various solutions I settled on S3Cmd, as it only needs
Python (which comes pre-installed with OS4), and I wanted to keep the
installation as simple as possible so that anyone else could use it too.
With the included S3Cmd script you will be able to easily load/save, share and
backup/restore data on the cloud with your Amiga.
No more excuses for not having backups ;-)
REQUIREMENTS:
For S3Cmd to be of any use you'll need an Amazon Web Services (AWS) account.
Amazon currently gives you a whole year for free from the date you sign up, and
their free offer includes a lot more than storage. For example, you can create
an M1 cloud server instance with 30GB of EBS storage, a dedicated IP, root
access and the ability to install any Linux version and software you want.
See http://aws.amazon.com/free/ for details
INSTALLATION:
1) Copy the S3C drawer wherever you like
2) If you don't have a $HOME variable defined (try 'GetEnv HOME' to find out),
open the Shell and type:
'MakeDir ENVARC:HOME'
'SetEnv SAVE HOME ENVARC:HOME'
You can use any drawer you like instead of ENVARC:HOME; it just seems like the
best place for it, as $HOME is often used by Linux programs to store settings.
3) In the Shell, type: 'DH1:S3C/S3Cmd --configure' (replace DH1: with your path)
It will ask for your AWS keys and whether you want to use GPG, HTTPS and proxies.
- If you have GPG installed you could try it; otherwise answer 'n' and press
Enter when it asks for the GPG passphrase.
- I recommend using HTTPS, just to be sure your keys aren't sent as plain text.
- I haven't tested the GPG and proxy support, so I don't know whether they work
or not.
- If needed you can reconfigure S3Cmd later with '--configure'.
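Your answers are saved in a plain text file called ".s3cfg" in your $HOME
drawer (so ENVARC:HOME/.s3cfg with the setup from step 2). If you ever need to
check or correct a setting by hand, the relevant entries look roughly like
this (the values below are placeholders, not real keys):
access_key = YOUR-ACCESS-KEY-ID
secret_key = YOUR-SECRET-ACCESS-KEY
use_https = True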
USAGE:
After installing S3Cmd you will be able to access your S3 storage transparently
from the Shell with the S3Cmd command. You don't have to 'CD' to the S3C drawer
first; you can call it from anywhere.
You can use S3Cmd as-is (see the How-To in the S3C/README and 'S3Cmd --help'),
for example:
'S3Cmd mb s3://amigarulez' (create a bucket named "amigarulez" if the name is free)
'S3Cmd put DH1:myfile s3://amigarulez/newdrawer/' (put a file in a new drawer)
'S3Cmd ls s3://amigarulez/newdrawer/' (list contents of newdrawer)
'S3Cmd get s3://amigarulez/newdrawer/myfile RAM:myfile' (download file to RAM:)
'S3Cmd sync DH1:MyPrecious/ s3://amigarulez/MyPrecious/' (incremental backup)
'S3Cmd sync s3://amigarulez/MyPrecious/ DH1:MyPrecious/' (incremental restore)
'S3Cmd -r del s3://amigarulez/newdrawer' (delete newdrawer recursively)
'S3Cmd rb s3://amigarulez' (remove the amigarulez bucket)
or you can create aliases and scripts to get more Amiga-like commands operating
on a single bucket for common tasks such as:
S3List, S3Put, S3Get, S3Delete, S3Info, etc.
For example, if you own a bucket called "amigarulez":
'Alias S3 "DH1:S3C/S3Cmd"'
'Alias S3List S3 ls s3://amigarulez/[]'
'Alias S3Put S3 put -r [] s3://amigarulez/'
'Alias S3Get S3 get -r s3://amigarulez/[]'
'Alias S3Delete S3 del s3://amigarulez/[]'
'Alias S3DelAll S3 -r del s3://amigarulez/[]'
'Alias S3Info S3 info s3://amigarulez/[]'
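To keep the aliases between sessions you can add those lines to
S:Shell-Startup, which is executed by every new Shell. Assuming the definitions
above, a quick session could then look like this (the file names are just
examples):
'S3List' (list everything in the bucket)
'S3List newdrawer/' (list just one drawer)
'S3Put RAM:Report.txt' (upload a file to the bucket root)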
More complex operations, like copying files to a new drawer or moving/renaming
objects, require multiple arguments which can't be concatenated with Alias, so
you would have to create small scripts; for example, an S:Shell/S3Move script:
.key SOURCE/A,DESTINATION/A
DH1:S3C/S3Cmd mv s3://amigarulez/<SOURCE> s3://amigarulez/<DESTINATION>
or a more flexible S3Put script than the S3Put alias above, S:Shell/S3Put:
.key SOURCE/A/M,DESTINATION/A
DH1:S3C/S3Cmd put -r <SOURCE> s3://amigarulez/<DESTINATION>
or an S3Copy script to duplicate files on the remote side, S:Shell/S3Copy:
.key SOURCE/A,DESTINATION/A
DH1:S3C/S3Cmd cp s3://amigarulez/<SOURCE> s3://amigarulez/<DESTINATION>
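You can run such scripts with 'Execute' or, more conveniently, call them
directly by name if you set the script protection bit and add the drawer to
the command path, for example:
'Protect S:Shell/S3Move s ADD'
'Path S:Shell ADD'
'S3Move photos/old.jpg photos/new.jpg' (the object names are just examples)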
Finally, a script to automatically make daily, weekly or monthly backups of
the SYS: partition, incrementally (it's fast as it only copies newer files).
(To be useful it should be started from WBStartup or by some cron job; one way
to do that is shown after the script.)
Lab Top
Wait UNTIL 12:00 ; set a time when the machine will be ON with light use
If `Date LFormat %e` EQ 1 ; 1st of the month
  DH1:S3C/S3Cmd --delete-removed sync SYS: s3://amigarulez/backups/monthly/
Else
  If `Date LFormat %w` EQ 1 ; Monday
    DH1:S3C/S3Cmd --delete-removed sync SYS: s3://amigarulez/backups/weekly/
  Else ; any other day
    DH1:S3C/S3Cmd --delete-removed sync SYS: s3://amigarulez/backups/daily/
  EndIf
EndIf
Skip Top BACK ; jump back to Top and wait again (most likely until the next day)
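One way to start it automatically in the background at boot, instead of going
through WBStartup, is a line like the following in S:User-Startup (this assumes
you saved the script as S:Shell/S3Backup; the name is just an example):
'Run >NIL: Execute S:Shell/S3Backup'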
HINTS:
Trailing slashes matter: always append a slash "/" after drawer names unless
you are trying to delete, or get some information on, the drawer itself.
With 'put' and 'sync', if you specify the source as "RAM:Drawer" instead of
"RAM:Drawer/", it will create two drawers, "RAM" and "RAM/Drawer", which
probably isn't what you want.
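For example, assuming the bucket from the examples above, to upload the
contents of RAM:Drawer into a "Drawer" drawer on S3:
'S3Cmd put -r RAM:Drawer/ s3://amigarulez/Drawer/' (right: uploads the contents)
'S3Cmd put -r RAM:Drawer s3://amigarulez/Drawer/' (wrong: adds the extra "RAM" level described above)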
I recommend using a single bucket and neatly organized drawers as it's easier
to have a constant S3 URI for your scripts and aliases.
If you want to use multiple buckets, you could create an ENV:S3Bucket variable
and then put $S3Bucket instead of the bucket name in your scripts; that way you
can use 'SetEnv S3Bucket name' to switch buckets without modifying the scripts.
With aliases you can't store a reference to a variable, so you'd have to use
escaped backticks like this: 'Alias S3List S3 ls s3://*`GetEnv S3Bucket*`/[]'
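For example, the S3Move script from the USAGE section could be rewritten to
read the variable, so it follows whatever 'SetEnv S3Bucket name' you last
issued:
.key SOURCE/A,DESTINATION/A
DH1:S3C/S3Cmd mv s3://$S3Bucket/<SOURCE> s3://$S3Bucket/<DESTINATION>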
If you enable public read permission on drawers or files (with 'setacl') you
can share your files by giving people links such as:
http://s3.amazonaws.com/amigarulez/photos/myphoto.jpg or
http://amigarulez.s3.amazonaws.com/photos/myphoto.jpg
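To toggle that permission on a single file you would use something along these
lines (check 'S3Cmd --help' for the exact option names in your version):
'S3Cmd setacl --acl-public s3://amigarulez/photos/myphoto.jpg' (anyone with the link can read it)
'S3Cmd setacl --acl-private s3://amigarulez/photos/myphoto.jpg' (make it private again)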
Use the 'S3Cmd du' command to keep an eye on how much storage you use; remember
that you can only use up to 5GB for free on S3. As you also get plenty of
storage on the EC2/EBS side, you could always move stuff from S3 to EBS.
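For example, with the bucket used throughout this guide:
'S3Cmd du s3://amigarulez/' (report how much space the whole bucket uses)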
If you have Python: on an FFS partition, you can speed it up a great deal by
activating caching with 'C:fs_plugin_cache SYS: S=20000' (20MB cache).
With ARexx for Python and ProAction from Andy Broad you can quickly create some
ReAction GUIs to interact with S3.
If someone has the time and patience to make an s3-handler to mount buckets as
volumes like FTPMount and NetFS, please do!
If you run into issues, ask on the AW forum; that's the one I am most likely
to visit regularly.
CREDITS:
All the hard work was done by Michal Ludvig (S3Cmd); I only added a few lines
of code to prevent infinite loops and crashes caused by the colon in Amiga
volume names.
Enjoy!