To install s3cmd on your Ubuntu system:
sudo apt-get install s3cmd
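To confirm the installation worked, you can print the installed version (this uses the --version option described in the help output below):
s3cmd --version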
Once s3cmd is installed, you need to configure it with your access key and secret key so it can work with your S3 bucket.
To create the configuration, run the following command:
s3cmd --configure
You will be asked for two keys: the access key and the secret key.
NOTE: These are the same keys you use with the ec2-upload-bundle command.
The full help output of s3cmd is reproduced further below.
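Before that, here is an abridged sketch of what the interactive configuration session looks like; the exact prompt wording varies between s3cmd versions, and the bracketed values are placeholders you replace with your own:
Access Key: <your access key>
Secret Key: <your secret key>
Encryption password:
Path to GPG program [/usr/bin/gpg]:
Use HTTPS protocol [No]:
Test access with supplied credentials? [Y/n] y
Save settings? [y/N] y
Configuration saved to '/home/<user>/.s3cfg'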
A sample .s3cfg file:
cat .s3cfg
[default]
access_key = <your access key>
bucket_location = US
cloudfront_host = cloudfront.amazonaws.com
default_mime_type = binary/octet-stream
delete_removed = False
dry_run = False
enable_multipart = True
encoding = UTF-8
encrypt = False
follow_symlinks = False
force = False
get_continue = False
gpg_command = /usr/bin/gpg
gpg_decrypt = %(gpg_command)s -d --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
gpg_encrypt = %(gpg_command)s -c --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
gpg_passphrase = <your gpg passphrase>
guess_mime_type = True
host_base = s3.amazonaws.com
host_bucket = %(bucket)s.s3.amazonaws.com
human_readable_sizes = False
invalidate_on_cf = False
list_md5 = False
log_target_prefix =
mime_type =
multipart_chunk_size_mb = 15
preserve_attrs = True
progress_meter = True
proxy_host =
proxy_port = 0
recursive = False
recv_chunk = 4096
reduced_redundancy = False
secret_key = <your secret key>
send_chunk = 4096
simpledb_host = sdb.amazonaws.com
skip_existing = False
socket_timeout = 300
urlencoding_mode = normal
use_https = False
verbosity = WARNING
website_endpoint = http://%(bucket)s.s3-website-%(location)s.amazonaws.com/
website_error =
website_index = index.html
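With the configuration in place, a quick way to verify that everything works is to create a bucket, upload a file, and list it back. All three commands appear in the help output below; the bucket and file names here are just examples:
s3cmd mb s3://my-test-bucket
s3cmd put myfile.txt s3://my-test-bucket/
s3cmd ls s3://my-test-bucket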
Help output of the s3cmd command:
s3cmd --help
Usage: s3cmd [options] COMMAND [parameters]
S3cmd is a tool for managing objects in Amazon S3 storage. It allows for
making and removing "buckets" and uploading, downloading and removing
"objects" from these buckets.
Options:
-h, --help show this help message and exit
--configure Invoke interactive (re)configuration tool. Optionally
use as '--configure s3://some-bucket' to test access
to a specific bucket instead of attempting to list
them all.
-c FILE, --config=FILE
Config file name. Defaults to /home/amit/.s3cfg
--dump-config Dump current configuration after parsing config files
and command line options and exit.
-n, --dry-run Only show what should be uploaded or downloaded but
don't actually do it. May still perform S3 requests to
get bucket listings and other information though (only
for file transfer commands)
-e, --encrypt Encrypt files before uploading to S3.
--no-encrypt Don't encrypt files.
-f, --force Force overwrite and other dangerous operations.
--continue Continue getting a partially downloaded file (only for
[get] command).
--skip-existing Skip over files that exist at the destination (only
for [get] and [sync] commands).
-r, --recursive Recursive upload, download or removal.
--check-md5 Check MD5 sums when comparing files for [sync].
(default)
--no-check-md5 Do not check MD5 sums when comparing files for [sync].
Only size will be compared. May significantly speed up
transfer but may also miss some changed files.
-P, --acl-public Store objects with ACL allowing read for anyone.
--acl-private Store objects with default ACL allowing access for you
only.
--acl-grant=PERMISSION:EMAIL or USER_CANONICAL_ID
Grant stated permission to a given amazon user.
Permission is one of: read, write, read_acp,
write_acp, full_control, all
--acl-revoke=PERMISSION:USER_CANONICAL_ID
Revoke stated permission for a given amazon user.
Permission is one of: read, write, read_acp,
write_acp, full_control, all
--delete-removed Delete remote objects with no corresponding local file
[sync]
--no-delete-removed Don't delete remote objects.
-p, --preserve Preserve filesystem attributes (mode, ownership,
timestamps). Default for [sync] command.
--no-preserve Don't store FS attributes
--exclude=GLOB Filenames and paths matching GLOB will be excluded
from sync
--exclude-from=FILE Read --exclude GLOBs from FILE
--rexclude=REGEXP Filenames and paths matching REGEXP (regular
expression) will be excluded from sync
--rexclude-from=FILE Read --rexclude REGEXPs from FILE
--include=GLOB Filenames and paths matching GLOB will be included
even if previously excluded by one of
--(r)exclude(-from) patterns
--include-from=FILE Read --include GLOBs from FILE
--rinclude=REGEXP Same as --include but uses REGEXP (regular expression)
instead of GLOB
--rinclude-from=FILE Read --rinclude REGEXPs from FILE
--bucket-location=BUCKET_LOCATION
Datacentre to create bucket in. As of now the
datacenters are: US (default), EU, us-west-1, and ap-
southeast-1
--reduced-redundancy, --rr
Store object with 'Reduced redundancy'. Lower per-GB
price. [put, cp, mv]
--access-logging-target-prefix=LOG_TARGET_PREFIX
Target prefix for access logs (S3 URI) (for [cfmodify]
and [accesslog] commands)
--no-access-logging Disable access logging (for [cfmodify] and [accesslog]
commands)
--default-mime-type Default MIME-type for stored objects. Application
default is binary/octet-stream.
--guess-mime-type Guess MIME-type of files by their extension or mime
magic. Fall back to default MIME-Type as specified by
--default-mime-type option
--no-guess-mime-type Don't guess MIME-type and use the default type
instead.
-m MIME/TYPE, --mime-type=MIME/TYPE
Force MIME-type. Override both --default-mime-type and
--guess-mime-type.
--add-header=NAME:VALUE
Add a given HTTP header to the upload request. Can be
used multiple times. For instance set 'Expires' or
'Cache-Control' headers (or both) using this options
if you like.
--encoding=ENCODING Override autodetected terminal and filesystem encoding
(character set). Autodetected: UTF-8
--verbatim Use the S3 name as given on the command line. No pre-
processing, encoding, etc. Use with caution!
--disable-multipart Disable multipart upload on files bigger than
--multipart-chunk-size-mb
--multipart-chunk-size-mb=SIZE
Size of each chunk of a multipart upload. Files bigger
than SIZE are automatically uploaded as multithreaded-
multipart, smaller files are uploaded using the
traditional method. SIZE is in Mega-Bytes, default
chunk size is 15MB, minimum allowed chunk size is
5MB, maximum is 5GB.
--list-md5 Include MD5 sums in bucket listings (only for 'ls'
command).
-H, --human-readable-sizes
Print sizes in human readable form (eg 1kB instead of
1234).
--ws-index=WEBSITE_INDEX
Name of index-document (only for [ws-create] command)
--ws-error=WEBSITE_ERROR
Name of error-document (only for [ws-create] command)
--progress Display progress meter (default on TTY).
--no-progress Don't display progress meter (default on non-TTY).
--enable Enable given CloudFront distribution (only for
[cfmodify] command)
--disable Disable given CloudFront distribution (only for
[cfmodify] command)
--cf-invalidate Invalidate the uploaded files in CloudFront. Also see
[cfinval] command.
--cf-add-cname=CNAME Add given CNAME to a CloudFront distribution (only for
[cfcreate] and [cfmodify] commands)
--cf-remove-cname=CNAME
Remove given CNAME from a CloudFront distribution
(only for [cfmodify] command)
--cf-comment=COMMENT Set COMMENT for a given CloudFront distribution (only
for [cfcreate] and [cfmodify] commands)
--cf-default-root-object=DEFAULT_ROOT_OBJECT
Set the default root object to return when no object
is specified in the URL. Use a relative path, i.e.
default/index.html instead of /default/index.html or
s3://bucket/default/index.html (only for [cfcreate]
and [cfmodify] commands)
-v, --verbose Enable verbose output.
-d, --debug Enable debug output.
--version Show s3cmd version (1.1.0-beta3) and exit.
-F, --follow-symlinks
Follow symbolic links as if they are regular files
Commands:
Make bucket
s3cmd mb s3://BUCKET
Remove bucket
s3cmd rb s3://BUCKET
List objects or buckets
s3cmd ls [s3://BUCKET[/PREFIX]]
List all objects in all buckets
s3cmd la
Put file into bucket
s3cmd put FILE [FILE...] s3://BUCKET[/PREFIX]
Get file from bucket
s3cmd get s3://BUCKET/OBJECT LOCAL_FILE
Delete file from bucket
s3cmd del s3://BUCKET/OBJECT
Synchronize a directory tree to S3
s3cmd sync LOCAL_DIR s3://BUCKET[/PREFIX] or s3://BUCKET[/PREFIX] LOCAL_DIR
Disk usage by buckets
s3cmd du [s3://BUCKET[/PREFIX]]
Get various information about Buckets or Files
s3cmd info s3://BUCKET[/OBJECT]
Copy object
s3cmd cp s3://BUCKET1/OBJECT1 s3://BUCKET2[/OBJECT2]
Move object
s3cmd mv s3://BUCKET1/OBJECT1 s3://BUCKET2[/OBJECT2]
Modify Access control list for Bucket or Files
s3cmd setacl s3://BUCKET[/OBJECT]
Enable/disable bucket access logging
s3cmd accesslog s3://BUCKET
Sign arbitrary string using the secret key
s3cmd sign STRING-TO-SIGN
Fix invalid file names in a bucket
s3cmd fixbucket s3://BUCKET[/PREFIX]
Create Website from bucket
s3cmd ws-create s3://BUCKET
Delete Website
s3cmd ws-delete s3://BUCKET
Info about Website
s3cmd ws-info s3://BUCKET
List CloudFront distribution points
s3cmd cflist
Display CloudFront distribution point parameters
s3cmd cfinfo [cf://DIST_ID]
Create CloudFront distribution point
s3cmd cfcreate s3://BUCKET
Delete CloudFront distribution point
s3cmd cfdelete cf://DIST_ID
Change CloudFront distribution point parameters
s3cmd cfmodify cf://DIST_ID
Display CloudFront invalidation request(s) status
s3cmd cfinvalinfo cf://DIST_ID[/INVAL_ID]
For more information see the project homepage:
http://s3tools.org
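As a practical example, the sync command shown above can mirror a local directory tree to a bucket, using --delete-removed to drop remote objects whose local files are gone and --exclude to skip unwanted files. The paths and bucket name here are examples:
s3cmd sync --delete-removed --exclude '*.tmp' /var/www/ s3://my-test-bucket/www/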
If you have not created the config file yet, you will get an error like the following:
s3cmd ls
ERROR: /home/<user>/.s3cfg: No such file or directory
ERROR: Configuration file not available.
ERROR: Consider using --configure parameter to create one.
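Running s3cmd --configure as suggested creates the file. If you keep the configuration somewhere other than the default location, you can point s3cmd at it with the -c option described in the help output above (the path here is just an example):
s3cmd -c /path/to/custom.s3cfg ls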
External link:
http://s3tools.org/s3cmd
A few more external links:
http://docs.aws.amazon.com/AmazonS3/latest/API/Welcome.html
http://docs.aws.amazon.com/AmazonS3/latest/API/APIRest.html
http://docs.aws.amazon.com/AmazonS3/latest/API/APISoap.html
http://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectCOPY.html
http://docs.aws.amazon.com/AmazonS3/latest/dev/RESTAPI.html