$ aws s3 cp myfolder s3://mybucket/myfolder --recursive
# The sync command makes it easy to synchronize the contents of a local folder with a copy in an S3 bucket.
$ aws s3 sync myfolder s3://mybucket/myfolder --exclude "*.tmp"
You can also get help with the command:
$ aws s3 help
class boto.s3.connection.Location
    APNortheast = 'ap-northeast-1'
    APSoutheast = 'ap-southeast-1'
    APSoutheast2 = 'ap-southeast-2'
    CNNorth1 = 'cn-north-1'
    DEFAULT = ''
    EU = 'EU'
    SAEast = 'sa-east-1'
    USWest = 'us-west-1'
    USWest2 = 'us-west-2'
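As a quick illustration (the bucket name below is a placeholder and the region choice is arbitrary), one of these constants can be passed to create_bucket() to place a new bucket in a specific region:

from boto.s3.connection import S3Connection, Location

conn = S3Connection()  # credentials are picked up from the environment or boto config
# 'my-unique-bucket-name' is a placeholder; choose something globally unique
bucket = conn.create_bucket('my-unique-bucket-name', location=Location.APNortheast)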
Note that when you create a bucket you may get an S3CreateError exception reporting BucketAlreadyExists. This happens because all users' buckets in S3 live in the same flat namespace, so if the name you pick is a common one such as test, it has very likely been taken by another user already, which produces this error.
The boto reference documentation mentions this point; the original text reads:
"Well, the thing you have to know aboutbuckets is that they are kind of like domain names. It’s one flat namespace that everyone who uses S3 shares. So, someone has already createa bucket called “mybucket” in S3 and that means no one else can grab that bucket name. So, you have to come up with a name that hasn’t been taken yet.For example, something that uses a unique string as a prefix. YourAWS_ACCESS_KEY (NOT YOUR SECRET KEY!) could work but I’ll leave it toyour imagination to come up with something. I’ll just assume that youfound an acceptable name."
5) Reading all keys
for bucket in conn.get_all_buckets():
    for key in bucket.get_all_keys():
        # etag is populated by the listing; key.md5 is only set after an upload/download
        print bucket, key.key, key.etag
Note that get_all_keys() is a lower-level call: a single request returns at most 1000 keys (one page of results). In practice, therefore, you can simply iterate over the bucket object, which pages through the full listing for you:
for key in bucket:  # the bucket iterator fetches further pages automatically
    # local_file_path is a placeholder; in practice derive it from key.name
    key.get_contents_to_filename(local_file_path)
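For completeness, here is a rough sketch of what that paging looks like if you drive get_all_keys() yourself via its marker parameter; iterating the bucket does the equivalent for you, so this is illustrative only:

keys = []
marker = ''
while True:
    page = bucket.get_all_keys(marker=marker)  # at most 1000 keys per request
    keys.extend(page)
    if not page or not page.is_truncated:
        break
    marker = page[-1].name  # resume the listing after the last key returned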