Access an Amazon S3 bucket from Hadoop, specifying the SecretAccessKey on the command line


I am trying to access an Amazon S3 bucket using an HDFS command. Here is the command I run:

$ hadoop fs -ls s3n://<AccessKeyId>:<SecretAccessKey>@<bucket-name>/tpt_files/
-ls: Invalid hostname in URI s3n://<AccessKeyId>:<SecretAccessKey>@<bucket-name>/tpt_files
Usage: hadoop fs [generic options] -ls [-d] [-h] [-R] [<path> ...]

My SecretAccessKey includes a “/”. Could this be the cause of this behavior?

At the same time, I have the AWS CLI installed on the same server, and I can access the bucket with it without any issues (the AccessKeyId and SecretAccessKey are configured in .aws/credentials):

aws s3 ls s3://<bucket-name>/tpt_files/
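The CLI reads the keys from the standard credentials file; a minimal sketch of what that file looks like on my server (placeholder values, default profile assumed):

$ cat ~/.aws/credentials
[default]
aws_access_key_id = <AccessKeyId>
aws_secret_access_key = <SecretAccessKey>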

Is there a way to access an Amazon S3 bucket using a Hadoop command without specifying the keys in core-site.xml? I'd prefer to specify the keys on the command line.
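To illustrate what I mean, something along these lines is what I'm after, assuming the s3n connector picks up the fs.s3n.awsAccessKeyId and fs.s3n.awsSecretAccessKey properties when they are passed as generic -D options (a sketch only, with the keys kept out of the URI so the “/” is no longer an issue):

$ hadoop fs -D fs.s3n.awsAccessKeyId=<AccessKeyId> \
    -D fs.s3n.awsSecretAccessKey=<SecretAccessKey> \
    -ls s3n://<bucket-name>/tpt_files/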

Any suggestions would be helpful.

The best practice is to run Hadoop on an EC2 instance created with an instance profile role, with S3 access granted by the policy of the assigned role. Keys are no longer needed when using an instance profile.
http://docs.aws.amazon.com/java-sdk/latest/developer-guide/credentials.html
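For example, attaching the role when the instance is launched looks roughly like this (the AMI ID, instance type, and profile name are placeholders; a sketch, not a complete launch command):

$ aws ec2 run-instances --image-id <ami-id> --instance-type <instance-type> \
    --iam-instance-profile Name=<instance-profile-name>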

You can launch AMIs with an instance profile role, and the CLI and SDKs will use it automatically. If your code uses the DefaultAWSCredentialsProviderChain class, credentials can also be obtained through environment variables, Java system properties, or the credential profiles file (in addition to the EC2 instance profile role).
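As a concrete example of one link in that chain, exporting the standard AWS environment variables before running a command also works (a sketch with placeholder values):

$ export AWS_ACCESS_KEY_ID=<AccessKeyId>
$ export AWS_SECRET_ACCESS_KEY=<SecretAccessKey>
$ aws s3 ls s3://<bucket-name>/tpt_files/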

