S3 Client Linux



A few other clients are worth a mention up front. ExpanDrive is a fast, powerful S3 client for Windows, macOS, and Linux; if you need to manage content in the cloud, you'll want a tool like it to connect to the S3 API, since the basic storage browser Amazon provides inside the AWS console isn't a complete solution designed for desktop access or heavy usage. The latest version of S3 Fox can help you manage S3 and CloudFront distributions right from Firefox. Bucket Explorer is the most 'complete' S3 client available for Mac, Windows, and Linux ($50); with it, you can perform almost every S3 operation from the desktop, including bucket logging, file comparison, and batch operations.

Amazon's Simple Storage Service (S3) has a lot to like. It's cheap, can be used for storing a little bit of data or as much as you want, and it can be used for distributing files publicly or just storing your private data. Let's look at how you can take advantage of Amazon S3 on Linux.

Amazon S3 isn't what you'd want to use for storing just a little bit of personal data. For that, you might want to use Dropbox, SpiderOak, ownCloud, or SparkleShare. Which one you choose depends on how much data you have, your tolerance for non-free software, and which features you prefer. For my work files, I use Dropbox – in large part because of its LAN sync feature.

But S3 is really good if you need to make backups of a large amount of data, or smaller amounts but you need an offsite backup. It's also good if you want to use S3 to host files for public distribution and don't have a server or need to offload data sharing because of capacity issues. Maybe you just want to use it to host a blog, cheaply. S3 also has some nifty features for content distribution and data storage from multiple regions, which we'll get into another time.

Getting the Tools

You can use S3 in a number of ways on Linux, depending on how you'd like to manage your backups. If you look around, you'll find a bunch of tools that support S3, including:

  • S3 Tools and Duplicity are command-line utilities that support S3. S3 Tools, as the name implies, focuses on Amazon S3; Duplicity has S3 support, but also supports several other methods of transferring files.
  • Deja Dup is a fairly simple GNOME app for backups, which has S3 support thanks to Duplicity.
  • Dragon Disk is a freeware (but not free software) utility that provides more fine-grained control of backups to S3. It also supports Google Cloud Storage and other cloud storage services.

For the purposes of this article, I'm going to focus on S3 Tools. If you're a GNOME user, it should take very little effort to set up Deja Dup for S3. We'll tackle Duplicity and Dragon Disk another time.

S3 Tools

You might find S3 Tools in your distribution's repositories. If not, the S3 Tools project provides package repositories for several versions of Red Hat, CentOS, Fedora, openSUSE, SUSE Linux Enterprise, Debian, and Ubuntu. You'll also find instructions on adding the tools on the package repositories page.

Once you have S3 Tools installed, you need to configure it with your Amazon S3 credentials. If you haven't signed up for them yet, hit the Sign Up button at the top of the S3 overview page. You'll also want to look at the pricing, which starts at $0.125 per GB per month.

The pricing calculator can help you get an idea how much it would cost to store your data in S3. For example, if you're storing 100GB in S3, it would run about $12.50 per month – before any costs for data transfer out of S3. Transfer in to S3 is free. Amazon also charges for get/put requests and so forth – so if you're using S3 to serve up content, then the pricing is going to be higher.
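The arithmetic behind that 100GB example is easy to script yourself. Here's a minimal sketch, using the $0.125 per GB-month rate quoted above and ignoring request and transfer-out charges (actual AWS pricing varies by region and changes over time):

```python
# Rough S3 storage cost estimate at the rate quoted in this article.
# Request and transfer-out charges are deliberately ignored.

RATE_PER_GB_MONTH = 0.125  # USD per GB-month, as quoted above

def monthly_storage_cost(gigabytes: float) -> float:
    """Storage-only monthly cost in USD, rounded to cents."""
    return round(gigabytes * RATE_PER_GB_MONTH, 2)

print(monthly_storage_cost(100))  # the 100GB example from the text
```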

Back to the tools. You need to configure s3cmd (the command line utility from the S3 Tools project) like so:

s3cmd --configure

It will walk you through adding your Amazon credentials and GPG information if you want to encrypt files while stored on S3. Amazon's storage is supposed to be private, but you should always assume that data stored on remote servers is potentially visible to others. Since I'm storing information that has no real need for privacy (WordPress backups, MP3s, photos that I'd happily publish online anyway) I don't worry overmuch about encrypting for storage on S3.
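The wizard writes your answers to ~/.s3cfg. A trimmed sketch of what the relevant entries look like (placeholder values; the real file contains many more options):

```ini
# ~/.s3cfg (excerpt, placeholder values)
[default]
access_key = YOUR_ACCESS_KEY_ID
secret_key = YOUR_SECRET_ACCESS_KEY
use_https = True
# Only relevant if you opted into GPG encryption during --configure:
gpg_passphrase =
```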

There's another advantage of foregoing GPG encryption, which is that s3cmd can use an rsync-like algorithm for syncing files instead of just re-copying everything.
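The rsync-like shortcut works because a sync tool can compare a local file's size and MD5 checksum against what the remote side reports; once files are encrypted before upload, the checksums no longer match and everything gets re-copied. Here's an illustrative sketch of that comparison (not s3cmd's actual code; the remote_etag value stands in for what the S3 API would return):

```python
import hashlib

def local_md5(data: bytes) -> str:
    """MD5 of the local file contents, hex-encoded like a simple S3 ETag."""
    return hashlib.md5(data).hexdigest()

def needs_upload(data: bytes, remote_size: int, remote_etag: str) -> bool:
    """Skip the upload when size and checksum both match the remote copy.

    Note: real S3 ETags for multipart uploads are not plain MD5 sums;
    this sketch covers only the simple single-part case.
    """
    return len(data) != remote_size or local_md5(data) != remote_etag

data = b"hello backup"
etag = local_md5(data)
print(needs_upload(data, len(data), etag))              # unchanged: False
print(needs_upload(b"hello backup!", len(data), etag))  # changed: True
```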

Now let's copy some files and use s3cmd sync. You'll find that the s3cmd syntax mimics standard *nix commands. Want to see what's being stored in your S3 account? Use s3cmd ls to show all buckets. (Amazon calls 'em buckets instead of directories.)


Want to copy between buckets? Use s3cmd cp s3://bucket1/filename s3://bucket2/. Note that buckets are specified with the syntax s3://bucketname.

To put files in a bucket, use s3cmd put filename s3://bucket. To get files, use s3cmd get s3://bucket/filename localname. To upload or download directories, you need to add the --recursive option.

But if you want to sync files and save yourself some trouble down the road, there's the sync command. It's dead simple to use:


s3cmd sync directory s3://bucket/

The first time, it will copy up all files. The next time, it will only copy up files that don't already exist on Amazon S3. However, if you want to get rid of files on S3 that you have removed locally, use the --delete-removed option. Test this with the --dry-run option first; otherwise you can accidentally delete files.
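To see why --delete-removed deserves a dry run, it helps to picture sync as simple set arithmetic over filenames. This illustrative sketch (not s3cmd's actual implementation; a real tool also compares sizes and checksums for files present on both sides) shows what a sync would upload and, with deletion enabled, what it would remove remotely:

```python
def plan_sync(local, remote, delete_removed=False):
    """Return (files to upload, remote files to delete).

    Illustrative only: items are bare filenames, and files present on
    both sides are assumed unchanged.
    """
    local, remote = set(local), set(remote)
    uploads = local - remote
    deletions = remote - local if delete_removed else set()
    return uploads, deletions

local_files = {"a.txt", "b.txt"}
remote_files = {"b.txt", "old.txt"}

print(plan_sync(local_files, remote_files))
print(plan_sync(local_files, remote_files, delete_removed=True))
```

Without the flag, "old.txt" stays on the remote side forever; with it, one stale local listing is enough to delete it, which is exactly the mistake --dry-run lets you catch.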

It's pretty simple to use s3cmd, and you should look at its man page as well. It even has some support for the CloudFront CDN service if you need that. Happy syncing!

The AWS Command Line Interface (CLI) is a unified tool to manage your AWS services. With just one tool to download and configure, you can control multiple AWS services from the command line and automate them through scripts.

The AWS CLI v2 offers several new features including improved installers, new configuration options such as AWS Single Sign-On (SSO), and various interactive features.

Windows
Download and run the 64-bit Windows installer.

MacOS
Download and run the MacOS PKG installer.

Linux
Download, unzip, and then run the Linux installer.

Amazon Linux
The AWS CLI comes pre-installed on the Amazon Linux AMI.

Release Notes
Check out the Release Notes for more information on the latest version.

aws-shell is a command-line shell program that provides convenience and productivity features to help both new and advanced users of the AWS Command Line Interface. Key features include the following.

  • Fuzzy auto-completion for
    • Commands (e.g. ec2, describe-instances, sqs, create-queue)
    • Options (e.g. --instance-ids, --queue-url)
    • Resource identifiers (e.g. Amazon EC2 instance IDs, Amazon SQS queue URLs, Amazon SNS topic names)
  • Dynamic in-line documentation
    • Documentation for commands and options is displayed as you type
  • Execution of OS shell commands
    • Use common OS commands such as cat, ls, and cp and pipe inputs and outputs without leaving the shell
  • Export executed commands to a text editor

To find out more, check out the related blog post on the AWS Command Line Interface blog.

The AWS Command Line Interface User Guide walks you through installing and configuring the tool. After that, you can begin making calls to your AWS services from the command line.

You can get help on the command line to see the supported services:

aws help

New file commands make it easy to manage your Amazon S3 objects. Using familiar syntax, you can view the contents of your S3 buckets in a directory-based listing.


You can perform recursive uploads and downloads of multiple files in a single folder-level command. The AWS CLI will run these transfers in parallel for increased performance.
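That parallelism can be pictured with a thread pool: each object transfer is independent, so transfers can run concurrently. A toy sketch of the idea (the upload function is a stand-in, not a real AWS API call):

```python
from concurrent.futures import ThreadPoolExecutor

def upload(filename: str) -> str:
    """Stand-in for a single-object transfer; a real client would call S3."""
    return f"uploaded {filename}"

files = ["photos/a.jpg", "photos/b.jpg", "photos/c.jpg"]

# Run the independent transfers concurrently, as the AWS CLI does
# internally; map() preserves the input order of results.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(upload, files))

print(results)
```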

A sync command makes it easy to synchronize the contents of a local folder with a copy in an S3 bucket.

See the AWS CLI command reference for the full list of supported services.

Connect with other developers in the AWS CLI Community Forum »


Find examples and more in the User Guide »


Learn the details of the latest CLI tools in the Release Notes »


Dig through the source code in the GitHub Repository »




