
First, install rclone with the following command:

sudo apt install rclone -y

Then start the interactive configuration wizard:

rclone config

The wizard will ask a series of questions in the terminal. For clarity, the value you need to type is shown after each prompt below (for example, after "n/s/q>").
2022/03/15 21:18:34 NOTICE: Config file "/home/user/.config/rclone/rclone.conf" not found - using defaults
No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> remote
Type of storage to configure.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
1 / 1Fichier
\ "fichier"
2 / Alias for an existing remote
\ "alias"
3 / Amazon Drive
\ "amazon cloud drive"
4 / Amazon S3 Compliant Storage Provider (AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, Tencent COS, etc)
\ "s3"
5 / Backblaze B2
\ "b2"
6 / Box
\ "box"
7 / Cache a remote
\ "cache"
8 / Citrix Sharefile
\ "sharefile"
9 / Dropbox
\ "dropbox"
10 / Encrypt/Decrypt a remote
\ "crypt"
11 / FTP Connection
\ "ftp"
12 / Google Cloud Storage (this is not Google Drive)
\ "google cloud storage"
13 / Google Drive
\ "drive"
14 / Google Photos
\ "google photos"
15 / Hubic
\ "hubic"
16 / In memory object storage system.
\ "memory"
17 / Jottacloud
\ "jottacloud"
18 / Koofr
\ "koofr"
19 / Local Disk
\ "local"
20 / Mail.ru Cloud
\ "mailru"
21 / Microsoft Azure Blob Storage
\ "azureblob"
22 / Microsoft OneDrive
\ "onedrive"
23 / OpenDrive
\ "opendrive"
24 / OpenStack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
\ "swift"
25 / Pcloud
\ "pcloud"
26 / Put.io
\ "putio"
27 / SSH/SFTP Connection
\ "sftp"
28 / Sugarsync
\ "sugarsync"
29 / Transparently chunk/split large files
\ "chunker"
30 / Union merges the contents of several upstream fs
\ "union"
31 / Webdav
\ "webdav"
32 / Yandex Disk
\ "yandex"
33 / http Connection
\ "http"
34 / premiumize.me
\ "premiumizeme"
35 / seafile
\ "seafile"
Storage> s3
** See help for s3 backend at: https://rclone.org/s3/ **
Choose your S3 provider.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
1 / Amazon Web Services (AWS) S3
\ "AWS"
2 / Alibaba Cloud Object Storage System (OSS) formerly Aliyun
\ "Alibaba"
3 / Ceph Object Storage
\ "Ceph"
4 / Digital Ocean Spaces
\ "DigitalOcean"
5 / Dreamhost DreamObjects
\ "Dreamhost"
6 / IBM COS S3
\ "IBMCOS"
7 / Minio Object Storage
\ "Minio"
8 / Netease Object Storage (NOS)
\ "Netease"
9 / Scaleway Object Storage
\ "Scaleway"
10 / StackPath Object Storage
\ "StackPath"
11 / Tencent Cloud Object Storage (COS)
\ "TencentCOS"
12 / Wasabi Object Storage
\ "Wasabi"
13 / Any other S3 compatible provider
\ "Other"
provider> Other
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key are blank.
Enter a boolean value (true or false). Press Enter for the default ("false").
Choose a number from below, or type in your own value
1 / Enter AWS credentials in the next step
\ "false"
2 / Get AWS credentials from the environment (env vars or IAM)
\ "true"
env_auth> false
AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.
Enter a string value. Press Enter for the default ("").
access_key_id> YOUR_ACCESS_KEY_FROM_CONTABO
AWS Secret Access Key (password)
Leave blank for anonymous access or runtime credentials.
Enter a string value. Press Enter for the default ("").
secret_access_key> YOUR_SECRET_KEY_FROM_CONTABO
Region to connect to.
Leave blank if you are using an S3 clone and you don't have a region.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
1 / Use this if unsure. Will use v4 signatures and an empty region.
\ ""
2 / Use this only if v4 signatures don't work, eg pre Jewel/v10 CEPH.
\ "other-v2-signature"
region>
Endpoint for S3 API.
Required when using an S3 clone.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
endpoint> eu2.contabostorage.com
Location constraint - must be set to match the Region.
Leave blank if not sure. Used when creating buckets only.
Enter a string value. Press Enter for the default ("").
location_constraint>
Canned ACL used when creating buckets and storing or copying objects.
This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Note that this ACL is applied when server side copying objects as S3
doesn't copy the ACL from the source but rather writes a fresh one.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
1 / Owner gets FULL_CONTROL. No one else has access rights (default).
\ "private"
2 / Owner gets FULL_CONTROL. The AllUsers group gets READ access.
\ "public-read"
/ Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access.
3 | Granting this on a bucket is generally not recommended.
\ "public-read-write"
4 / Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access.
\ "authenticated-read"
/ Object owner gets FULL_CONTROL. Bucket owner gets READ access.
5 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
\ "bucket-owner-read"
/ Both the object owner and the bucket owner get FULL_CONTROL over the object.
6 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
\ "bucket-owner-full-control"
acl>
Edit advanced config? (y/n)
y) Yes
n) No (default)
y/n> n
Remote config
[remote]
provider = Other
env_auth = false
access_key_id = XXXXXXXXXXXX
secret_access_key = XXXXXXXXXXXXXX
endpoint = eu2.contabostorage.com
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y
Current remotes:
Name Type
==== ====
remote s3
e) Edit existing remote
n) New remote
d) Delete remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
e/n/d/r/c/s/q> q
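After quitting the wizard, rclone saves everything you entered to its config file, typically ~/.config/rclone/rclone.conf. Based on the answers above, it should look roughly like this (with your real keys in place of the placeholders):

```
[remote]
type = s3
provider = Other
env_auth = false
access_key_id = XXXXXXXXXXXX
secret_access_key = XXXXXXXXXXXXXX
endpoint = eu2.contabostorage.com
```

Since this file holds your credentials in plain text, keep its permissions restricted (rclone config itself offers the "Set configuration password" option if you want it encrypted).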
Now that we have finished configuring rclone, use these rclone commands to manage your data.

See all buckets:

rclone lsd remote:

Make a new bucket:

rclone mkdir remote:bucket

List the contents of a bucket:

rclone ls remote:bucket

Sync /home/local/directory to the remote bucket, deleting any excess files in the bucket (the -i flag asks for confirmation before each change):

rclone sync -i /home/local/directory remote:bucket
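If you want the sync to run on a schedule, you can call rclone from cron. A sketch of a crontab entry is below; the directory, bucket name, and log path are placeholders for your own values. Note that -i is dropped here: it waits for keyboard input, so it is unsuitable for unattended runs.

```
# Run every night at 03:00; log output for later inspection.
0 3 * * * rclone sync /home/local/directory remote:bucket --log-file=/home/user/rclone-backup.log
```

It is worth running the same command manually with --dry-run first, so you can see what would be copied or deleted before trusting it to run unattended.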