Rclone mount - mounting SFTP, WebDAV, and S3 network storage as a local directory in Linux
Greetings!

In this article, we will look at mounting remote SFTP, WebDAV, and S3 network storage as a regular directory in the Linux file system.

We will do this using the cool rclone utility👩‍🚀👩‍🚀.

Briefly about SFTP, WebDAV, S3, and rclone

SFTP (SSH File Transfer Protocol) is a network protocol designed for secure file transfer over a network. SFTP operates over the SSH (Secure Shell) protocol.

WebDAV (Web-based Distributed Authoring and Versioning) is a set of extensions to the HTTP protocol that allows managing files stored on remote web servers. Essentially, it turns a web server into something similar to a network drive.

S3 (Amazon Simple Storage Service) is a cloud object storage provided by AWS, as well as many other providers (S3-compatible storage, for example, MinIO).

rclone is a powerful open-source command-line utility for synchronizing files and directories between many different cloud and network storage services, including SFTP, WebDAV, S3, Google Drive, Dropbox, and many others. rclone can also mount remote storage as a local file system (via the FUSE subsystem), which is what this article is about 😉.

Why not SFTP/SSHFS and DAVFS?

Yes, SFTP can be used directly or through its FUSE-based counterpart, SSHFS. The situation is similar for WebDAV, where DAVFS is available. What sets mounting with rclone apart is unification and additional capabilities.

For example, both sshfs and rclone mount use SFTP as a backend. The difference is that rclone does not just proxy requests: it also adds a VFS layer with local caching, chunked reads, and automatic retries, and its parameters can be flexibly tuned for different operating conditions.

How to read this article?

The sections of the article for each mounting type are self-contained. You can choose any one, and by performing the rclone installation + all steps of the desired section, you will get a working configuration.

Now, let’s move on to practice👨‍💻.

rclone installation

The rclone utility is available in the standard repositories of most distributions. On Debian/Ubuntu, open a terminal and execute:

BASH
sudo apt update && sudo apt install fuse rclone

rclone was originally designed for direct file exchange between different storages (the move, copy, and sync commands). This article, however, focuses only on the mount command, which lets you mount different types of storage as a regular directory.

Mounting SFTP as a directory

First, add the SFTP remote storage access credentials to the rclone config (a single config can hold multiple remotes of different types):

BASH
mkdir -vp ~/.config/rclone

vim ~/.config/rclone/rclone.conf

Fill it with:

INI
[sftp-storage-1]
type = sftp
host = sftp.r4ven.me
user = ivan
port = 22
#pass = ObfuscatedSecretPassword
key_file = /home/ivan/.ssh/id_ed25519
md5sum_command = md5sum
sha1sum_command = sha1sum
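As a side note, the same config can be written non-interactively (a sketch using the example values above): a heredoc appends the remote definition, and since the file stores credentials, it is worth restricting its permissions:

```shell
# Append the remote definition without opening an editor
# (values are the article's examples; adjust them to your server)
mkdir -p ~/.config/rclone

cat >> ~/.config/rclone/rclone.conf <<'EOF'
[sftp-storage-1]
type = sftp
host = sftp.r4ven.me
user = ivan
port = 22
key_file = /home/ivan/.ssh/id_ed25519
EOF

# The config holds credentials, so keep it readable only by the owner
chmod 600 ~/.config/rclone/rclone.conf
```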

Parameter description:

- type: the backend type (sftp);
- host: the SFTP server address;
- user: the username on the server;
- port: the SSH port (22 by default);
- pass: a password obscured with the rclone obscure command (commented out here, since key authentication is used);
- key_file: the path to the private SSH key;
- md5sum_command / sha1sum_command: commands executed on the server to verify file checksums.

Manual SFTP mounting

First, let’s create a local directory where we will mount the remote one:

BASH
mkdir -vp ~/Storage/sftp

Now, to manually mount the remote directory to the local one, execute the following command:

BASH
rclone mount --daemon sftp-storage-1:/home/ivan/data ~/Storage/sftp

Where:

- --daemon: detach and run the mount process in the background;
- sftp-storage-1: the name of the remote from rclone.conf;
- /home/ivan/data: the path on the remote server to mount;
- ~/Storage/sftp: the local mount point.

Check mounting:

BASH
findmnt --real

df -h ~/Storage/sftp

ls -l ~/Storage/sftp

To unmount the directory, execute the fusermount command with the -u (unmount) flag:

BASH
fusermount -u ~/Storage/sftp
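This can be wrapped in a small convenience function (unmount_remote is a hypothetical helper, not part of rclone or fusermount): it checks that the directory is actually mounted, tries a normal unmount, and falls back to a lazy unmount (-z) if the mount is busy:

```shell
# Hypothetical helper: safely unmount an rclone FUSE mount
unmount_remote() {
    dir="$1"
    if mountpoint -q "$dir"; then
        # Normal unmount first; lazy (-z) as a fallback for busy mounts
        fusermount -u "$dir" || fusermount -u -z "$dir"
    else
        echo "not mounted: $dir"
    fi
}

unmount_remote ~/Storage/sftp
```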

At first glance, nothing special. But the real value of such mounting lies in the additional parameters, of which there are many. For example:

BASH
rclone mount --daemon sftp-storage-1:/home/ivan/data ~/Storage/sftp \
    --dir-cache-time 48h \
    --vfs-cache-mode full \
    --vfs-cache-max-size 10G \
    --vfs-cache-max-age 24h \
    --vfs-write-back 5s \
    --vfs-read-chunk-size 128M \
    --vfs-read-chunk-size-limit 1G \
    --buffer-size 128M \
    --transfers 8 \
    --checkers 8 \
    --umask 002 \
    --log-level INFO \
    --retries 10 \
    --low-level-retries 10 \
    --sftp-set-modtime=false \
    --sftp-disable-hashcheck

Parameter description:

- --dir-cache-time: how long directory entries are cached;
- --vfs-cache-mode full: cache file contents locally for both reads and writes;
- --vfs-cache-max-size / --vfs-cache-max-age: limits on the local cache size and age;
- --vfs-write-back: delay before uploading a file after it is closed;
- --vfs-read-chunk-size / --vfs-read-chunk-size-limit: initial and maximum chunk sizes for reads;
- --buffer-size: in-memory read-ahead buffer per open file;
- --transfers / --checkers: number of parallel transfers and checkers;
- --umask: permission mask for files in the mount;
- --log-level: logging verbosity;
- --retries / --low-level-retries: retry counts for failed operations;
- --sftp-set-modtime=false: do not set modification times on the server;
- --sftp-disable-hashcheck: disable remote checksum calculation.

To check write speed, you can use the dd utility:

BASH
dd if=/dev/urandom of=~/Storage/sftp/bigfile.bin bs=100M count=10

Below in the screenshot is an example of writing with standard mounting and with additional parameters:
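To make before/after comparisons repeatable, the dd test can be wrapped in a small throwaway function (bench_write is a hypothetical helper; it writes and then deletes a 100 MB file in the given directory):

```shell
# Hypothetical write-speed helper: prints dd's summary line, then cleans up
bench_write() {
    dir="$1"
    # conv=fsync makes dd flush data before reporting the speed
    dd if=/dev/urandom of="$dir/rclone-bench.bin" bs=1M count=100 conv=fsync 2>&1 | tail -n 1
    rm -f "$dir/rclone-bench.bin"
}

bench_write /tmp               # local disk baseline
#bench_write ~/Storage/sftp    # the rclone mount
```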

Automatic SFTP mounting

To automatically start the mounting process, we will use the Systemd initialization system.

Create a unit file with the command:

BASH
mkdir -vp ~/.config/systemd/user

vim ~/.config/systemd/user/sftp-storage-1.service

Fill it with:

INI
[Unit]
Description=Rclone SFTP mount
After=network-online.target
Wants=network-online.target

[Service]
Type=simple

ExecStartPre=/usr/bin/sleep 5

ExecStart=/usr/bin/rclone mount \
    sftp-storage-1:/home/ivan/data /home/ivan/Storage/sftp \
    --dir-cache-time 48h \
    --vfs-cache-mode full \
    --vfs-cache-max-size 10G \
    --vfs-cache-max-age 24h \
    --vfs-write-back 5s \
    --vfs-read-chunk-size 128M \
    --vfs-read-chunk-size-limit 1G \
    --buffer-size 128M \
    --transfers 8 \
    --checkers 8 \
    --umask 002 \
    --log-level INFO \
    --retries 10 \
    --low-level-retries 10 \
    --sftp-set-modtime=false \
    --sftp-disable-hashcheck

ExecStop=/usr/bin/fusermount -u -z /home/ivan/Storage/sftp
ExecStopPost=/usr/bin/sleep 2
TimeoutStopSec=20

Restart=always
RestartSec=60

KillMode=mixed

[Install]
WantedBy=default.target

Unit parameter description:

- After/Wants=network-online.target: start only after the network is up;
- ExecStartPre: a short delay to let the session settle;
- ExecStart: the mount command with the same parameters as above;
- ExecStop: lazy unmount (-z) of the directory when the service stops;
- Restart/RestartSec: restart the mount automatically on failure;
- KillMode=mixed: send SIGTERM to the main process, then SIGKILL to the remaining group;
- WantedBy=default.target: start the unit with the user session.

Reload Systemd configuration:

BASH
systemctl --user daemon-reload

Now activate the unit start/autostart:

BASH
systemctl --user enable --now sftp-storage-1.service

systemctl --user status sftp-storage-1.service

If everything is OK, the status will show active (running).

Check the mount point:

BASH
findmnt --real

df -h ~/Storage/sftp

ls -l ~/Storage/sftp

To view the service journal:

BASH
journalctl --user -u sftp-storage-1.service

Now the service will start when the user logs into the system. To have it start at boot without an interactive login, enable lingering for the user: sudo loginctl enable-linger ivan.

Mounting WebDAV as a directory

Here, too, add the WebDAV remote storage access credentials to the rclone config (a single config can hold multiple remotes of different types):

BASH
mkdir -vp ~/.config/rclone

vim ~/.config/rclone/rclone.conf

Fill it with:

INI
[webdav-storage-1]
type = webdav
url = https://webdav.r4ven.me/
vendor = auto
user = ivan@r4ven.me
pass = ObfuscatedSecretPassword
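The vendor field is worth setting explicitly when you know the server type. For instance, a hypothetical Nextcloud remote (the hostname and username below are placeholders; /remote.php/dav/files/&lt;user&gt;/ is Nextcloud's standard WebDAV endpoint) might look like:

```ini
[nextcloud-storage-1]
type = webdav
url = https://cloud.example.com/remote.php/dav/files/ivan/
vendor = nextcloud
user = ivan
pass = ValueProducedByRcloneObscure
```

With vendor = nextcloud, rclone enables server-specific behavior such as checksum support.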

Parameter description:

- type: the backend type (webdav);
- url: the WebDAV server URL;
- vendor: a server type hint (auto, or e.g. nextcloud, owncloud, sharepoint);
- user: the username;
- pass: a password obscured with rclone obscure (the value stored here is not the plain-text password).

Manual WebDAV mounting

First, let’s create a local directory where we will mount the remote one:

BASH
mkdir -vp ~/Storage/webdav

Now, to manually mount the remote directory to the local one, execute the following command:

BASH
rclone mount --daemon webdav-storage-1:/data ~/Storage/webdav

Where:

- --daemon: run the mount process in the background;
- webdav-storage-1: the name of the remote from rclone.conf;
- /data: the path relative to the WebDAV URL;
- ~/Storage/webdav: the local mount point.

Check mounting:

BASH
findmnt --real

df -h ~/Storage/webdav

ls -l ~/Storage/webdav

To unmount the directory, execute the fusermount command with the -u (unmount) flag:

BASH
fusermount -u ~/Storage/webdav

Here, too, many rclone parameters can be used. Example:

BASH
rclone mount --daemon webdav-storage-1:/data ~/Storage/webdav \
    --dir-cache-time 48h \
    --vfs-cache-mode full \
    --vfs-cache-max-size 10G \
    --vfs-cache-max-age 24h \
    --vfs-write-back 5s \
    --vfs-read-chunk-size 128M \
    --vfs-read-chunk-size-limit 1G \
    --buffer-size 128M \
    --transfers 8 \
    --checkers 8 \
    --umask 002 \
    --log-level INFO \
    --retries 10 \
    --low-level-retries 10

These are the same VFS caching, parallelism, and retry parameters used in the SFTP example above.

To check write speed, you can use the dd utility:

BASH
dd if=/dev/urandom of=~/Storage/webdav/bigfile.bin bs=10M count=10

Below in the screenshot is an example of writing with standard mounting and with additional parameters:

Automatic WebDAV mounting

For automatic startup of the mounting process, we also use Systemd.

Create a unit file with the command:

BASH
mkdir -vp ~/.config/systemd/user

vim ~/.config/systemd/user/webdav-storage-1.service

Fill it with:

INI
[Unit]
Description=Rclone WebDAV mount
After=network-online.target
Wants=network-online.target

[Service]
Type=simple

ExecStartPre=/usr/bin/sleep 5

ExecStart=/usr/bin/rclone mount \
    webdav-storage-1:/data /home/ivan/Storage/webdav \
    --dir-cache-time 48h \
    --vfs-cache-mode full \
    --vfs-cache-max-size 10G \
    --vfs-cache-max-age 24h \
    --vfs-write-back 5s \
    --vfs-read-chunk-size 128M \
    --vfs-read-chunk-size-limit 1G \
    --buffer-size 128M \
    --transfers 8 \
    --checkers 8 \
    --umask 002 \
    --log-level INFO \
    --retries 10 \
    --low-level-retries 10

ExecStop=/usr/bin/fusermount -u -z /home/ivan/Storage/webdav
ExecStopPost=/usr/bin/sleep 2
TimeoutStopSec=20

Restart=always
RestartSec=60

KillMode=mixed

[Install]
WantedBy=default.target

The unit is identical in structure to the SFTP one: a delayed start after the network is up, the mount command, a lazy unmount on stop, and automatic restarts.

Reload Systemd configuration:

BASH
systemctl --user daemon-reload

Now activate the unit start/autostart:

BASH
systemctl --user enable --now webdav-storage-1.service

systemctl --user status webdav-storage-1.service

If everything is OK, the status will show active (running).

Check the mount point:

BASH
findmnt --real

df -h ~/Storage/webdav

ls -l ~/Storage/webdav

To view the service journal:

BASH
journalctl --user -u webdav-storage-1.service

Now the service will start when the user logs into the system.

Mounting S3 as a directory

Again, we start with the rclone config. Open the file and add the credentials for the S3 instance (you can specify multiple instances of different types in the config):

BASH
mkdir -vp ~/.config/rclone

vim ~/.config/rclone/rclone.conf

Fill it with:

INI
[s3-storage-1]
type = s3
provider = Minio
endpoint = https://s3.r4ven.me:443
access_key_id = ivan
secret_access_key = SecretAccessKey
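For comparison, a hypothetical remote for AWS itself (placeholder credentials) would specify a provider and region instead of a fixed endpoint:

```ini
[s3-aws-1]
type = s3
provider = AWS
region = eu-central-1
access_key_id = AKIAEXAMPLEKEY
secret_access_key = SecretAccessKey
```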

Parameter description:

- type: the backend type (s3);
- provider: the S3 provider (Minio here; AWS, Ceph, and many others are supported);
- endpoint: the URL of the S3 API endpoint;
- access_key_id / secret_access_key: the access credentials;
- the bucket itself is given as the first component of the remote path when mounting (data in this example).

Manual S3 mounting

Create a local directory where we will mount the remote one:

BASH
mkdir -vp ~/Storage/s3

The mount command is identical to the previous ones:

BASH
rclone mount --daemon s3-storage-1:/data ~/Storage/s3

Where:

- --daemon: run the mount process in the background;
- s3-storage-1: the name of the remote from rclone.conf;
- /data: the bucket (the first path component) and, optionally, a prefix inside it;
- ~/Storage/s3: the local mount point.

Check mounting:

BASH
findmnt --real

df -h ~/Storage/s3

ls -l ~/Storage/s3

To unmount the directory, execute the fusermount command with the -u (unmount) flag:

BASH
fusermount -u ~/Storage/s3

Many rclone parameters are also available here. Example:

BASH
rclone mount --daemon s3-storage-1:/data ~/Storage/s3 \
    --dir-cache-time 48h \
    --vfs-cache-mode full \
    --vfs-cache-max-size 10G \
    --vfs-cache-max-age 24h \
    --vfs-write-back 5s \
    --vfs-read-chunk-size 128M \
    --vfs-read-chunk-size-limit 1G \
    --buffer-size 128M \
    --transfers 8 \
    --checkers 8 \
    --umask 002 \
    --log-level INFO \
    --retries 10 \
    --low-level-retries 10 \
    --poll-interval 10m \
    --s3-chunk-size 64M \
    --s3-upload-cutoff 64M

The common parameters are the same as in the SFTP example. The S3-specific ones:

- --poll-interval: how often to check the remote for changes (for backends that support change notifications);
- --s3-chunk-size: chunk size for multipart uploads;
- --s3-upload-cutoff: file size above which a multipart upload is used.

To check write speed, you can use the dd utility:

BASH
dd if=/dev/urandom of=~/Storage/s3/bigfile.bin bs=10M count=10

Below in the screenshot is an example of writing with standard mounting and with additional parameters:

Automatic S3 mounting

To automatically start the mounting process, we will again use Systemd.

Create a unit file:

BASH
mkdir -vp ~/.config/systemd/user

vim ~/.config/systemd/user/s3-storage-1.service

Fill it with:

INI
[Unit]
Description=Rclone S3 mount
After=network-online.target
Wants=network-online.target

[Service]
Type=simple

ExecStartPre=/usr/bin/sleep 5

ExecStart=/usr/bin/rclone mount \
    s3-storage-1:/data /home/ivan/Storage/s3 \
    --dir-cache-time 48h \
    --vfs-cache-mode full \
    --vfs-cache-max-size 10G \
    --vfs-cache-max-age 24h \
    --vfs-write-back 5s \
    --vfs-read-chunk-size 128M \
    --vfs-read-chunk-size-limit 1G \
    --buffer-size 128M \
    --transfers 8 \
    --checkers 8 \
    --umask 002 \
    --log-level INFO \
    --retries 10 \
    --low-level-retries 10 \
    --poll-interval 10m \
    --s3-chunk-size 64M \
    --s3-upload-cutoff 64M

ExecStop=/usr/bin/fusermount -u -z /home/ivan/Storage/s3
ExecStopPost=/usr/bin/sleep 2
TimeoutStopSec=20

Restart=always
RestartSec=60

KillMode=mixed

[Install]
WantedBy=default.target

The unit is identical in structure to the SFTP one: a delayed start after the network is up, the mount command, a lazy unmount on stop, and automatic restarts.

Reload Systemd configuration:

BASH
systemctl --user daemon-reload

Now activate the unit start/autostart:

BASH
systemctl --user enable --now s3-storage-1.service

systemctl --user status s3-storage-1.service

If everything is OK, the status will show active (running).

Check the mount point:

BASH
findmnt --real

df -h ~/Storage/s3

ls -l ~/Storage/s3

To view the service journal:

BASH
journalctl --user -u s3-storage-1.service

Now the service will start when the user logs into the system.
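The three units in this article differ only in the remote and the mount point. One way to avoid the duplication (a sketch, not from the original article) is a systemd template unit that reads both values from a per-instance environment file, e.g. ~/.config/rclone/sftp-storage-1.env containing RCLONE_REMOTE=... and RCLONE_MOUNT=... lines. Save it as ~/.config/systemd/user/rclone-mount@.service and start instances with systemctl --user enable --now rclone-mount@sftp-storage-1.service:

```ini
[Unit]
Description=Rclone mount (%i)
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
# %h expands to the user's home directory, %i to the instance name
EnvironmentFile=%h/.config/rclone/%i.env
ExecStartPre=/usr/bin/sleep 5
ExecStart=/usr/bin/rclone mount ${RCLONE_REMOTE} ${RCLONE_MOUNT} --vfs-cache-mode full
ExecStop=/usr/bin/fusermount -u -z ${RCLONE_MOUNT}
TimeoutStopSec=20
Restart=always
RestartSec=60
KillMode=mixed

[Install]
WantedBy=default.target
```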

Afterword

This is how you can conveniently mount remote storage and use it as a local directory📁, even several at once!
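As an illustration, all three example remotes could be brought up with one hypothetical helper script (mount-all.sh is not part of rclone; the remote names and paths are the ones used throughout the article):

```shell
#!/bin/sh
# Hypothetical mount-all.sh: mount every configured remote in one go

mount_one() {
    remote="$1"; local_dir="$2"
    mkdir -p "$local_dir"
    # Skip directories that already have something mounted on them
    if mountpoint -q "$local_dir"; then
        echo "already mounted: $local_dir"
    else
        rclone mount --daemon "$remote" "$local_dir" \
            || echo "failed to mount $remote (is rclone installed?)"
    fi
}

mount_one "sftp-storage-1:/home/ivan/data" "$HOME/Storage/sftp"
mount_one "webdav-storage-1:/data"         "$HOME/Storage/webdav"
mount_one "s3-storage-1:/data"             "$HOME/Storage/s3"
```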

I discovered rclone relatively recently and was pleasantly surprised by its functionality. I will continue to study this useful tool🔧.

That’s all for today. Thank you for reading my blog📝. Good luck!


Copyright Notice

Author: Иван Чёрный

Link: https://r4ven.me/en/storage/rclone-mount-montirovanie-setevyh-hranilishch-sftp-webdav-s3-kak-lokalnoy-direktorii-v-linux/

License: CC BY-NC-SA 4.0

Use of the blog's materials is permitted provided that authorship/the source is credited, the use is non-commercial, and the license is preserved.
