Linux Kung Fu: Convenient File Management over SSH

If you have more than one Linux computer, you probably use ssh all the time. It's a great tool, but I've always found one thing odd about it. Although ssh connections can transfer files using scp and sftp, there is no way to move files between the local and remote systems without running a program on the local host, or without connecting back to the local machine from the remote one.







The latter is a real problem, since the machines we connect from often sit behind a firewall or a NAT router, that is, without a permanent IP address. As a result, the server has no way to connect back to the local system from which it was accessed. If, within an ssh session, you could simply take a local or remote file and move it wherever you wanted, that would be very convenient.



Actually, I didn't quite achieve this goal, but I got very close. In this article I will describe a script that lets you mount remote directories on your local computer. You will need sshfs installed on the local machine, but nothing has to change on the remote side, where you may not be able to install software anyway. If you spend some time configuring the systems, and there is a working ssh server on the client computer, you can also mount local directories on remote systems. You don't have to worry about blocked IP addresses or ports. In fact, if you can connect to the remote machine at all, the technique described here will work.



Putting it all together, I end up very close to the goal: I can work in a shell on either the client or the server and conveniently read and write files on both sides of the connection. It just takes some correct configuration.



Is there a catch here?



You may suspect there is some kind of catch. After all, we are really talking about two ssh connections: one to mount the file system and one to log into the computer. And that is indeed the case. But if ssh is configured correctly, you only need to authenticate once, and setting up the second connection costs almost nothing.



In addition, the script I'm about to describe makes the work much easier. It hides the details from the user, so connecting looks (almost) as usual, and after that everything just works.



A few words about sshfs



The sshfs utility is a filesystem in userspace (FUSE), that is, a layer in user space that sits on top of an underlying file system. In this case, that underlying file system is an ssh server with sftp support. It lets you work with files on a remote system as if they lived in a real file system on the local computer. If you haven't tried sshfs yet, give it a go. It works very well.



Suppose you log in to a computer called myserver. Run the following command on your local machine:



sshfs myserver:/home/admin ~/mounts/myserver


This makes the remote computer's /home/admin directory accessible on the local system under ~/mounts/myserver.



sshfs accepts various options. For example, you can have it reconnect automatically after losing the connection. See the sshfs documentation for details.
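For instance, a mount that survives dropped connections might look like this (a sketch; the keepalive values here are just reasonable guesses, and ServerAliveInterval and ServerAliveCountMax are standard ssh options that sshfs passes through to ssh):

```
sshfs -o reconnect,ServerAliveInterval=15,ServerAliveCountMax=3 \
      myserver:/home/admin ~/mounts/myserver
```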



Since sshfs works with a remotely mounted view of the files, all changes you make are saved on the remote machine, and after the sshfs connection is closed, nothing remains on the local computer. That is what we are about to address.



Preliminary preparation



Before describing the script mentioned above, I want to cover some client-side settings that you can adapt to your needs. I create a directory ~/remote, and in it a subdirectory for each remote computer, for example ~/remote/fileserver and ~/remote/lab.
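The mount points themselves are just empty directories, one per host (the names fileserver and lab match the examples in this article):

```shell
# create a mount point per remote host under ~/remote
mkdir -p ~/remote/fileserver ~/remote/lab
```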



The script is called sshmount and takes the same arguments as ssh. To make it easier to use, store the information about the remote host in ~/.ssh/config so that you can use simple, short host names. For example, the entry for the computer lab might look like this:



Host lab
Hostname lab.wd5gnr-dyn.net
Port 444
User alw
ForwardX11 yes
ForwardX11Trusted yes
TCPKeepAlive yes
Compression yes
ControlMaster auto
ControlPath ~/.ssh/master-%r@%h:%p


This isn't strictly necessary, but it gives you a nice-looking directory such as ~/remote/lab instead of an unwieldy construct like ~/remote/alw@lab.wd5gnr-dyn.net:444. There is nothing mysterious about these parameters. The only ones I want to draw your attention to are ControlMaster and ControlPath, which let subsequent connections reuse the first one, making them much faster. That matters a lot here.



In addition, you can set up automatic login to the remote system using ssh keys.
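A minimal version of that setup, assuming the lab host entry shown above, looks like this (ssh-copy-id appends your public key to the remote authorized_keys file):

```
ssh-keygen -t ed25519     # generate a key pair if you don't already have one
ssh-copy-id lab           # install the public key on the remote host
```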



Script



The script can be used in two ways. If it is invoked through a link named sshunmount, it unmounts the file system associated with the specified remote host. If it is invoked any other way (normally as sshmount), it performs three actions:



  1. It checks whether ~/remote contains a subdirectory whose name matches the host name (for example, lab). If there is no such directory, it prints an error message and continues.
  2. If the directory exists, the script checks the list of mounted file systems to see whether the required file system is already mounted. If so, it continues.
  3. If the directory is not mounted, it calls sshfs and then continues.

In all three cases, the script finishes by running ssh with the arguments you supplied, logging you in.
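Since the sshunmount behavior depends only on the name the script is invoked under, a symlink is enough to create it (assuming, for illustration, that the script is saved in ~/bin):

```shell
mkdir -p ~/bin
# create the sshunmount alias as a symlink to the script;
# the link may dangle until sshmount itself is installed
ln -sf sshmount ~/bin/sshunmount
```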


This script can be found on GitHub. Here is its code, with some comments removed:



#!/bin/bash

if [ "$1" == "" ]
then
    echo "Usage: sshmount host [ssh_options] - Mount remote home folder on ~/remote/host and log in"
    echo "   or: sshunmount host - Remove mount from ~/remote/host"
    exit 1
fi

# If we were invoked as sshunmount, just unmount and exit
if [ "$(basename "$0")" == "sshunmount" ]
then
    echo Unmounting... 1>&2
    fusermount -u "$HOME/remote/$1"
    exit $?
fi

# Otherwise we were invoked as sshmount...
if [ -d "$HOME/remote/$1" ]               # does the mount point exist?
then
    if mount | grep "$HOME/remote/$1 "    # already mounted?
    then
        echo Already mounted 1>&2
    else
        sshfs -o reconnect "$1:" "$HOME/remote/$1"   # mount the remote home directory
    fi
else
    echo "No remote directory ~/remote/$1 exists" 1>&2
fi
ssh "$@"    # log in


This script gives me half of what I need: it lets me conveniently work, on my local computer, with files that live on the remote machine I'm connected to. Making files on the local machine available from the remote computer is a little harder.



Solving the inverse problem



If you want to experiment with mounting folders from your local machine on a server, you will need an ssh server running on the local machine. If your local computer is visible and reachable from the server, that's easy: just run sshfs on the remote computer and mount a folder from the local one. But in many cases we cannot reach the local system, which may sit behind firewalls or routers. This is especially true when the local system is a laptop that connects to the network from different places.



Despite these difficulties, the task can still be solved. The solution has two parts.



First, when calling sshmount, specify an extra argument (you can edit the script if you find yourself typing this all the time):



sshmount MyServer -R 5555:localhost:22


Second, after connecting to the host, run the following command on the remote machine:



sshfs -p 5555 localhost:/home/me ~/local


The -R option creates a socket on port 5555 of the remote machine (which, of course, must be free) and connects it to port 22 of the local machine. Assuming an ssh server is listening on port 22 locally, this lets the server connect back to the local machine over the same connection. It doesn't need to know our IP address or have a port opened for it.
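If you use this reverse tunnel all the time, the -R option could instead live in the host's entry in ~/.ssh/config (a sketch reusing the lab host and port 5555 from this article; RemoteForward takes the remote listening port followed by the local destination):

```
Host lab
    RemoteForward 5555 localhost:22
```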



The sshfs command, which you could run at login, mounts the local directory /home/me onto ~/local on the remote server. If you are also logged in locally, you can inspect the environment variables that start with SSH_ to learn more about the ssh connection, for example $SSH_CLIENT and $SSH_TTY.



Of course, for these commands to work for you, you will need to substitute your own host names, directories, and port numbers. But once everything is configured, all the files you need are available on both the local and the remote machine. By the way, I did not try to set up circular mounting of directories; if you do, something very strange may happen.



Conclusion



I suspect you need to be careful when mounting remote folders on the local machine and local folders on the remote machine at the same time. For example, utilities that scan the entire file system could get confused by such a configuration. I am also still looking for a clean way to unmount the server's file system when the last session exits.



Even so, all of this makes for a convenient and reliable way to work with files over ssh. It should be noted that another option for working with files on remote systems is folder synchronization, which can also be used to move files between computers.



What do you use to work with files on remote Linux systems?









