Column
Inside the Kernel
NFS Examined
Emmett walks you through exporting and then mounting a file system using the latest version of the Network File System.
by Emmett Dulaney
3/17/2009 -- The Linux 2.6 kernel has built-in support for the latest version of the Network File System. NFSv4 builds on earlier versions of NFS but, unlike them, offers stronger security and was designed to operate in an Internet environment.
NFSv4 uses the RPCSEC_GSS protocol for security (GSS stands for Generic Security Services). You can continue to use the older user ID- and group ID-based authentication with NFSv4, but if you want to use RPCSEC_GSS, you have to run three additional services: rpc.svcgssd on the server, rpc.gssd on the client and rpc.idmapd on both the client and the server.
Exporting a File System with NFS
Start with the server system that exports (i.e., makes available to the client systems) the contents of a directory. On the server, run the NFS service and designate one or more file systems to export.
To then export a file system, you have to add an appropriate entry to the /etc/exports file. For example, suppose you want to export the /home directory and enable the host named LNBP75 to mount this file system for read and write operations. You can do so by adding the following entry to the /etc/exports file:
/home LNBP75(rw,sync)
If you want to give access to all hosts on a LAN such as 192.168.0.0, you could change this line to:
/home 192.168.0.0/24(rw,sync)
Every line in the /etc/exports file has this general format:
directory host1(options) host2(options) ...
The first field is the directory being shared via NFS, followed by one or more fields that specify which hosts can mount that directory remotely and a number of options within parentheses. You can specify the hosts with names or IP addresses, including ranges of addresses. The options within parentheses denote the kind of access each host is granted and how user and group IDs from the server are mapped to IDs on the client. (For example, if a file is owned by root on the server, what owner is that on the client?)
Within the parentheses, commas separate the options. For example, if a host is allowed both read and write access -- and all IDs are to be mapped to the anonymous user (by default this is the user named nobody) -- then the options look like this:
(rw,all_squash)
There are two types of options you can use in the /etc/exports file: general options and user ID mapping options. The following table from Linux All-in-One Desk Reference For Dummies gives a good summary of the general options available:
Option | Function
secure | Allows connections only from ports below 1024 (default).
insecure | Allows connections from any port, including those above 1024.
ro | Allows read-only access (default).
rw | Allows both read and write access.
sync | Commits data to disk before replying to the client (default).
async | Lets the server reply before write operations are committed to disk.
no_wdelay | Performs write operations immediately.
wdelay | Waits briefly to see whether related write requests arrive, then performs them together (default).
hide | Hides an exported directory that's a subdirectory of another exported directory (default).
no_hide | Behaves exactly the opposite of hide.
subtree_check | Performs subtree checking, which involves checking parent directories of an exported subdirectory whenever a file is accessed (default).
no_subtree_check | Turns off subtree checking (opposite of subtree_check).
insecure_locks | Allows insecure file locking.
And this table gives a summary of the available user ID mapping options:
Option | Function
all_squash | Maps all user IDs and group IDs to the anonymous user on the client.
no_all_squash | Maps remote user and group IDs to similar IDs on the client (default).
root_squash | Maps the remote root user to the anonymous user on the client (default).
no_root_squash | Maps the remote root user to the local root user.
anonuid=UID | Sets the user ID of the anonymous user used by the all_squash and root_squash options.
anongid=GID | Sets the group ID of the anonymous user used by the all_squash and root_squash options.
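To see how several of these options combine in practice, here is a hypothetical exports file; the sketch writes it to /tmp so nothing on a real system is touched, and the paths, host name and network are made up for illustration:

```shell
# Write a hypothetical exports file to /tmp -- the paths, the host
# LNBP75 and the 192.168.0.0/24 network are examples only.
cat > /tmp/exports.example <<'EOF'
/home     LNBP75(rw,sync) 192.168.0.0/24(ro,sync)
/srv/pub  *(ro,all_squash,anonuid=65534,anongid=65534)
EOF
# Each line: a directory, then one or more host(options) fields.
cat /tmp/exports.example
```

Entries like these would go in /etc/exports on a real server, followed by exportfs -a to put them into effect.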
After adding the entry in the /etc/exports file, manually export the file system by typing the following command in a terminal window:
exportfs -a
This command exports all file systems defined in the /etc/exports file. Now you can start the NFS server processes. How you do this differs by distribution. In Debian, start the NFS server by logging in as root and typing /etc/init.d/nfs-kernel-server start in a terminal window. In Fedora, type /etc/init.d/nfs start. In SUSE, type /etc/init.d/nfsserver start.
If you want the NFS server to start when the system boots, type update-rc.d nfs-kernel-server defaults in Debian, chkconfig --level 35 nfs on in Fedora, chkconfig --level 35 nfsserver on in SUSE, and update-rc.d nfs-user-server defaults in Xandros.
When the NFS service is up, the server side of NFS is ready. Now you can try to mount the exported file system from a client system and then access the exported file system as needed. If you ever make any changes to the exported file systems listed in the /etc/exports file, remember to restart the NFS service by invoking the script in /etc/init.d directory with restart as the argument (instead of the start argument that you use to start the service).
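Before mounting from a client, you can also ask the server what it is exporting. The showmount command queries the server's mount daemon; the host name below follows this article's example and assumes the server is up and reachable:

```shell
# List the file systems exported by the (hypothetical) host lnbp200:
showmount -e lnbp200
```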
Mounting an NFS File System
To access an exported NFS file system on a client system, you have to mount that file system on a mount point, which is nothing more than a local directory. For example, suppose that you want to access the /home directory exported from the server named LNBP200 at the local directory /mnt/lnbp200 on the client system. To do so, follow these steps:
- Log in as root and create the directory with this command:
mkdir /mnt/lnbp200
- Type the following command to mount the directory from the remote system (LNBP200) on the local directory /mnt/lnbp200:
mount lnbp200:/home /mnt/lnbp200
After completing these steps, you can then view and access exported files from the local directory /mnt/lnbp200.
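If you want the client to mount this file system automatically at boot, a line like the following in the client's /etc/fstab does the same job as the manual mount command (the option string here is one reasonable choice, not a requirement):

```
lnbp200:/home  /mnt/lnbp200  nfs  rw,hard,intr  0 0
```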
To confirm that the NFS file system is indeed mounted, log in as root on the client system and type mount in a terminal window. You should see a line similar to the following about the NFS file system:
lnbp200:/home on /mnt/lnbp200 type nfs (rw,addr=192.168.0.4)
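On a busy client, mount prints many lines, so it helps to filter for just the NFS entries. The snippet below pipes a sample line of mount-style output (rather than running mount itself) so it is self-contained:

```shell
# Filter mount-style output for NFS entries. A sample line stands in
# for real `mount` output; field 5 of each line is the file system type.
echo 'lnbp200:/home on /mnt/lnbp200 type nfs (rw,addr=192.168.0.4)' |
  awk '$5 == "nfs" { print $1, "->", $3 }'
# On a real client you would run:  mount | awk '$5 == "nfs" { print $1, "->", $3 }'
```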
NFS supports two types of mount operations: hard and soft. A mount is hard by default, meaning that if the NFS server doesn't respond, the client will keep trying to access the server indefinitely until the server responds. You can soft mount an NFS volume by adding the -o soft option to the mount command. For a soft mount, the client returns an error if the NFS server fails to respond.
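A soft mount of the same export might look like the following; the timeo and retrans values shown are illustrative tuning knobs (timeout in tenths of a second, and number of retries before the client gives up), not required settings:

```shell
# Soft-mount the export: return an error after the retry limit instead
# of blocking forever if lnbp200 stops responding (host name is the
# article's example).
mount -o soft,timeo=100,retrans=3 lnbp200:/home /mnt/lnbp200
```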
Emmett Dulaney is the author of several books on Linux, Unix and certification.