Red Hat Linux Networking and System Administration (P12)

■■ Hide filesystems beneath — This option corresponds to the hide option listed in Table 12-1.

■■ Export only if mounted — This option corresponds to the mp[=path] option listed in Table 12-1. Selecting this option is equivalent to specifying the mp mount option without the optional path mount point.

■■ Optional mount point — This option corresponds to the path portion of the mp[=path] option listed in Table 12-1. You can type the mount point, if you want to specify one, in the text box, or use the Browse button to select the mount point graphically.

■■ Set explicit Filesystem ID — This option corresponds to the fsid=n option listed in Table 12-1. Enter the actual FSID value in the text box.

Figure 12-4 shows the General Options tab. We have disabled subtree checking for /home and left the required sync option (Sync write operations on request) enabled.

Figure 12-4 The General Options tab.

The User Access tab, shown in Figure 12-5, implements the UID/GID remapping and root-squashing options described earlier in this chapter. Select the Treat remote root user as local root user check box if you want the equivalent of no_root_squash. To remap all UIDs and GIDs to the UID and GID of the anonymous user (the all_squash option from Table 12-1), select the Treat all client users as anonymous users check box. As you might guess, if you want to specify the anonymous UID or GID, click the corresponding check boxes to enable these options and then type the desired value in the matching text boxes. In Figure 12-5, all clients will be remapped to the anonymous user. Figure 12-5 shows the User Access tab as it appears in Fedora Core; it looks slightly different in RHEL.

Figure 12-5 The User Access tab.

When you have finished configuring your new NFS export, click the OK button to close the Add NFS Share dialog box. After a short pause, the new NFS share appears in the list of NFS exports, as shown in Figure 12-6. If you want to change the characteristics of an NFS share, select the share you want to modify and click the Properties button on the toolbar. This will open the Edit NFS Share dialog box, which has the same interface as the Add NFS Share dialog box. Similarly, if you want to remove an NFS share, select the export you want to cancel and click the Delete button. To close the NFS Server Configuration tool, type Ctrl+Q or click File ➪ Quit on the menu bar.

Figure 12-6 Adding an NFS share.
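Behind the scenes, the NFS Server Configuration tool stores these settings as entries in /etc/exports. As a rough sketch only (the client network and anonymous IDs below are hypothetical, not values taken from the figures), an export that remaps all client users to the anonymous account might look like this:

/home 192.168.0.0/24(rw,sync,all_squash,anonuid=65534,anongid=65534)

If you edit /etc/exports by hand instead, running exportfs -r afterward re-exports the file systems with the new options.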
Configuring an NFS Client

Configuring client systems to mount NFS exports is simpler than configuring the NFS server itself. This section of the chapter provides a brief overview of client configuration, identifies the key files and commands involved in configuring and mounting NFS exported file systems, and shows you how to configure a client to access the NFS exports configured in the previous section.

Configuring a client system to use NFS involves making sure that the portmapper and the NFS file locking daemons statd and lockd are available, adding entries to the client's /etc/fstab for the NFS exports, and mounting the exports using the mount command. As explained at the beginning of the chapter, a mounted NFS exported file system is functionally equivalent to a local file system. Thus, as you might expect, you can use the mount command at the command line to mount NFS exports manually, just as you might mount a local file system. Similarly, to mount NFS exports at boot time, you just add entries to the file system mount table, /etc/fstab. As you will see in the section titled "Using Automount Services" at the end of this chapter, you can even mount NFS file systems automatically when they are first used, without having to mount them manually. The service that provides this feature is called, yup, you guessed it, the automounter. More on the automounter in a moment.

As a networked file system, NFS is sensitive to network conditions, so the NFS client daemons accept a few options, passed via the mount command, that address NFS's sensitivities and peculiarities. Table 12-4 lists the major NFS-specific options that mount accepts. For a complete list and discussion of all NFS-specific options, see the NFS manual page (man nfs).

Table 12-4 NFS-Specific Mount Options

OPTION          DESCRIPTION
bg              Enables mount attempts to run in the background if the first mount attempt times out (disable with nobg).
fg              Causes mount attempts to run in the foreground if the first mount attempt times out, the default behavior (disable with nofg).
hard            Enables failed NFS file operations to continue retrying after reporting "server not responding" on the system, the default behavior (disable with nohard).
intr            Allows signals (such as Ctrl+C) to interrupt a failed NFS file operation if the file system is mounted with the hard option (disable with nointr). Has no effect unless the hard option is also specified; it does nothing if soft or nohard is specified.
lock            Enables NFS locking and starts the statd and lockd daemons (disable with nolock).
mounthost=name  Sets the name of the server running mountd to name.
mountport=n     Sets the mountd server port to which to connect to n (no default).
nfsvers=n       Specifies the NFS protocol version to use, where n is 1, 2, 3, or 4.
port=n          Sets the NFS server port to which to connect to n (the default is 2049).
posix           Mounts the export using POSIX semantics so that the POSIX pathconf command will work properly.
retry=n         Sets the time to retry a mount operation before giving up to n minutes (the default is 10,000).
rsize=n         Sets the NFS read buffer size to n bytes (the default is 1024); for NFSv4, the default value is 8192.
soft            Allows an NFS file operation to fail and terminate (disable with nosoft).
tcp             Mounts the NFS file system using the TCP protocol (disable with notcp).
timeo=n         Sets the RPC transmission timeout to n tenths of a second (the default is 7). Especially useful with the soft mount option.
udp             Mounts the NFS file system using the UDP protocol, the default behavior (disable with noudp).
wsize=n         Sets the NFS write buffer size to n bytes (the default is 1024); for NFSv4, the default value is 8192.

The options you are most likely to use are rsize, wsize, hard, intr, and nolock. Increasing the default size of the NFS read and write buffers improves NFS's performance. The suggested value is 8192 bytes, that is, rsize=8192 and wsize=8192, but you might find that you get better performance with larger or smaller values. The nolock option can also improve performance because it eliminates the overhead of file locking calls, but not all servers support file locking over NFS. If an NFS file operation fails, you can use a keyboard interrupt, usually Ctrl+C, to interrupt the operation if the exported file system was mounted with both the intr and hard options. This prevents NFS clients from hanging.
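Once you have mounted an export (as shown in the following pages), it can be useful to confirm which of these options are actually in effect, because the client and server may negotiate values different from the ones you requested. A quick sketch, with an illustrative server name and flags (the exact output format depends on your nfs-utils version):

# nfsstat -m
/home from bubba:/home
 Flags: rw,vers=3,rsize=8192,wsize=8192,hard,intr,proto=tcp,timeo=7,retrans=3

You can also simply run mount with no arguments and look for the NFS entries and their option lists.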
Like an NFS server, an NFS client needs the portmapper running in order to process and route RPC calls and returns from the server to the appropriate port and programs. Accordingly, make sure that the portmapper is running on the client system using the portmap initialization script:

# service portmap status

If the output says portmap is stopped (it shouldn't be), start the portmapper:

# service portmap start

To use NFS file locking, both an NFS server and any NFS clients need to run statd and lockd. As explained in the section on configuring an NFS server, the simplest way to accomplish this is to use the initialization script /etc/rc.d/init.d/nfslock. Presumably, you have already started nfslock on the server, so all that remains is to start it on the client system:

# service nfslock start

Once you have configured the mount table and started the requisite daemons, all you need to do is mount the file systems. You learned about the mount command used to mount file systems in a previous chapter, so this section shows only the mount invocations needed to mount NFS file systems. During initial configuration and testing, it is easiest to mount and unmount NFS exports at the command line. For example, to mount /home from the server configured at the end of the previous section, execute the following command as root:

# mount -t nfs bubba:/home /home

You can, if you wish, specify client mount options using mount's -o argument, as shown in the following example:

# mount -t nfs bubba:/home /home -o rsize=8192,wsize=8192,hard,intr,nolock

After satisfying yourself that the configuration works properly, you probably want to mount the exports at boot time. Fortunately, Fedora Core and RHEL make this easy because the initialization script /etc/rc.d/init.d/netfs, which runs at boot time, automatically mounts all networked file systems not configured with the noauto option, including NFS file systems. It does this by parsing /etc/fstab, looking for file systems of type nfs, nfs4 (described in the next section), smbfs (Samba), cifs (Common Internet File System), or ncpfs (NetWare), and mounting those file systems.

TIP: If you are connecting an NFSv4 client to an NFSv2 server, you must use the mount option nfsvers=2 or the mount attempt will fail. Use nfsvers=1 if you are connecting to an NFSv1 server. We learned this the hard way while trying to mount an export from an ancient server running Red Hat Linux 6.2 (we told you it was ancient). We kept getting an error indicating the server was down when we knew it wasn't. Finally, we logged in to the server, discovered it was running a very old distribution, and were able to mount the export. While we're somewhat embarrassed to be running such an old version of Red Hat, we're also quite pleased to report that it has been running so well for so long that we forgot just how old it was.
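When a mount fails like this, two quick checks from the client usually narrow the problem down: ask the server what it exports, and then retry the mount with an explicit protocol version. A sketch of both steps, using an illustrative hostname and export path rather than the ones in this chapter:

# showmount -e ancientbox
Export list for ancientbox:
/export *

# mount -t nfs -o nfsvers=2 ancientbox:/export /mnt/old

showmount queries the server's mountd daemon, so it also confirms that the portmapper and mountd are reachable before you spend time debugging mount options.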
Configuring an NFSv4 Client

The introduction of NFSv4 into the kernel added some NFSv4-specific behavior of which you need to be aware and changed some of the mount options. This section covers NFSv4-specific features and begins with the mount options that have changed in terms of their meaning or behavior. Table 12-5 lists the new or changed mount options.

The two new options listed in Table 12-5 are clientaddr and proto. Version 3 of NFS introduced NFS over TCP, which improved NFS's reliability over the older UDP-based implementation. Under NFSv3, you would use the mount option tcp or udp to tell the client whether to use TCP or UDP to communicate with the server. NFSv4 replaces tcp and udp with a single option, proto=, that accepts two arguments: tcp or udp. In case it isn't clear, the NFSv3 option tcp is equivalent to the NFSv4 option proto=tcp. Figuring out the udp option is left as an exercise for the reader.

Table 12-5 NFSv4-Specific Mount Options

OPTION          DESCRIPTION
clientaddr=n    Causes a client on a multi-homed system to use the IP address specified by n to communicate with an NFSv4 server.
proto=type      Tells the client to use the network protocol specified by type, which can be tcp or udp (the default is udp); this option replaces the tcp and udp options from earlier versions of NFS.
rsize=n         Sets the read buffer size to n bytes (the default for NFSv4 is 8192); the maximum value is 32768.
sec=mode        Sets the security model to mode, which can be sys, krb5, krb5i, or krb5p.
wsize=n         Sets the write buffer size to n bytes (the default for NFSv4 is 8192); the maximum value is 32768.

The semantics of the rsize and wsize options have changed with NFSv4. The default buffer size for NFSv4 is 8192 bytes, but it can grow as large as 32,768 bytes, which should result in a noticeable performance improvement, especially when you are transferring large files. The buffer setting is only a suggestion, however, because the client and server negotiate the buffer size to select an optimal value according to network conditions.

Strictly speaking, the sec option for selecting the security model NFS uses isn't new with NFSv4. It existed in NFSv3, but now that NFSv4 has added strong encryption to the core NFS protocol, using this option is worthwhile. As shown in Table 12-5, legal values for the sec option are sys, krb5, krb5i, and krb5p. sys, the default security model, uses standard Linux UIDs and GIDs to authenticate NFS transactions. krb5 uses Kerberos 5 to authenticate users but takes no special measures to validate NFS transactions; krb5i (Kerberos 5 with integrity checking) uses Kerberos 5 to authenticate users and checksums to enforce data integrity on NFS transactions; krb5p (Kerberos 5 with privacy checking) uses Kerberos 5 to authenticate users and encryption to protect NFS transactions against packet sniffing. You can use the Kerberos-enabled security models only if the NFS server supports both NFSv4 and the requested security model.

Example NFS Client

The example in this section demonstrates how to mount /home and /usr/local from the NFS server configured earlier in the chapter.

1. Clients that want to use both exports need to have the following entries in /etc/fstab:

bubba:/usr/local /usr/local nfs rsize=8192,wsize=8192,hard,intr,nolock 0 0
bubba:/home /home nfs rsize=8192,wsize=8192,hard,intr,nolock 0 0

The hostname used on the left side of the colon, bubba, must resolve to an IP address, either using DNS or an entry in the /etc/hosts file. We don't recommend using an IP address because, in a well-run system, IP addresses can change, whereas a hostname won't. If DNS is properly configured and maintained, the hostname will always point to the proper system regardless of what that system's IP address is at any given time.

2. If it isn't already running, start the portmapper using the following command:

# service portmap start
Starting portmapper: [ OK ]
3. Mount the exports using one of the following commands:

# mount -a -t nfs

or

# mount /home /usr/local

or

# service netfs start

The first command mounts all (-a) file systems of type nfs (-t nfs). The second command mounts only the file systems /home and /usr/local (for this command to work, the file systems you want to mount must be listed in /etc/fstab). The third command uses the service command to mount all network file systems by invoking the netfs service.

Verify that the mounts completed successfully by attempting to access files on each file system. If everything works as designed, you are ready to go. If all the preceding seems unnecessarily tedious, it only seems that way because it is more involved to explain how to set up an NFS client than it is actually to do it. Once you've done it a couple of times, you'll be able to dazzle your friends and impress your coworkers with your wizardly mastery of NFS. You can really wow them after reading the next section, which shows you how to avoid the tedium by using the automounter to mount file systems automatically the first time you use them.

Using Automount Services

The easiest way for client systems to mount NFS exports is to use autofs, which automatically mounts file systems not already mounted when the file system is first accessed. autofs uses the automount daemon to mount and unmount file systems that automount has been configured to control. Although slightly more involved to configure than the other methods for mounting NFS file systems, autofs setup has to be done only once. In the next chapter, you'll even learn how to distribute automounter configuration files from a central server, obviating the need to touch client systems manually at all.

autofs uses a set of map files to control automounting. A master map file, /etc/auto.master, associates mount points with secondary map files. The secondary map files, in turn, control the file systems mounted under the corresponding mount points. For example, consider the following /etc/auto.master autofs configuration file:

/home /etc/auto.home
/var /etc/auto.var timeout 600

This file associates the secondary map file /etc/auto.home with the mount point /home and the map file /etc/auto.var with the /var mount point. Thus, /etc/auto.home defines the file systems mounted under /home, and /etc/auto.var defines the file systems mounted under /var.

Each entry in /etc/auto.master, what we'll refer to as the master map file, consists of at least two and possibly three fields. The first field is the mount point. The second field identifies the full path to the secondary map file that controls the mount point. The third field, which is optional, consists of options that control the behavior of the automount daemon. In the example master map file, the automount option for the /var mount point is timeout 600, which means that after 600 seconds (10 minutes) of inactivity, the /var mount point will be unmounted automatically. If a timeout value is not specified, it defaults to 300 seconds (5 minutes).

The secondary map file defines the mount options that apply to file systems mounted under the corresponding directory. Each line in a secondary map file has the general form:

localdir [-[options]] remotefs

localdir refers to the directory beneath the mount point where the NFS mount will be mounted. remotefs specifies the host and pathname of the NFS mount.
remotefs is specified using the host:/path/name format described in the previous section. options, if specified, is a comma-separated list of mount options. These options are the same options you would use with the mount command. Given the entry /home /etc/auto.home in the master map file, consider the following entries in /etc/auto.home:

kurt -rw,soft,intr,rsize=8192,wsize=8192 luther:/home/kurt
terry luther:/home/terry

In the first line, localdir is kurt, options is -rw,soft,intr,rsize=8192,wsize=8192, and remotefs is luther:/home/kurt. This means that the NFS export /home/kurt on the system named luther will be mounted on /home/kurt in read-write mode, as a soft mount, with read and write buffer sizes of 8192 bytes. A key point to keep in mind is that if /home/kurt exists on the local system, its contents will be temporarily replaced by the contents of the NFS mount /home/kurt. In fact, it is probably best if the directory specified by localdir does not exist, because autofs dynamically creates it when it is first accessed.

The second line of the example auto.home file specifies localdir as terry, no options, and remotefs as the NFS exported directory /home/terry exported from the system named luther. In this case, then, /home/terry on luther will be mounted as /home/terry on the NFS client using the default NFS mount options. Again, /home/terry should not exist on the local system, but the base directory, /home, should exist.

Suppose that you want to use autofs to mount a shared projects directory named /proj on client systems on the /projects mount point. On the NFS server (named diskbeast in this case), you would export /proj as described in the section "Configuring an NFS Server." On each client that will mount this export, create an /etc/auto.master file that resembles the following:

/projects /etc/auto.projects timeout 1800

This entry tells the automount daemon to consult the secondary map file /etc/auto.projects for all mounts located under /projects. After 1800 seconds without file system activity in /projects, autofs will automatically unmount it.

NOTE: If the autofs RPM is installed, Fedora Core and RHEL systems provide a default /etc/auto.master map file. All of the entries are commented out using the # sign, so you can edit the existing file if you wish.

Next, create the following /etc/auto.projects file on each client that will use diskbeast's export:

code -rw,soft,rsize=8192,wsize=8192 diskbeast:/proj

This entry mounts /proj from diskbeast as /projects/code on the client system. The mount options indicate that the directory will be read/write, that it will be a soft mount, and that the read and write block sizes are 8192 bytes. Recall from Table 12-4 that a soft mount means that the kernel can time out the mount operation after a period of time specified by the timeo=n option, where n is defined in tenths of a second.

Finally, as the root user, start the autofs service:

# /sbin/service autofs start
Starting automount: [ OK ]

After starting the autofs service, you can use the status option to verify that the automount daemon is working:

# /sbin/service autofs status
Configured Mount Points:
/usr/sbin/automount timeout 600 /projects file /etc/auto.projects
Active Mount Points:
/usr/sbin/automount timeout 600 /projects file /etc/auto.projects
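With autofs running, you do not mount anything explicitly; simply accessing a path under the managed mount point triggers the mount. For example, assuming the hypothetical diskbeast setup above:

# ls /projects/code

The first access causes automount to mount diskbeast:/proj on /projects/code, and after the configured period of inactivity the automounter unmounts it again.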
[...] on /projects type autofs (rw,fd=4,pgrp=11081,minproto=2,maxproto=4)

# mount -t nfs
diskbeast:/proj on /projects/code type nfs (rw,soft,rsize=8192,wsize=8192,nfsvers=2,addr=192.168.0.1)

Using mount's -t option limits the output to file systems of the specified type: autofs for automounted file systems and nfs for NFS file systems. The first output line shows that automount is managing the /projects file system; the [...]

[...] portmap, statd, and lockd on NFS clients that were suggested for the NFS server. In summary, using TCP wrappers, the secure, root_squash, and nosuid options, and sturdy packet filters can increase the overall security of your NFS setup. However, NFS is a complex, nontrivial subsystem, so it is entirely conceivable that new bugs and exploits will be discovered.

Summary

In this chapter, you [...]

[...] NIS distributes information that needs to be shared throughout a Linux network to all machines that participate in the NIS domain. Originally developed by Sun Microsystems, NIS was first known as Yellow Pages (YP), so many NIS-related commands begin with the letters yp, such as ypserv, ypbind, and yppasswd. Unfortunately for Sun, the phrase “Yellow Pages” was (and is) a registered trademark of British Telecom [...]

[...] map out to the slave servers, if present.

Starting the NIS Servers at Boot Time

After you have configured your NIS server, you should make the system changes persistent, which means permanently storing the NIS domain name in the network configuration and ensuring that the required daemons (ypserv, yppasswdd, and, if you use slave servers, ypxfrd) start and stop when the system starts and stops. The first [...]

[...] more NIS clients. If your Linux system is going to be part of a network with existing NIS servers, you only need to install and configure the NIS client programs: ypbind, ypwhich, ypcat, yppoll, and ypmatch. The most important program on an NIS client is the NIS client daemon, ypbind. ypbind is usually started from the system's startup procedure. As soon as ypbind is running, your system has become an NIS [...]

[...] options and client access specifications that control ypserv and the NIS transfer daemon, ypxfrd. The most important configuration file is /var/yp/securenets. As a rule, RPC, on which NIS is based, happily replies to any client that asks for information. Obviously, you don't want to share your password database, just for example, with any host that asks for it. So, securenets makes it possible to restrict [...]

[...] Information System.

A HUNDRED-WORD TOUR OF SLP

SLP, the Service Location Protocol, provides a mechanism for networked applications to discover the presence, runtime behavior, and location of services available on a network (think Windows' Network Neighborhood). The implementation used on Linux systems is OpenSLP, available on the Web at www.openslp.org. Ordinarily, to find and use network services, such as [...]

[...] to configure NFS, the Network File System. First, you found a general overview of NFS, its typical uses, and its advantages and disadvantages. Next, you found out how to configure an NFS server, you identified key files and commands to use, and you saw the process with a typical real-world example. With the server configured and functioning, you then learned how to configure a client system to access [...]
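The securenets excerpt above only hints at the file's format. A minimal /var/yp/securenets sketch (the network numbers are hypothetical) consists of netmask and network pairs, one per line:

# always allow the local host
255.0.0.0       127.0.0.0
# allow hosts on the local class C network
255.255.255.0   192.168.0.0

Any host whose address does not match one of these pairs is refused by ypserv and ypxfrd.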
[...] exported file system, logging in to the client as a normal user, and then using the UID root program to become root on the client. In some cases, you might also disable binaries on mounted file systems using the noexec option, but this effort almost always proves to be impractical or even counterproductive, because one of the benefits of NFS is sharing file systems, such as /usr or /usr/local, that contain [...]

[...] servers, you need to perform configuration steps on the slave servers. This section shows you how to create an NIS master server and a slave server.

NOTE: For more information about NIS configuration, see the NIS HOWTO at the Linux Documentation Project, linuxdoc.org/HOWTO/NIS-HOWTO/index.html, and the NIS Web pages at www.linux-nis.org.

Key Files and Commands

Table 13-1 lists the commands, daemons, and [...]
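For NFS mounts that do not need to run programs or set-UID binaries from the export, the restrictions mentioned in the NFS security excerpts above translate into a hypothetical /etc/fstab entry such as the following (the server name and path are illustrative):

bubba:/home /home nfs rw,nosuid,hard,intr 0 0

Adding noexec as well is possible, but as the excerpt notes, it is rarely practical for shared software directories such as /usr/local.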
