Network Working Group                                        S. Shepler
Request for Comments: 3530                                  B. Callaghan
Obsoletes: 3010                                              D. Robinson
Category: Standards Track                                     R. Thurlow
                                                  Sun Microsystems, Inc.
                                                                C. Beame
                                                        Hummingbird Ltd.
                                                               M. Eisler
                                                               D. Noveck
                                                 Network Appliance, Inc.
                                                              April 2003

             Network File System (NFS) version 4 Protocol

Status of this Memo

This document specifies an Internet standards track protocol for the Internet community, and requests discussion and suggestions for improvements. Please refer to the current edition of the "Internet Official Protocol Standards" (STD 1) for the standardization state and status of this protocol. Distribution of this memo is unlimited.

Copyright Notice

Copyright (C) The Internet Society (2003). All Rights Reserved.

Abstract

The Network File System (NFS) version 4 is a distributed filesystem protocol which owes heritage to NFS protocol version 2, RFC 1094, and version 3, RFC 1813. Unlike earlier versions, the NFS version 4 protocol supports traditional file access while integrating support for file locking and the mount protocol. In addition, support for strong security (and its negotiation), compound operations, client caching, and internationalization has been added. Of course, attention has been applied to making NFS version 4 operate well in an Internet environment. This document replaces RFC 3010 as the definition of the NFS version 4 protocol.

Key Words

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in [RFC2119].

Table of Contents

   1.  Introduction
       1.1.  Changes since RFC 3010
       1.2.  NFS Version 4 Goals
       1.3.  Inconsistencies of this Document with Section 18
       1.4.  Overview of NFS version 4 Features
             1.4.1.  RPC and Security
             1.4.2.  Procedure and Operation Structure
             1.4.3.  Filesystem Model
                     1.4.3.1.  Filehandle Types
                     1.4.3.2.  Attribute Types
                     1.4.3.3.  Filesystem Replication and Migration
             1.4.4.  OPEN and CLOSE
             1.4.5.  File Locking
             1.4.6.  Client Caching and Delegation
       1.5.  General Definitions
   2.  Protocol Data Types
       2.1.  Basic Data Types
       2.2.  Structured Data Types
   3.  RPC and Security Flavor
       3.1.  Ports and Transports
             3.1.1.  Client Retransmission Behavior
       3.2.  Security Flavors
             3.2.1.  Security mechanisms for NFS version 4
                     3.2.1.1.  Kerberos V5 as a security triple
                     3.2.1.2.  LIPKEY as a security triple
                     3.2.1.3.  SPKM-3 as a security triple
       3.3.  Security Negotiation
             3.3.1.  SECINFO
             3.3.2.  Security Error
       3.4.  Callback RPC Authentication
   4.  Filehandles
       4.1.  Obtaining the First Filehandle
             4.1.1.  Root Filehandle
             4.1.2.  Public Filehandle
       4.2.  Filehandle Types
             4.2.1.  General Properties of a Filehandle
             4.2.2.  Persistent Filehandle
             4.2.3.  Volatile Filehandle
             4.2.4.  One Method of Constructing a Volatile Filehandle
       4.3.  Client Recovery from Filehandle Expiration
   5.  File Attributes
       5.1.  Mandatory Attributes
       5.2.  Recommended Attributes
       5.3.  Named Attributes
       5.4.  Classification of Attributes
       5.5.  Mandatory Attributes - Definitions
       5.6.  Recommended Attributes - Definitions
       5.7.  Time Access
       5.8.  Interpreting owner and owner_group
       5.9.  Character Case Attributes
       5.10. Quota Attributes
       5.11. Access Control Lists
             5.11.1.  ACE type
             5.11.2.  ACE Access Mask
             5.11.3.  ACE flag
             5.11.4.  ACE who
             5.11.5.  Mode Attribute
             5.11.6.  Mode and ACL Attribute
             5.11.7.  mounted_on_fileid
   6.  Filesystem Migration and Replication
       6.1.  Replication
       6.2.  Migration
       6.3.  Interpretation of the fs_locations Attribute
       6.4.  Filehandle Recovery for Migration or Replication
   7.  NFS Server Name Space
       7.1.  Server Exports
       7.2.  Browsing Exports
       7.3.  Server Pseudo Filesystem
       7.4.  Multiple Roots
       7.5.  Filehandle Volatility
       7.6.  Exported Root
       7.7.  Mount Point Crossing
       7.8.  Security Policy and Name Space Presentation
   8.  File Locking and Share Reservations
       8.1.  Locking
             8.1.1.  Client ID
             8.1.2.  Server Release of Clientid
             8.1.3.  lock_owner and stateid Definition
             8.1.4.  Use of the stateid and Locking
             8.1.5.  Sequencing of Lock Requests
             8.1.6.  Recovery from Replayed Requests
             8.1.7.  Releasing lock_owner State
             8.1.8.  Use of Open Confirmation
       8.2.  Lock Ranges
       8.3.  Upgrading and Downgrading Locks
       8.4.  Blocking Locks
       8.5.  Lease Renewal
       8.6.  Crash Recovery
             8.6.1.  Client Failure and Recovery
             8.6.2.  Server Failure and Recovery
             8.6.3.  Network Partitions and Recovery
       8.7.  Recovery from a Lock Request Timeout or Abort
       8.8.  Server Revocation of Locks
       8.9.  Share Reservations
       8.10. OPEN/CLOSE Operations
             8.10.1.  Close and Retention of State Information
       8.11. Open Upgrade and Downgrade
       8.12. Short and Long Leases
       8.13. Clocks, Propagation Delay, and Calculating Lease Expiration
       8.14. Migration, Replication and State
             8.14.1.  Migration and State
             8.14.2.  Replication and State
             8.14.3.  Notification of Migrated Lease
             8.14.4.  Migration and the Lease_time Attribute
   9.  Client-Side Caching
       9.1.  Performance Challenges for Client-Side Caching
       9.2.  Delegation and Callbacks
             9.2.1.  Delegation Recovery
       9.3.  Data Caching
             9.3.1.  Data Caching and OPENs
             9.3.2.  Data Caching and File Locking
             9.3.3.  Data Caching and Mandatory File Locking
             9.3.4.  Data Caching and File Identity
       9.4.  Open Delegation
             9.4.1.  Open Delegation and Data Caching
             9.4.2.  Open Delegation and File Locks
             9.4.3.  Handling of CB_GETATTR
             9.4.4.  Recall of Open Delegation
             9.4.5.  Clients that Fail to Honor Delegation Recalls
             9.4.6.  Delegation Revocation
       9.5.  Data Caching and Revocation
             9.5.1.  Revocation Recovery for Write Open Delegation
       9.6.  Attribute Caching
       9.7.  Data and Metadata Caching and Memory Mapped Files
       9.8.  Name Caching
       9.9.  Directory Caching
   10. Minor Versioning
   11. Internationalization
       11.1. Stringprep profile for the utf8str_cs type
             11.1.1.  Intended applicability of the nfs4_cs_prep profile
             11.1.2.  Character repertoire of nfs4_cs_prep
             11.1.3.  Mapping used by nfs4_cs_prep
             11.1.4.  Normalization used by nfs4_cs_prep
             11.1.5.  Prohibited output for nfs4_cs_prep
             11.1.6.  Bidirectional output for nfs4_cs_prep
       11.2. Stringprep profile for the utf8str_cis type
             11.2.1.  Intended applicability of the nfs4_cis_prep profile
             11.2.2.  Character repertoire of nfs4_cis_prep
             11.2.3.  Mapping used by nfs4_cis_prep
             11.2.4.  Normalization used by nfs4_cis_prep
             11.2.5.  Prohibited output for nfs4_cis_prep
             11.2.6.  Bidirectional output for nfs4_cis_prep
       11.3. Stringprep profile for the utf8str_mixed type
             11.3.1.  Intended applicability of the nfs4_mixed_prep profile
             11.3.2.  Character repertoire of nfs4_mixed_prep
             11.3.3.  Mapping used by nfs4_mixed_prep
             11.3.4.  Normalization used by nfs4_mixed_prep
             11.3.5.  Prohibited output for nfs4_mixed_prep
             11.3.6.  Bidirectional output for nfs4_mixed_prep
       11.4. UTF-8 Related Errors
   12. Error Definitions
   13. NFS version 4 Requests
       13.1. Compound Procedure
       13.2. Evaluation of a Compound Request
       13.3. Synchronous Modifying Operations
       13.4. Operation Values
   14. NFS version 4 Procedures
       14.1. Procedure 0: NULL - No Operation
       14.2. Procedure 1: COMPOUND - Compound Operations
             14.2.1.  Operation 3: ACCESS - Check Access Rights
             14.2.2.  Operation 4: CLOSE - Close File
             14.2.3.  Operation 5: COMMIT - Commit Cached Data
             14.2.4.  Operation 6: CREATE - Create a Non-Regular File Object
             14.2.5.  Operation 7: DELEGPURGE - Purge Delegations Awaiting Recovery
             14.2.6.  Operation 8: DELEGRETURN - Return Delegation
             14.2.7.  Operation 9: GETATTR - Get Attributes
             14.2.8.  Operation 10: GETFH - Get Current Filehandle
             14.2.9.  Operation 11: LINK - Create Link to a File
             14.2.10. Operation 12: LOCK - Create Lock
             14.2.11. Operation 13: LOCKT - Test For Lock
             14.2.12. Operation 14: LOCKU - Unlock File
             14.2.13. Operation 15: LOOKUP - Lookup Filename
             14.2.14. Operation 16: LOOKUPP - Lookup Parent Directory
             14.2.15. Operation 17: NVERIFY - Verify Difference in Attributes
             14.2.16. Operation 18: OPEN - Open a Regular File
             14.2.17. Operation 19: OPENATTR - Open Named Attribute Directory
             14.2.18. Operation 20: OPEN_CONFIRM - Confirm Open
             14.2.19. Operation 21: OPEN_DOWNGRADE - Reduce Open File Access
             14.2.20. Operation 22: PUTFH - Set Current Filehandle
             14.2.21. Operation 23: PUTPUBFH - Set Public Filehandle
             14.2.22. Operation 24: PUTROOTFH - Set Root Filehandle
             14.2.23. Operation 25: READ - Read from File
             14.2.24. Operation 26: READDIR - Read Directory
             14.2.25. Operation 27: READLINK - Read Symbolic Link
             14.2.26. Operation 28: REMOVE - Remove Filesystem Object
             14.2.27. Operation 29: RENAME - Rename Directory Entry
             14.2.28. Operation 30: RENEW - Renew a Lease
             14.2.29. Operation 31: RESTOREFH - Restore Saved Filehandle
             14.2.30. Operation 32: SAVEFH - Save Current Filehandle
             14.2.31. Operation 33: SECINFO - Obtain Available Security
             14.2.32. Operation 34: SETATTR - Set Attributes
             14.2.33. Operation 35: SETCLIENTID - Negotiate Clientid
             14.2.34. Operation 36: SETCLIENTID_CONFIRM - Confirm Clientid
             14.2.35. Operation 37: VERIFY - Verify Same Attributes
             14.2.36. Operation 38: WRITE - Write to File
             14.2.37. Operation 39: RELEASE_LOCKOWNER - Release Lockowner State
             14.2.38. Operation 10044: ILLEGAL - Illegal operation
   15. NFS version 4 Callback Procedures
       15.1. Procedure 0: CB_NULL - No Operation
       15.2. Procedure 1: CB_COMPOUND - Compound Operations
             15.2.1.  Operation 3: CB_GETATTR - Get Attributes
             15.2.2.  Operation 4: CB_RECALL - Recall an Open Delegation
             15.2.3.  Operation 10044: CB_ILLEGAL - Illegal Callback Operation
   16. Security Considerations
   17. IANA Considerations
       17.1. Named Attribute Definition
       17.2. ONC RPC Network Identifiers (netids)
   18. RPC definition file
   19. Acknowledgements
   20. Normative References
   21. Informative References
   22. Authors' Information
       22.1. Editor's Address
       22.2. Authors' Addresses
   23. Full Copyright Statement

1. Introduction

1.1. Changes since RFC 3010

This definition of the NFS version 4 protocol replaces and obsoletes the definition present in [RFC3010]. While portions of the two documents have remained the same, there have been substantive changes in others. The changes made between [RFC3010] and this document represent implementation experience and further review of the protocol. While some modifications were made for ease of implementation or clarification, most updates address errors or situations where the [RFC3010] definition was untenable.

The following list is not inclusive of all changes, but presents some of the most notable changes or additions:

o  The state model has added an open_owner4 identifier. This was done to accommodate POSIX-based clients and the model they use for file locking. For POSIX clients, an open_owner4 would correspond to a file descriptor potentially shared amongst a set of processes, and the lock_owner4 identifier would correspond to a process that is locking a file.

o  Clarifications and error conditions were added for the handling of the owner and group attributes. Since these attributes are string based (as opposed to the numeric uid/gid of previous versions of NFS), translations may not be available, hence the changes made.

o  Clarifications for the ACL and mode attributes to address evaluation and partial support.

o  For identifiers that are defined as XDR opaque, limits were set on their size.

o  Added the mounted_on_fileid attribute to allow POSIX clients to correctly construct local mounts.

o  Modified the SETCLIENTID/SETCLIENTID_CONFIRM operations to deal correctly with confirmation details along with adding the ability to specify new client callback information. Also added clarification of the callback information itself.
o  Added a new operation, RELEASE_LOCKOWNER, to enable notifying the server that a lock_owner4 will no longer be used by the client.

o  RENEW operation changes to identify the client correctly and allow for additional error returns.

o  Verified error return possibilities for all operations.

o  Removed use of the pathname4 data type from LOOKUP and OPEN in favor of having the client construct a sequence of LOOKUP operations to achieve the same effect.

o  Clarification of the internationalization issues and adoption of the new stringprep profile framework.

1.2. NFS Version 4 Goals

The NFS version 4 protocol is a further revision of the NFS protocol defined already by versions 2 [RFC1094] and 3 [RFC1813]. It retains the essential characteristics of previous versions: design for easy recovery; independence from transport protocols, operating systems, and filesystems; simplicity; and good performance. The NFS version 4 revision has the following goals:

o  Improved access and good performance on the Internet. The protocol is designed to transit firewalls easily, perform well where latency is high and bandwidth is low, and scale to very large numbers of clients per server.

o  Strong security with negotiation built into the protocol. The protocol builds on the work of the ONCRPC working group in supporting the RPCSEC_GSS protocol. Additionally, the NFS version 4 protocol provides a mechanism that allows clients and servers to negotiate security, and requires clients and servers to support a minimal set of security schemes.

o  Good cross-platform interoperability. The protocol features a filesystem model that provides a useful, common set of features that does not unduly favor one filesystem or operating system over another.

o  Designed for protocol extensions. The protocol is designed to accept standard extensions that do not compromise backward compatibility.

1.3. Inconsistencies of this Document with Section 18

Section 18, RPC Definition File, contains the definitions in XDR description language of the constructs used by the protocol. Prior to Section 18, several of the constructs are reproduced for purposes of explanation. The reader is warned of the possibility of errors in the reproduced constructs outside of Section 18. For any part of the document that is inconsistent with Section 18, Section 18 is to be considered authoritative.

1.4. Overview of NFS version 4 Features

To provide a reasonable context for the reader, the major features of the NFS version 4 protocol will be reviewed in brief. This is done to provide an appropriate context both for the reader who is familiar with the previous versions of the NFS protocol and for the reader who is new to the NFS protocols. For the reader new to the NFS protocols, some fundamental knowledge is still expected. The reader should be familiar with the XDR and RPC protocols as described in [RFC1831] and [RFC1832]. A basic knowledge of filesystems and distributed filesystems is expected as well.

1.4.1. RPC and Security

As with previous versions of NFS, the External Data Representation (XDR) and Remote Procedure Call (RPC) mechanisms used for the NFS version 4 protocol are those defined in [RFC1831] and [RFC1832]. To meet end-to-end security requirements, the RPCSEC_GSS framework [RFC2203] will be used to extend the basic RPC security.
With the use of RPCSEC_GSS, various mechanisms can be provided to offer authentication, integrity, and privacy to the NFS version 4 protocol. Kerberos V5 will be used as described in [RFC1964] to provide one security framework. The LIPKEY GSS-API mechanism described in [RFC2847] will be used to provide for the use of user password and server public key by the NFS version 4 protocol. With the use of RPCSEC_GSS, other mechanisms may also be specified and used for NFS version 4 security.

To enable in-band security negotiation, the NFS version 4 protocol has added a new operation which provides the client a method of querying the server about its policies regarding which security mechanisms must be used for access to the server's filesystem resources. With this, the client can securely match the security mechanism that meets the policies specified at both the client and server.

1.4.2. Procedure and Operation Structure

A significant departure from the previous versions of the NFS protocol is the introduction of the COMPOUND procedure. For the NFS version 4 protocol, there are two RPC procedures, NULL and COMPOUND. The COMPOUND procedure is defined in terms of operations, and these operations correspond more closely to the traditional NFS procedures.

With the use of the COMPOUND procedure, the client is able to build simple or complex requests. These COMPOUND requests allow for a reduction in the number of RPCs needed for logical filesystem operations. For example, without previous contact with a server, a client will be able to read data from a file in one request by combining LOOKUP, OPEN, and READ operations in a single COMPOUND RPC. With previous versions of the NFS protocol, this type of single request was not possible.

The model used for COMPOUND is very simple. There is no logical OR or ANDing of operations. The operations combined within a COMPOUND request are evaluated in order by the server. Once an operation returns a failing result, the evaluation ends and the results of all evaluated operations are returned to the client.

The NFS version 4 protocol continues to have the client refer to a file or directory at the server by a "filehandle". The COMPOUND procedure has a method of passing a filehandle from one operation to another within the sequence of operations. There is a concept of a "current filehandle" and a "saved filehandle". Most operations use the "current filehandle" as the filesystem object to operate upon. The "saved filehandle" is used as temporary filehandle storage within a COMPOUND procedure as well as an additional operand for certain operations.
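The evaluation model can be illustrated with a short, non-normative sketch. The following C fragment assumes hypothetical request and result types and a per-operation evaluate() dispatcher (none of which are part of the protocol definition; the authoritative types appear in Section 18); it shows only the in-order, stop-on-first-failure control flow described above:

   #include <stddef.h>

   /* Hypothetical types, for illustration only. */
   typedef int nfsstat4;
   #define NFS4_OK 0

   typedef struct {
       int opcode;              /* e.g., OP_PUTFH, OP_LOOKUP, OP_READ */
       /* ... per-operation arguments ... */
   } nfs_op4;

   typedef struct {
       nfsstat4 status;
       /* ... per-operation results ... */
   } nfs_resop4;

   /* Assumed dispatcher: evaluates one operation against the current
    * and saved filehandle state and fills in its result. */
   extern nfsstat4 evaluate(const nfs_op4 *op, nfs_resop4 *res);

   /* Evaluate a COMPOUND request: operations run strictly in order,
    * and evaluation stops at the first failure.  The results of every
    * operation evaluated so far, including the failing one, are
    * returned to the client. */
   size_t compound_eval(const nfs_op4 *ops, size_t nops,
                        nfs_resop4 *results)
   {
       size_t i;
       for (i = 0; i < nops; i++) {
           results[i].status = evaluate(&ops[i], &results[i]);
           if (results[i].status != NFS4_OK)
               return i + 1;    /* number of results to send back */
       }
       return nops;
   }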
1.4.3. Filesystem Model

The general filesystem model used for the NFS version 4 protocol is the same as previous versions. The server filesystem is hierarchical, with the regular files contained within being treated as opaque byte streams. In a slight departure, file and directory names are encoded with UTF-8 to deal with the basics of internationalization.

The NFS version 4 protocol does not require a separate protocol to provide for the initial mapping between path name and filehandle. Instead of using the older MOUNT protocol for this mapping, the server provides a ROOT filehandle that represents the logical root or top of the filesystem tree provided by the server. The server provides multiple filesystems by gluing them together with pseudo filesystems. These pseudo filesystems provide for potential gaps in the path names between real filesystems.

1.4.3.1. Filehandle Types

In previous versions of the NFS protocol, the filehandle provided by the server was guaranteed to be valid or persistent for the lifetime of the filesystem object to which it referred. For some server implementations, this persistence requirement has been difficult to meet. For the NFS version 4 protocol, this requirement has been relaxed by introducing another type of filehandle, volatile. With persistent and volatile filehandle types, the server implementation can match the abilities of the filesystem at the server along with the operating environment. The client will have knowledge of the type of filehandle being provided by the server and can be prepared to deal with the semantics of each.

1.4.3.2. Attribute Types

The NFS version 4 protocol introduces three classes of filesystem or file attributes. Like the additional filehandle type, the classification of file attributes has been done to ease server implementations along with extending the overall functionality of the NFS protocol. This attribute model is structured to be extensible such that new attributes can be introduced in minor revisions of the protocol without requiring significant rework.

The three classifications are: mandatory, recommended, and named attributes. This is a significant departure from the previous attribute model used in the NFS protocol. Previously, the attributes for the filesystem and file objects were a fixed set of mainly UNIX attributes. If the server or client did not support a particular attribute, it would have to simulate the attribute the best it could.

Mandatory attributes are the minimal set of file or filesystem attributes that must be provided by the server and must be properly represented by the server. Recommended attributes represent different filesystem types and operating environments. The recommended attributes will allow for better interoperability and the inclusion of more operating environments. The mandatory and recommended attribute sets are traditional file or filesystem attributes. The third type of attribute is the named attribute. A named attribute is an opaque byte stream that is associated with a directory or file and referred to by a string name. Named attributes are meant to be used by client applications as a method to associate application-specific data with a regular file or directory.

One significant addition to the recommended set of file attributes is the Access Control List (ACL) attribute. This attribute provides for directory and file access control beyond the model used in previous versions of the NFS protocol. The ACL definition allows for specification of user- and group-level access control.

1.4.3.3. Filesystem Replication and Migration

With the use of a special file attribute, the ability to migrate or replicate server filesystems is enabled within the protocol. The filesystem locations attribute provides a method for the client to probe the server about the location of a filesystem. In the event of a migration of a filesystem, the client will receive an error when operating on the filesystem, and it can then query the server as to the new filesystem location.
Similar steps are used for replication: the client is able to query the server for the multiple available locations of a particular filesystem. From this information, the client can use its own policies to access the appropriate filesystem location.

1.4.4. OPEN and CLOSE

The NFS version 4 protocol introduces OPEN and CLOSE operations. The OPEN operation provides a single point where file lookup, creation, and share semantics can be combined. The CLOSE operation provides for the release of state accumulated by OPEN.

1.4.5. File Locking

With the NFS version 4 protocol, the support for byte-range file locking is part of the NFS protocol. The file locking support is structured so that an RPC callback mechanism is not required. This is a departure from the previous versions of the NFS file locking protocol, Network Lock Manager (NLM). The state associated with file locks is maintained at the server under a lease-based model. The server defines a single lease period for all state held by an NFS client. If the client does not renew its lease within the defined period, all state associated with the client's lease may be released by the server. The client may renew its lease with use of the RENEW operation or implicitly by use of other operations (primarily READ).

1.4.6. Client Caching and Delegation

The file, attribute, and directory caching for the NFS version 4 protocol is similar to previous versions. Attributes and directory information are cached for a duration determined by the client. At the end of a predefined timeout, the client will query the server to see if the related filesystem object has been updated.

For file data, the client checks its cache validity when the file is opened. A query is sent to the server to determine if the file has been changed. Based on this information, the client determines if the data cache for the file should be kept or released. Also, when the file is closed, any modified data is written to the server.

If an application wants to serialize access to file data, file locking of the file data ranges in question should be used.

The major addition to NFS version 4 in the area of caching is the ability of the server to delegate certain responsibilities to the client. When the server grants a delegation for a file to a client, the client is guaranteed certain semantics with respect to the sharing of that file with other clients. At OPEN, the server may provide the client either a read or write delegation for the file. If the client is granted a read delegation, it is assured that no other client has the ability to write to the file for the duration of the delegation. If the client is granted a write delegation, the client is assured that no other client has read or write access to the file.

Delegations can be recalled by the server. If another client requests access to the file in such a way that the access conflicts with the granted delegation, the server is able to notify the initial client and recall the delegation. This requires that a callback path exist between the server and client. If this callback path does not exist, then delegations cannot be granted. The essence of a delegation is that it allows the client to locally service operations such as OPEN, CLOSE, LOCK, LOCKU, READ, and WRITE without immediate interaction with the server.
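As a non-normative illustration of the open-time cache validation and delegation behavior described in this section, the following C sketch uses the change attribute (defined later in this document) to decide whether cached file data may be kept; the structure and helper functions are hypothetical stand-ins for real client machinery:

   #include <stdbool.h>
   #include <stdint.h>

   typedef uint64_t changeid4;   /* the change attribute's type */

   /* Hypothetical client-side state and helpers, for illustration. */
   struct file_cache {
       bool      has_delegation; /* read or write delegation held? */
       changeid4 change;         /* change attribute when data was cached */
   };
   extern changeid4 getattr_change(const char *path); /* GETATTR RPC */
   extern void      discard_cached_data(struct file_cache *fc);

   /* On OPEN: a held delegation guarantees no conflicting writers, so
    * the cache is valid without contacting the server; otherwise the
    * change attribute is fetched and compared to decide whether cached
    * data for the file should be kept or released. */
   void validate_on_open(struct file_cache *fc, const char *path)
   {
       if (fc->has_delegation)
           return;                      /* no server round trip needed */
       if (getattr_change(path) != fc->change)
           discard_cached_data(fc);     /* file changed at the server */
   }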
1.5. General Definitions

The following definitions are provided for the purpose of providing an appropriate context for the reader.

Client
   The "client" is the entity that accesses the NFS server's resources. The client may be an application which contains the logic to access the NFS server directly. The client may also be the traditional operating system client that provides remote filesystem services for a set of applications.

   In the case of file locking, the client is the entity that maintains a set of locks on behalf of one or more applications. This client is responsible for crash or failure recovery for those locks it manages. Note that multiple clients may share the same transport, and multiple clients may exist on the same network node.

Clientid
   A 64-bit quantity used as a unique, short-hand reference to a client-supplied Verifier and ID. The server is responsible for supplying the Clientid.

Lease
   An interval of time defined by the server for which the client is irrevocably granted a lock. At the end of a lease period the lock may be revoked if the lease has not been extended. The lock must be revoked if a conflicting lock has been granted after the lease interval.

   All leases granted by a server have the same fixed interval. Note that the fixed interval was chosen to alleviate the expense a server would have in maintaining state about variable-length leases across server failures.

Lock
   The term "lock" is used to refer to both record (byte-range) locks as well as share reservations unless specifically stated otherwise.

Server
   The "server" is the entity responsible for coordinating client access to a set of filesystems.

Stable Storage
   NFS version 4 servers must be able to recover without data loss from multiple power failures (including cascading power failures, that is, several power failures in quick succession), operating system failures, and hardware failure of components other than the storage medium itself (for example, disk, nonvolatile RAM).

   Some examples of stable storage that are allowable for an NFS server include:

   1. Media commit of data, that is, the modified data has been successfully written to the disk media, for example, the disk platter.

   2. An immediate reply disk drive with battery-backed on-drive intermediate storage or uninterruptible power system (UPS).

   3. Server commit of data with battery-backed intermediate storage and recovery software.

   4. Cache commit with uninterruptible power system (UPS) and recovery software.

Stateid
   A 128-bit quantity returned by a server that uniquely defines the open and locking state provided by the server for a specific open or lock owner for a specific file. Stateids composed of all bits 0 or all bits 1 have special meaning and are reserved values.

Verifier
   A 64-bit quantity generated by the client that the server can use to determine if the client has restarted and lost all previous lock state.

2. Protocol Data Types

The syntax and semantics to describe the data types of the NFS version 4 protocol are defined in the XDR [RFC1832] and RPC [RFC1831] documents. The next sections build upon the XDR data types to define types and structures specific to this protocol.
2.1. Basic Data Types

   Data Type      Definition
   ____________________________________________________________________

   int32_t        typedef int int32_t;

   uint32_t       typedef unsigned int uint32_t;

   int64_t        typedef hyper int64_t;

   uint64_t       typedef unsigned hyper uint64_t;

   attrlist4      typedef opaque attrlist4<>;
                  Used for file/directory attributes.

   bitmap4        typedef uint32_t bitmap4<>;
                  Used in attribute array encoding.

   changeid4      typedef uint64_t changeid4;
                  Used in definition of change_info.

   clientid4      typedef uint64_t clientid4;
                  Shorthand reference to client identification.

   component4     typedef utf8str_cs component4;
                  Represents path name components.

   count4         typedef uint32_t count4;
                  Various count parameters (READ, WRITE, COMMIT).

   length4        typedef uint64_t length4;
                  Describes LOCK lengths.

   linktext4      typedef utf8str_cs linktext4;
                  Symbolic link contents.

   mode4          typedef uint32_t mode4;
                  Mode attribute data type.

   nfs_cookie4    typedef uint64_t nfs_cookie4;
                  Opaque cookie value for READDIR.

   nfs_fh4        typedef opaque nfs_fh4<NFS4_FHSIZE>;
                  Filehandle definition; NFS4_FHSIZE is defined as 128.

   nfs_ftype4     enum nfs_ftype4;
                  Various defined file types.

   nfsstat4       enum nfsstat4;
                  Return value for operations.

   offset4        typedef uint64_t offset4;
                  Various offset designations (READ, WRITE, LOCK,
                  COMMIT).

   pathname4      typedef component4 pathname4<>;
                  Represents path name for LOOKUP, OPEN and others.

   qop4           typedef uint32_t qop4;
                  Quality of protection designation in SECINFO.

   sec_oid4       typedef opaque sec_oid4<>;
                  Security Object Identifier. The sec_oid4 data type
                  is not really opaque. Instead, it contains an ASN.1
                  OBJECT IDENTIFIER as used by GSS-API in the mech_type
                  argument to GSS_Init_sec_context. See [RFC2743] for
                  details.

   seqid4         typedef uint32_t seqid4;
                  Sequence identifier used for file locking.

   utf8string     typedef opaque utf8string<>;
                  UTF-8 encoding for strings.

   utf8str_cis    typedef utf8string utf8str_cis;
                  Case-insensitive UTF-8 string.

   utf8str_cs     typedef utf8string utf8str_cs;
                  Case-sensitive UTF-8 string.

   utf8str_mixed  typedef utf8string utf8str_mixed;
                  UTF-8 string with a case-sensitive prefix and a
                  case-insensitive suffix.

   verifier4      typedef opaque verifier4[NFS4_VERIFIER_SIZE];
                  Verifier used for various operations (COMMIT, CREATE,
                  OPEN, READDIR, SETCLIENTID, SETCLIENTID_CONFIRM,
                  WRITE). NFS4_VERIFIER_SIZE is defined as 8.

2.2. Structured Data Types

nfstime4

   struct nfstime4 {
       int64_t   seconds;
       uint32_t  nseconds;
   };

The nfstime4 structure gives the number of seconds and nanoseconds since midnight or 0 hour January 1, 1970 Coordinated Universal Time (UTC). Values greater than zero for the seconds field denote dates after the 0 hour January 1, 1970. Values less than zero for the seconds field denote dates before the 0 hour January 1, 1970. In both cases, the nseconds field is to be added to the seconds field for the final time representation. For example, if the time to be represented is one-half second before 0 hour January 1, 1970, the seconds field would have a value of negative one (-1) and the nseconds field would have a value of one-half second (500000000). Values greater than 999,999,999 for nseconds are considered invalid.

This data type is used to pass time and date information. A server converts to and from its local representation of time when processing time values, preserving as much accuracy as possible. If the precision of timestamps stored for a filesystem object is less than that defined by this protocol, loss of precision can occur. An adjunct time maintenance protocol is recommended to reduce client and server time skew.
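To make the negative-seconds convention concrete, the following non-normative C helper encodes a signed nanosecond offset from the epoch into an nfstime4, keeping nseconds in the valid range 0..999999999 so that it is always added to the seconds field; it reproduces the "-1 seconds, 500000000 nseconds" example above:

   #include <stdint.h>
   #include <stdio.h>

   /* Mirrors the XDR structure above; illustration only. */
   struct nfstime4 {
       int64_t  seconds;
       uint32_t nseconds;
   };

   /* Encode a signed nanosecond offset from 1970-01-01T00:00:00 UTC.
    * For times before the epoch, seconds is rounded toward negative
    * infinity so that nseconds remains non-negative. */
   struct nfstime4 nfstime4_from_ns(int64_t ns)
   {
       struct nfstime4 t;
       int64_t rem;
       t.seconds = ns / 1000000000;
       rem = ns % 1000000000;
       if (rem < 0) {            /* C truncates toward zero; adjust */
           t.seconds -= 1;
           rem += 1000000000;
       }
       t.nseconds = (uint32_t)rem;
       return t;
   }

   int main(void)
   {
       /* One-half second before the epoch, per the example above. */
       struct nfstime4 t = nfstime4_from_ns(-500000000);
       printf("%lld %u\n", (long long)t.seconds, t.nseconds);
       /* prints: -1 500000000 */
       return 0;
   }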
time_how4

   enum time_how4 {
       SET_TO_SERVER_TIME4 = 0,
       SET_TO_CLIENT_TIME4 = 1
   };

settime4

   union settime4 switch (time_how4 set_it) {
   case SET_TO_CLIENT_TIME4:
       nfstime4  time;
   default:
       void;
   };

The above definitions are used as the attribute definitions to set time values. If set_it is SET_TO_SERVER_TIME4, then the server uses its local representation of time for the time value.

specdata4

   struct specdata4 {
       uint32_t  specdata1;  /* major device number */
       uint32_t  specdata2;  /* minor device number */
   };

This data type represents additional information for the device file types NF4CHR and NF4BLK.

fsid4

   struct fsid4 {
       uint64_t  major;
       uint64_t  minor;
   };

This type is the filesystem identifier that is used as a mandatory attribute.

fs_location4

   struct fs_location4 {
       utf8str_cis  server<>;
       pathname4    rootpath;
   };

fs_locations4

   struct fs_locations4 {
       pathname4     fs_root;
       fs_location4  locations<>;
   };

The fs_location4 and fs_locations4 data types are used for the fs_locations recommended attribute which is used for migration and replication support.

fattr4

   struct fattr4 {
       bitmap4    attrmask;
       attrlist4  attr_vals;
   };

The fattr4 structure is used to represent file and directory attributes.

The bitmap is a counted array of 32-bit integers used to contain bit values. The position of the integer in the array that contains bit n can be computed from the expression (n / 32), and its bit within that integer is (n mod 32).

       0            1
   +-----------+-----------+-----------+--
   |  count    | 31  ..  0 | 63  .. 32 |
   +-----------+-----------+-----------+--

change_info4

   struct change_info4 {
       bool       atomic;
       changeid4  before;
       changeid4  after;
   };

This structure is used with the CREATE, LINK, REMOVE, and RENAME operations to let the client know the value of the change attribute for the directory in which the target filesystem object resides.

clientaddr4

   struct clientaddr4 {
       /* see struct rpcb in RFC 1833 */
       string  r_netid<>;  /* network id */
       string  r_addr<>;   /* universal address */
   };

The clientaddr4 structure is used as part of the SETCLIENTID operation, either to specify the address of the client that is using a clientid or as part of the callback registration. The r_netid and r_addr fields are specified in [RFC1833], but they are underspecified in [RFC1833] as far as what they should look like for specific protocols.

For TCP over IPv4 and for UDP over IPv4, the format of r_addr is the US-ASCII string:

   h1.h2.h3.h4.p1.p2

The prefix, "h1.h2.h3.h4", is the standard textual form for representing an IPv4 address, which is always four octets long. Assuming big-endian ordering, h1, h2, h3, and h4 are, respectively, the first through fourth octets, each converted to ASCII-decimal. Assuming big-endian ordering, p1 and p2 are, respectively, the first and second octets of the port, each converted to ASCII-decimal. For example, if a host, in big-endian order, has an address of 0x0A010307 and there is a service listening on, in big-endian order, port 0x020F (decimal 527), then the complete universal address is "10.1.3.7.2.15".

For TCP over IPv4 the value of r_netid is the string "tcp". For UDP over IPv4 the value of r_netid is the string "udp".

For TCP over IPv6 and for UDP over IPv6, the format of r_addr is the US-ASCII string:

   x1:x2:x3:x4:x5:x6:x7:x8.p1.p2

The suffix "p1.p2" is the service port, and is computed the same way as with universal addresses for TCP and UDP over IPv4. The prefix, "x1:x2:x3:x4:x5:x6:x7:x8", is the standard textual form for representing an IPv6 address as defined in Section 2.2 of [RFC2373]. Additionally, the two alternative forms specified in Section 2.2 of [RFC2373] are also acceptable.

For TCP over IPv6 the value of r_netid is the string "tcp6". For UDP over IPv6 the value of r_netid is the string "udp6".
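The IPv4 universal address form lends itself to direct illustration. The following non-normative C helper (the function name and calling convention are hypothetical; the address and port are taken in host byte order for simplicity) formats an r_addr string and reproduces the "10.1.3.7.2.15" example above:

   #include <stdint.h>
   #include <stdio.h>

   /* Format an IPv4 universal address "h1.h2.h3.h4.p1.p2" as used in
    * clientaddr4.r_addr.  addr and port are in host byte order. */
   void format_uaddr4(uint32_t addr, uint16_t port, char *buf, size_t len)
   {
       snprintf(buf, len, "%u.%u.%u.%u.%u.%u",
                (addr >> 24) & 0xff, (addr >> 16) & 0xff,
                (addr >> 8) & 0xff,  addr & 0xff,
                (port >> 8) & 0xff,  port & 0xff);
   }

   int main(void)
   {
       char buf[32];
       /* Host 0x0A010307, port 0x020F (527), as in the text above. */
       format_uaddr4(0x0A010307, 0x020F, buf, sizeof(buf));
       printf("%s\n", buf);     /* prints "10.1.3.7.2.15" */
       return 0;
   }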
cb_client4

   struct cb_client4 {
       unsigned int  cb_program;
       clientaddr4   cb_location;
   };

This structure is used by the client to inform the server of its callback address; it includes the program number and client address.

nfs_client_id4

   struct nfs_client_id4 {
       verifier4  verifier;
       opaque     id<NFS4_OPAQUE_LIMIT>;
   };

This structure is part of the arguments to the SETCLIENTID operation. NFS4_OPAQUE_LIMIT is defined as 1024.

open_owner4

   struct open_owner4 {
       clientid4  clientid;
       opaque     owner<NFS4_OPAQUE_LIMIT>;
   };

This structure is used to identify the owner of open state. NFS4_OPAQUE_LIMIT is defined as 1024.

lock_owner4

   struct lock_owner4 {
       clientid4  clientid;
       opaque     owner<NFS4_OPAQUE_LIMIT>;
   };

This structure is used to identify the owner of file locking state. NFS4_OPAQUE_LIMIT is defined as 1024.

open_to_lock_owner4

   struct open_to_lock_owner4 {
       seqid4       open_seqid;
       stateid4     open_stateid;
       seqid4       lock_seqid;
       lock_owner4  lock_owner;
   };

This structure is used for the first LOCK operation done for an open_owner4. It provides both the open_stateid and lock_owner such that the transition is made from a valid open_stateid sequence to that of the new lock_stateid sequence. Using this mechanism avoids the confirmation of the lock_owner/lock_seqid pair since it is tied to established state in the form of the open_stateid/open_seqid.

stateid4

   struct stateid4 {
       uint32_t  seqid;
       opaque    other[12];
   };

This structure is used for the various state sharing mechanisms between the client and server. For the client, this data structure is read-only. The starting value of the seqid field is undefined. The server is required to increment the seqid field monotonically at each transition of the stateid. This is important since the client will inspect the seqid in OPEN stateids to determine the order of OPEN processing done by the server.
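The following non-normative C sketch shows how a client might apply these rules: the opaque "other" field identifies the state, and the seqid orders transitions of that state. Wraparound of the seqid is ignored here for brevity; a production client would need to account for it:

   #include <stdbool.h>
   #include <stdint.h>
   #include <string.h>

   /* Mirrors the XDR structure above; illustration only. */
   struct stateid4 {
       uint32_t      seqid;
       unsigned char other[12];
   };

   /* Two stateids refer to the same underlying state if their opaque
    * "other" fields match byte for byte. */
   bool same_state(const struct stateid4 *a, const struct stateid4 *b)
   {
       return memcmp(a->other, b->other, sizeof(a->other)) == 0;
   }

   /* For stateids naming the same state, the larger seqid reflects
    * the later transition; a client can use this to determine the
    * order of OPEN processing done by the server. */
   bool is_newer(const struct stateid4 *a, const struct stateid4 *b)
   {
       return same_state(a, b) && a->seqid > b->seqid;
   }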
3. RPC and Security Flavor

The NFS version 4 protocol is a Remote Procedure Call (RPC) application that uses RPC version 2 and the corresponding eXternal Data Representation (XDR) as defined in [RFC1831] and [RFC1832]. The RPCSEC_GSS security flavor as defined in [RFC2203] MUST be used as the mechanism to deliver stronger security for the NFS version 4 protocol.

3.1. Ports and Transports

Historically, NFS version 2 and version 3 servers have resided on port 2049. The registered port 2049 [RFC3232] for the NFS protocol should be the default configuration. Using the registered port for NFS services means the NFS client will not need to use the RPC binding protocols as described in [RFC1833]; this will allow NFS to transit firewalls.

Where an NFS version 4 implementation supports operation over the IP network protocol, the supported transports between NFS and IP MUST be among the IETF-approved congestion control transport protocols, which include TCP and SCTP. To enhance the possibilities for interoperability, an NFS version 4 implementation MUST support operation over the TCP transport protocol, at least until such time as a standards track RFC revises this requirement to use a different IETF-approved congestion control transport protocol.

If TCP is used as the transport, the client and server SHOULD use persistent connections. This will prevent the weakening of TCP's congestion control via short-lived connections and will improve performance for the WAN environment by eliminating the need for SYN handshakes.

As noted in the Security Considerations section, the authentication model for NFS version 4 has moved from machine-based to principal-based. However, this modification of the authentication model does not imply a technical requirement to move the TCP connection management model from whole-machine-based to one based on a per-user model. In particular, NFS over TCP client implementations have traditionally multiplexed traffic for multiple users over a common TCP connection between an NFS client and server. This has been true regardless of whether the NFS client is using AUTH_SYS, AUTH_DH, RPCSEC_GSS, or any other flavor. Similarly, NFS over TCP server implementations have assumed such a model and thus scale the implementation of TCP connection management in proportion to the number of expected client machines. It is intended that NFS version 4 will not modify this connection management model. NFS version 4 clients that violate this assumption can expect scaling issues on the server and hence reduced service.

Note that for various timers, the client and server should avoid inadvertent synchronization of those timers. For further discussion of the general issue, refer to [Floyd].

3.1.1. Client Retransmission Behavior

When processing a request received over a reliable transport such as TCP, the NFS version 4 server MUST NOT silently drop the request, except if the transport connection has been broken. Given such a contract between NFS version 4 clients and servers, clients MUST NOT retry a request unless one or both of the following are true:

o  The transport connection has been broken

o  The procedure being retried is the NULL procedure

Since reliable transports, such as TCP, do not always synchronously inform a peer when the other peer has broken the connection (for example, when an NFS server reboots), the NFS version 4 client may want to actively "probe" the connection to see if it has been broken. Use of the NULL procedure is one recommended way to do so. So, when a client experiences a remote procedure call timeout (of some arbitrary implementation-specific duration), rather than retrying the remote procedure call, it could instead issue a NULL procedure call to the server. If the server has died, the transport connection break will eventually be indicated to the NFS version 4 client. The client can then reconnect, and then retry the original request. If the NULL procedure call gets a response, the connection has not broken. The client can decide to wait longer for the original request's response, or it can break the transport connection and reconnect before re-sending the original request.

For callbacks from the server to the client, the same rules apply, but the server doing the callback becomes the client, and the client receiving the callback becomes the server.
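The retransmission rules above can be summarized in code. The following non-normative C sketch relies on hypothetical transport helpers (send_request, wait_for_reply, null_probe, and so on) to show the decision flow: on a timeout, probe with the NULL procedure rather than retrying, and re-send the original request only after the connection has been broken and re-established:

   #include <stdbool.h>

   /* Hypothetical client transport helpers; not part of the protocol. */
   typedef struct rpc_conn rpc_conn;
   typedef struct rpc_req  rpc_req;
   extern void send_request(rpc_conn *c, rpc_req *r);
   extern bool wait_for_reply(rpc_conn *c, rpc_req *r); /* false on timeout */
   extern bool null_probe(rpc_conn *c);                 /* NULL procedure  */
   extern bool connection_broken(rpc_conn *c);
   extern void disconnect(rpc_conn *c);
   extern void reconnect(rpc_conn *c);

   /* Issue one request over TCP under the rules of this section:
    * never retry over an intact connection; on timeout, probe with
    * the NULL procedure instead. */
   void issue(rpc_conn *c, rpc_req *r)
   {
       send_request(c, r);
       for (;;) {
           if (wait_for_reply(c, r))
               return;                  /* reply received */
           if (connection_broken(c)) {
               reconnect(c);
               send_request(c, r);      /* retry is now permitted */
               continue;
           }
           /* Timeout on an intact connection: probe, do not retry. */
           if (null_probe(c))
               continue;                /* server alive; wait longer */
           /* The probe timed out as well: break the connection
            * ourselves, then reconnect and re-send the request. */
           disconnect(c);
           reconnect(c);
           send_request(c, r);
       }
   }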
3.2. Security Flavors

Traditional RPC implementations have included AUTH_NONE, AUTH_SYS, AUTH_DH, and AUTH_KRB4 as security flavors. With [RFC2203] an additional security flavor of RPCSEC_GSS has been introduced, which uses the functionality of GSS-API [RFC2743]. This allows for the use of various security mechanisms by the RPC layer without the additional implementation overhead of adding RPC security flavors. For NFS version 4, the RPCSEC_GSS security flavor MUST be used to enable the mandatory security mechanism. Other flavors, such as AUTH_NONE, AUTH_SYS, and AUTH_DH, MAY be implemented as well.

3.2.1. Security mechanisms for NFS version 4

The use of RPCSEC_GSS requires selection of: mechanism, quality of protection, and service (authentication, integrity, privacy). The remainder of this document will refer to these three parameters of the RPCSEC_GSS security as the security triple.

3.2.1.1. Kerberos V5 as a security triple

The Kerberos V5 GSS-API mechanism as described in [RFC1964] MUST be implemented and provide the following security triples.

   column descriptions:

   1 == number of pseudo flavor
   2 == name of pseudo flavor
   3 == mechanism's OID
   4 == mechanism's algorithm(s)
   5 == RPCSEC_GSS service

   1      2     3                    4                          5
   -----------------------------------------------------------------------
   390003 krb5  1.2.840.113554.1.2.2 DES MAC MD5                rpc_gss_svc_none
   390004 krb5i 1.2.840.113554.1.2.2 DES MAC MD5                rpc_gss_svc_integrity
   390005 krb5p 1.2.840.113554.1.2.2 DES MAC MD5 for integrity, rpc_gss_svc_privacy
                                     and 56 bit DES for privacy

Note that the pseudo flavor is presented here as a mapping aid to the implementor. Because this NFS protocol includes a method to negotiate security and it understands the GSS-API mechanism, the pseudo flavor is not needed. The pseudo flavor is needed for NFS version 3 since the security negotiation is done via the MOUNT protocol.

For a discussion of NFS' use of RPCSEC_GSS and Kerberos V5, please see [RFC2623].

Users and implementors are warned that 56-bit DES is no longer considered state of the art in terms of resistance to brute force attacks. Once a revision to [RFC1964] is available that adds support for AES, implementors are urged to incorporate AES into their NFSv4 over Kerberos V5 protocol stacks, and users are similarly urged to migrate to the use of AES.

3.2.1.2. LIPKEY as a security triple

The LIPKEY GSS-API mechanism as described in [RFC2847] MUST be implemented and provide the following security triples. The definition of the columns matches the previous subsection "Kerberos V5 as a security triple".

   1      2        3              4           5
   -----------------------------------------------------------------
   390006 lipkey   1.3.6.1.5.5.9  negotiated  rpc_gss_svc_none
   390007 lipkey-i 1.3.6.1.5.5.9  negotiated  rpc_gss_svc_integrity
   390008 lipkey-p 1.3.6.1.5.5.9  negotiated  rpc_gss_svc_privacy

The mechanism algorithm is listed as "negotiated". This is because LIPKEY is layered on SPKM-3, and in SPKM-3 [RFC2847] the confidentiality and integrity algorithms are negotiated. Since SPKM-3 specifies HMAC-MD5 for integrity as MANDATORY, 128-bit cast5CBC for privacy as MANDATORY, and further specifies that HMAC-MD5 and cast5CBC MUST be listed first before weaker algorithms, specifying "negotiated" in column 4 does not impair interoperability. In the event an SPKM-3 peer does not support the mandatory algorithms, the other peer is free to accept or reject the GSS-API context creation.
Because SPKM-3 negotiates the algorithms, subsequent calls to LIPKEY's GSS_Wrap() and GSS_GetMIC() by RPCSEC_GSS will use a quality of protection value of 0 (zero). See section 5.2 of [RFC2025] for an explanation.

LIPKEY uses SPKM-3 to create a secure channel in which to pass a user name and password from the client to the server. Once the user name and password have been accepted by the server, calls to the LIPKEY context are redirected to the SPKM-3 context. See [RFC2847] for more details.

3.2.1.3. SPKM-3 as a security triple

The SPKM-3 GSS-API mechanism as described in [RFC2847] MUST be implemented and provide the following security triples. The definition of the columns matches the previous subsection "Kerberos V5 as a security triple".

   1      2      3                4           5
   --------------------------------------------------------------
   390009 spkm3  1.3.6.1.5.5.1.3  negotiated  rpc_gss_svc_none
   390010 spkm3i 1.3.6.1.5.5.1.3  negotiated  rpc_gss_svc_integrity
   390011 spkm3p 1.3.6.1.5.5.1.3  negotiated  rpc_gss_svc_privacy

For a discussion as to why the mechanism algorithm is listed as "negotiated", see the previous section "LIPKEY as a security triple".

Because SPKM-3 negotiates the algorithms, subsequent calls to SPKM-3's GSS_Wrap() and GSS_GetMIC() by RPCSEC_GSS will use a quality of protection value of 0 (zero). See section 5.2 of [RFC2025] for an explanation.

Even though LIPKEY is layered over SPKM-3, SPKM-3 is specified as a mandatory set of triples to handle the situations where the initiator (the client) is anonymous or where the initiator has its own certificate. If the initiator is anonymous, there will not be a user name and password to send to the target (the server). If the initiator has its own certificate, then using passwords is superfluous.

3.3. Security Negotiation

With the NFS version 4 server potentially offering multiple security mechanisms, the client needs a method to determine or negotiate which mechanism is to be used for its communication with the server. The NFS server may have multiple points within its filesystem name space that are available for use by NFS clients. In turn, the NFS server may be configured such that each of these entry points may have different or multiple security mechanisms in use.

The security negotiation between client and server must be done with a secure channel to eliminate the possibility of a third party intercepting the negotiation sequence and forcing the client and server to choose a lower level of security than required or desired. See the section "Security Considerations" for further discussion.

3.3.1. SECINFO

The new SECINFO operation will allow the client to determine, on a per-filehandle basis, what security triple is to be used for server access. In general, the client will not have to use the SECINFO operation except during initial communication with the server or when the client crosses policy boundaries at the server. It is possible that the server's policies change during the client's interaction, therefore forcing the client to negotiate a new security triple.

3.3.2. Security Error

Based on the assumption that each NFS version 4 client and server must support a minimum set of security (i.e., LIPKEY, SPKM-3, and Kerberos V5, all under RPCSEC_GSS), the NFS client will start its communication with the server with one of the minimal security triples. During communication with the server, the client may receive an NFS error of NFS4ERR_WRONGSEC. This error allows the server to notify the client that the security triple currently being used is not appropriate for access to the server's filesystem resources. The client is then responsible for determining what security triples are available at the server and choosing one which is appropriate for the client. See the section for the SECINFO operation for further discussion of how the client will respond to the NFS4ERR_WRONGSEC error and use SECINFO.
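A client's reaction to NFS4ERR_WRONGSEC can be sketched as a small negotiation loop. In the following non-normative C fragment the RPC helpers and the triple-selection policy are hypothetical; only the error value is taken from the protocol (Section 12):

   #include <stdbool.h>
   #include <stddef.h>

   /* A security triple: GSS-API mechanism, QOP, RPCSEC_GSS service. */
   typedef struct { const char *mech; unsigned qop; int service; } sec_triple;

   #define NFS4ERR_WRONGSEC 10016   /* from the error definitions */

   /* Hypothetical helpers standing in for real client machinery. */
   extern int nfs_call(const sec_triple *t, const char *path);
   extern size_t secinfo(const char *path, sec_triple *out, size_t max);
   extern const sec_triple *choose_triple(const sec_triple *avail,
                                          size_t n);

   /* Issue a request, renegotiating the triple on NFS4ERR_WRONGSEC. */
   int call_with_negotiation(const sec_triple *initial, const char *path)
   {
       const sec_triple *t = initial;
       int status = nfs_call(t, path);
       while (status == NFS4ERR_WRONGSEC) {
           sec_triple avail[16];
           size_t n = secinfo(path, avail, 16); /* server's triples */
           t = choose_triple(avail, n);         /* apply client policy */
           if (t == NULL)
               break;                           /* no acceptable triple */
           status = nfs_call(t, path);
       }
       return status;
   }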
3.4. Callback RPC Authentication

Except as noted elsewhere in this section, the callback RPC (described later) MUST mutually authenticate the NFS server to the principal that acquired the clientid (also described later), using the security flavor the original SETCLIENTID operation used.

For AUTH_NONE, there are no principals, so this is a non-issue.

AUTH_SYS has no notions of mutual authentication or a server principal, so the callback from the server simply uses the AUTH_SYS credential that the user used when he set up the delegation.

For AUTH_DH, one commonly used convention is that the server uses the credential corresponding to this AUTH_DH principal:

   unix.host@domain

where host and domain are variables corresponding to the name of the server host and the directory services domain in which it lives, such as a Network Information System domain or a DNS domain.

Because LIPKEY is layered over SPKM-3, it is permissible for the server to use SPKM-3 and not LIPKEY for the callback even if the client used LIPKEY for SETCLIENTID.

Regardless of what security mechanism under RPCSEC_GSS is being used, the NFS server MUST identify itself in GSS-API via a GSS_C_NT_HOSTBASED_SERVICE name type. GSS_C_NT_HOSTBASED_SERVICE names are of the form:

   service@hostname

For NFS, the "service" element is

   nfs

Implementations of security mechanisms will convert nfs@hostname to various different forms. For Kerberos V5 and LIPKEY, the following form is RECOMMENDED:

   nfs/hostname

For Kerberos V5, nfs/hostname would be a server principal in the Kerberos Key Distribution Center database. This is the same principal the client acquired a GSS-API context for when it issued the SETCLIENTID operation; therefore, the realm name for the server principal must be the same for the callback as it was for the SETCLIENTID.

For LIPKEY, this would be the username passed to the target (the NFS version 4 client that receives the callback).

It should be noted that LIPKEY may not work for callbacks, since the LIPKEY client uses a user id/password. If the NFS client receiving the callback can authenticate the NFS server's user name/password pair, and if the user that the NFS server is authenticating to has a public key certificate, then it works.

In situations where the NFS client uses LIPKEY and uses a per-host principal for the SETCLIENTID operation, instead of using LIPKEY for SETCLIENTID, it is RECOMMENDED that SPKM-3 with mutual authentication be used. This effectively means that the client will use a certificate to authenticate and identify the initiator to the target on the NFS server. Using SPKM-3 and not LIPKEY has the following advantages:

o  When the server does a callback, it must authenticate to the principal used in the SETCLIENTID. Even if LIPKEY is used, because LIPKEY is layered over SPKM-3, the NFS client will need to
   have a certificate that corresponds to the principal used in the
   SETCLIENTID operation. From an administrative perspective, having
   a user name, password, and certificate for both the client and
   server is redundant.

o  LIPKEY was intended to minimize additional infrastructure
   requirements beyond a certificate for the target, and the
   expectation is that existing password infrastructure can be
   leveraged for the initiator. In some environments, a per-host
   password does not exist yet. If certificates are used for any
   per-host principals, then additional password infrastructure is
   not needed.

o  In cases when a host is both an NFS client and server, it can
   share the same per-host certificate.

4. Filehandles

The filehandle in the NFS protocol is a per server unique identifier
for a filesystem object. The contents of the filehandle are opaque
to the client. Therefore, the server is responsible for translating
the filehandle to an internal representation of the filesystem
object.

4.1. Obtaining the First Filehandle

The operations of the NFS protocol are defined in terms of one or
more filehandles. Therefore, the client needs a filehandle to
initiate communication with the server. With the NFS version 2
protocol [RFC1094] and the NFS version 3 protocol [RFC1813], there
exists an ancillary protocol to obtain this first filehandle. The
MOUNT protocol, RPC program number 100005, provides the mechanism for
translating a string based filesystem path name to a filehandle which
can then be used by the NFS protocols.

The MOUNT protocol has deficiencies in the area of security and use
via firewalls. This is one reason that the use of the public
filehandle was introduced in [RFC2054] and [RFC2055]. With the use
of the public filehandle in combination with the LOOKUP operation in
the NFS version 2 and 3 protocols, it has been demonstrated that the
MOUNT protocol is unnecessary for viable interaction between NFS
client and server.

Therefore, the NFS version 4 protocol will not use an ancillary
protocol for translation from string based path names to a
filehandle. Two special filehandles will be used as starting points
for the NFS client.

4.1.1. Root Filehandle

The first of the special filehandles is the ROOT filehandle. The
ROOT filehandle is the "conceptual" root of the filesystem name space
at the NFS server. The client starts with the ROOT filehandle by
employing the PUTROOTFH operation. The PUTROOTFH operation instructs
the server to set the "current" filehandle to the ROOT of the
server's file tree. Once this PUTROOTFH operation is used, the
client can then traverse the entirety of the server's file tree with
the LOOKUP operation; a non-normative sketch of such a traversal
follows the next subsection. A complete discussion of the server
name space is in the section "NFS Server Name Space".

4.1.2. Public Filehandle

The second special filehandle is the PUBLIC filehandle. Unlike the
ROOT filehandle, the PUBLIC filehandle may be bound to or represent
an arbitrary filesystem object at the server. The server is
responsible for this binding. It may be that the PUBLIC filehandle
and the ROOT filehandle refer to the same filesystem object.
However, it is up to the administrative software at the server and
the policies of the server administrator to define the binding of the
PUBLIC filehandle and server filesystem object. The client may not
make any assumptions about this binding. The client uses the PUBLIC
filehandle via the PUTPUBFH operation.
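As a non-normative illustration of the ROOT starting point, the
following sketch composes a COMPOUND request that begins at the
server root and walks down to an object. Every identifier here is
hypothetical: the helpers merely print the operation sequence rather
than encode a real request, and "export" and "home" are made-up path
components:

   #include <stdio.h>

   /* Hypothetical helpers (not a real client API): each stands in
    * for appending one operation to a COMPOUND request.  PUTROOTFH
    * sets the current filehandle to the server root, each LOOKUP
    * traverses one component, and GETFH returns the resulting
    * filehandle to the client. */
   static void putrootfh(void)          { puts("PUTROOTFH"); }
   static void lookup(const char *name) { printf("LOOKUP \"%s\"\n", name); }
   static void getfh(void)              { puts("GETFH"); }

   int main(void)
   {
       putrootfh();
       lookup("export");
       lookup("home");
       getfh();
       return 0;
   }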
4.2. Filehandle Types

In the NFS version 2 and 3 protocols, there was one type of
filehandle with a single set of semantics. This type of filehandle
is termed "persistent" in NFS version 4, and the semantics of a
persistent filehandle remain the same as before. A new type of
filehandle introduced in NFS version 4 is the "volatile" filehandle,
which attempts to accommodate certain server environments.

The volatile filehandle type was introduced to address server
functionality or implementation issues which make correct
implementation of a persistent filehandle infeasible. Some server
environments do not provide a filesystem level invariant that can be
used to construct a persistent filehandle. The underlying server
filesystem may not provide the invariant, or the server's filesystem
programming interfaces may not provide access to the needed
invariant. Volatile filehandles may ease the implementation of
server functionality such as hierarchical storage management or
filesystem reorganization or migration. However, the volatile
filehandle increases the implementation burden for the client.

Since the client will need to handle persistent and volatile
filehandles differently, a file attribute is defined which may be
used by the client to determine the filehandle types being returned
by the server.

4.2.1. General Properties of a Filehandle

The filehandle contains all the information the server needs to
distinguish an individual file. To the client, the filehandle is
opaque. The client stores filehandles for use in a later request and
can compare two filehandles from the same server for equality by
doing a byte-by-byte comparison. However, the client MUST NOT
otherwise interpret the contents of filehandles. If two filehandles
from the same server are equal, they MUST refer to the same file.
Servers SHOULD try to maintain a one-to-one correspondence between
filehandles and files, but this is not required. Clients MUST use
filehandle comparisons only to improve performance, not for correct
behavior. All clients need to be prepared for situations in which it
cannot be determined whether two filehandles denote the same object
and, in such cases, avoid making invalid assumptions which might
cause incorrect behavior. Further discussion of filehandle and
attribute comparison in the context of data caching is presented in
the section "Data Caching and File Identity".

As an example, in the case that two different path names when
traversed at the server terminate at the same filesystem object, the
server SHOULD return the same filehandle for each path. This can
occur if a hard link is used to create two file names which refer to
the same underlying file object and associated data. For example, if
paths /a/b/c and /a/d/c refer to the same file, the server SHOULD
return the same filehandle for both path name traversals.
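The equality rule above can be captured in a small client-side
helper. A non-normative sketch (the container type is illustrative;
128 is the protocol's maximum filehandle size, NFS4_FHSIZE):

   #include <stdbool.h>
   #include <stddef.h>
   #include <string.h>

   /* An illustrative client-side container for an opaque
    * filehandle. */
   struct nfs4_fh {
       size_t        len;
       unsigned char data[128];
   };

   /* Two filehandles from the SAME server are equal only if they
    * have the same length and identical bytes; the client must not
    * interpret the contents any further. */
   static bool
   fh_equal(const struct nfs4_fh *a, const struct nfs4_fh *b)
   {
       return a->len == b->len &&
              memcmp(a->data, b->data, a->len) == 0;
   }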
4.2.2. Persistent Filehandle

A persistent filehandle is defined as having a fixed value for the
lifetime of the filesystem object to which it refers. Once the
server creates the filehandle for a filesystem object, the server
MUST accept the same filehandle for the object for the lifetime of
the object. If the server restarts or reboots, the NFS server must
honor the same filehandle value as it did in the server's previous
instantiation. Similarly, if the filesystem is migrated, the new NFS
server must honor the same filehandle as the old NFS server.

The persistent filehandle will become stale or invalid when the
filesystem object is removed. When the server is presented with a
persistent filehandle that refers to a deleted object, it MUST return
an error of NFS4ERR_STALE. A filehandle may also become stale when
the filesystem containing the object is no longer available. The
filesystem may become unavailable if it exists on removable media and
the media is no longer available at the server, or the filesystem in
whole has been destroyed, or the filesystem has simply been removed
from the server's name space (i.e., unmounted in a UNIX environment).

4.2.3. Volatile Filehandle

A volatile filehandle does not share the same longevity
characteristics of a persistent filehandle. The server may determine
that a volatile filehandle is no longer valid at many different
points in time. If the server can definitively determine that a
volatile filehandle refers to an object that has been removed, the
server should return NFS4ERR_STALE to the client (as is the case for
persistent filehandles). In all other cases where the server
determines that a volatile filehandle can no longer be used, it
should return an error of NFS4ERR_FHEXPIRED.

The mandatory attribute "fh_expire_type" is used by the client to
determine what type of filehandle the server is providing for a
particular filesystem. This attribute is a bitmask with the
following values:

FH4_PERSISTENT
   The value of FH4_PERSISTENT is used to indicate a persistent
   filehandle, which is valid until the object is removed from the
   filesystem. The server will not return NFS4ERR_FHEXPIRED for this
   filehandle. FH4_PERSISTENT is defined as a value in which none of
   the bits specified below are set.

FH4_VOLATILE_ANY
   The filehandle may expire at any time, except as specifically
   excluded (i.e., FH4_NOEXPIRE_WITH_OPEN).

FH4_NOEXPIRE_WITH_OPEN
   May only be set when FH4_VOLATILE_ANY is set. If this bit is set,
   then the meaning of FH4_VOLATILE_ANY is qualified to exclude any
   expiration of the filehandle when it is open.

FH4_VOL_MIGRATION
   The filehandle will expire as a result of migration. If
   FH4_VOLATILE_ANY is set, FH4_VOL_MIGRATION is redundant.

FH4_VOL_RENAME
   The filehandle will expire during rename. This includes a rename
   by the requesting client or a rename by any other client. If
   FH4_VOLATILE_ANY is set, FH4_VOL_RENAME is redundant.

Servers which provide volatile filehandles that may expire while open
(i.e., if FH4_VOL_MIGRATION or FH4_VOL_RENAME is set, or if
FH4_VOLATILE_ANY is set and FH4_NOEXPIRE_WITH_OPEN is not set) should
deny a RENAME or REMOVE that would affect an OPEN file or any of the
components leading to the OPEN file. In addition, the server should
deny all RENAME or REMOVE requests during the grace period upon
server restart.

Note that the bits FH4_VOL_MIGRATION and FH4_VOL_RENAME allow the
client to determine that expiration has occurred whenever a specific
event occurs, without an explicit filehandle expiration error from
the server. FH4_VOLATILE_ANY does not provide this form of
information.
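The "may expire while open" test above can be expressed directly in
terms of these bits. A non-normative sketch (the constant values are
those assigned by the protocol's XDR definition; the helper itself is
illustrative):

   #include <stdbool.h>
   #include <stdint.h>

   /* fh_expire_type bits, with the values assigned in the
    * protocol's XDR definition. */
   #define FH4_PERSISTENT          0x00000000
   #define FH4_NOEXPIRE_WITH_OPEN  0x00000001
   #define FH4_VOLATILE_ANY        0x00000002
   #define FH4_VOL_MIGRATION       0x00000004
   #define FH4_VOL_RENAME          0x00000008

   /* Illustrative client-side check: can filehandles on this
    * filesystem expire while the file is open?  True if
    * FH4_VOL_MIGRATION or FH4_VOL_RENAME is set, or if
    * FH4_VOLATILE_ANY is set without FH4_NOEXPIRE_WITH_OPEN. */
   static bool
   fh_may_expire_while_open(uint32_t fh_expire_type)
   {
       if (fh_expire_type & (FH4_VOL_MIGRATION | FH4_VOL_RENAME))
           return true;
       return (fh_expire_type & FH4_VOLATILE_ANY) &&
              !(fh_expire_type & FH4_NOEXPIRE_WITH_OPEN);
   }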
In situations where the server will expire many, but not all
filehandles upon migration (e.g., all but those that are open),
FH4_VOLATILE_ANY (in this case with FH4_NOEXPIRE_WITH_OPEN) is a
better choice since the client may not assume that all filehandles
will expire when migration occurs, and it is likely that additional
expirations will occur (as a result of file CLOSE) that are separated
in time from the migration event itself.

4.2.4. One Method of Constructing a Volatile Filehandle

A volatile filehandle, while opaque to the client, could contain:

   [volatile bit = 1 | server boot time | slot | generation number]

o  slot is an index in the server volatile filehandle table

o  generation number is the generation number for the table
   entry/slot

When the client presents a volatile filehandle, the server makes the
following checks, which assume that the check for the volatile bit
has passed. If the boot time in the filehandle is less than the
current server boot time, return NFS4ERR_FHEXPIRED. If slot is out
of range, return NFS4ERR_BADHANDLE. If the generation number does
not match, return NFS4ERR_FHEXPIRED.

When the server reboots, the table is gone (it is volatile).

If the volatile bit is 0, then it is a persistent filehandle with a
different structure following it.
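The construction and checks above might look as follows in C. This
is a non-normative sketch of one possible server implementation; the
field widths and the status names are illustrative (the real numeric
error values come from the protocol's nfsstat4 definition):

   #include <stdint.h>

   /* Placeholder status codes; see the protocol's nfsstat4
    * definition for the real values. */
   enum vfh_status { VFH_OK, VFH_FHEXPIRED, VFH_BADHANDLE };

   /* One possible layout for the volatile filehandle body sketched
    * above (field widths are illustrative). */
   struct volatile_fh {
       uint8_t  volatile_bit;  /* 1 = volatile                     */
       uint64_t boot_time;     /* server boot time at creation     */
       uint32_t slot;          /* index into the volatile fh table */
       uint32_t generation;    /* generation of that table slot    */
   };

   struct fh_slot { uint32_t generation; };

   /* The checks from the text, assuming the volatile bit has
    * already been verified.  "table" and "nslots" stand in for the
    * server's in-memory volatile filehandle table, which is lost on
    * reboot. */
   static enum vfh_status
   check_volatile_fh(const struct volatile_fh *fh,
                     const struct fh_slot *table, uint32_t nslots,
                     uint64_t current_boot_time)
   {
       if (fh->boot_time < current_boot_time)
           return VFH_FHEXPIRED;
       if (fh->slot >= nslots)
           return VFH_BADHANDLE;
       if (table[fh->slot].generation != fh->generation)
           return VFH_FHEXPIRED;
       return VFH_OK;
   }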
4.3. Client Recovery from Filehandle Expiration

If possible, the client SHOULD recover from the receipt of an
NFS4ERR_FHEXPIRED error. The client must take on additional
responsibility so that it may prepare itself to recover from the
expiration of a volatile filehandle. If the server returns
persistent filehandles, the client does not need these additional
steps.

For volatile filehandles, most commonly the client will need to store
the component names leading up to and including the filesystem object
in question. With these names, the client should be able to recover
by finding a filehandle in the name space that is still available or
by starting at the root of the server's filesystem name space.

If the expired filehandle refers to an object that has been removed
from the filesystem, obviously the client will not be able to recover
from the expired filehandle.

It is also possible that the expired filehandle refers to a file that
has been renamed. If the file was renamed by another client, again
it is possible that the original client will not be able to recover.
However, in the case that the client itself is renaming the file and
the file is open, it is possible that the client may be able to
recover. The client can determine the new path name based on the
processing of the rename request. The client can then regenerate the
new filehandle based on the new path name. The client could also use
the compound operation mechanism to construct a set of operations
like:

   RENAME A B
   LOOKUP B
   GETFH

Note that the COMPOUND procedure does not provide atomicity. This
example only reduces the overhead of recovering from an expired
filehandle.

5. File Attributes

To meet the requirements of extensibility and increased
interoperability with non-UNIX platforms, attributes must be handled
in a flexible manner. The NFS version 3 fattr3 structure contains a
fixed list of attributes that not all clients and servers are able to
support or care about. The fattr3 structure cannot be extended as
new needs arise, and it provides no way to indicate non-support.
With the NFS version 4 protocol, the client is able to query what
attributes the server supports and construct requests with only those
supported attributes (or a subset thereof).

To this end, attributes are divided into three groups: mandatory,
recommended, and named. Both mandatory and recommended attributes
are supported in the NFS version 4 protocol by a specific and well-
defined encoding and are identified by number. They are requested by
setting a bit in the bit vector sent in the GETATTR request; the
server response includes a bit vector to list what attributes were
returned in the response. New mandatory or recommended attributes
may be added to the NFS protocol between major revisions by
publishing a standards-track RFC which allocates a new attribute
number value and defines the encoding for the attribute. See the
section "Minor Versioning" for further discussion.

Named attributes are accessed by the new OPENATTR operation, which
accesses a hidden directory of attributes associated with a
filesystem object. OPENATTR takes a filehandle for the object and
returns the filehandle for the attribute hierarchy. The filehandle
for the named attributes is a directory object accessible by LOOKUP
or READDIR and contains files whose names represent the named
attributes and whose data bytes are the value of the attribute. For
example:

   LOOKUP "foo"       ; look up file
   GETATTR attrbits
   OPENATTR           ; access foo's named attributes
   LOOKUP "x11icon"   ; look up specific attribute
   READ 0,4096        ; read stream of bytes

Named attributes are intended for data needed by applications rather
than by an NFS client implementation. NFS implementors are strongly
encouraged to define their new attributes as recommended attributes
by bringing them to the IETF standards-track process.

The set of attributes which are classified as mandatory is
deliberately small since servers must do whatever it takes to support
them. A server should support as many of the recommended attributes
as possible but, by their definition, the server is not required to
support all of them. Attributes are deemed mandatory if the data is
both needed by a large number of clients and is not otherwise
reasonably computable by the client when support is not provided on
the server.

Note that the hidden directory returned by OPENATTR is a convenience
for protocol processing. The client should not make any assumptions
about the server's implementation of named attributes and whether the
underlying filesystem at the server has a named attribute directory
or not. Therefore, operations such as SETATTR and GETATTR on the
named attribute directory are undefined.

5.1. Mandatory Attributes

These MUST be supported by every NFS version 4 client and server in
order to ensure a minimum level of interoperability. The server must
store and return these attributes, and the client must be able to
function with an attribute set limited to these attributes. With
just the mandatory attributes some client functionality may be
impaired or limited in some ways. A client may ask for any of these
attributes to be returned by setting a bit in the GETATTR request,
and the server must return their value.
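As a non-normative illustration of the attribute bit vector, the
following sketch builds a GETATTR mask for a few of the mandatory
attributes. The attribute numbers shown are those assigned in the
definitions below; the word/bit mapping assumes attribute number n
occupies bit (n mod 32) of word (n / 32) in the bitmap:

   #include <stdint.h>
   #include <stdio.h>

   /* A few mandatory attribute numbers (from the section "Mandatory
    * Attributes - Definitions"). */
   #define FATTR4_TYPE    1
   #define FATTR4_CHANGE  3
   #define FATTR4_SIZE    4

   /* Set attribute number "attr" in a GETATTR bit vector: word
    * attr/32, bit attr mod 32 (a sketch of the bitmap4
    * representation). */
   static void
   set_attr_bit(uint32_t *bitmap, unsigned int attr)
   {
       bitmap[attr / 32] |= (uint32_t)1 << (attr % 32);
   }

   int main(void)
   {
       uint32_t bitmap[2] = { 0, 0 };  /* two words cover attrs 0-63 */

       set_attr_bit(bitmap, FATTR4_TYPE);
       set_attr_bit(bitmap, FATTR4_CHANGE);
       set_attr_bit(bitmap, FATTR4_SIZE);
       printf("GETATTR mask words: 0x%08x 0x%08x\n",
              bitmap[0], bitmap[1]);
       return 0;
   }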
5.2. Recommended Attributes

These attributes are understood well enough to warrant support in the
NFS version 4 protocol. However, they may not be supported on all
clients and servers. A client may ask for any of these attributes to
be returned by setting a bit in the GETATTR request but must handle
the case where the server does not return them. A client may ask for
the set of attributes the server supports and should not request
attributes the server does not support. A server should be tolerant
of requests for unsupported attributes and simply not return them,
rather than considering the request an error. It is expected that
servers will support all attributes they comfortably can and only
fail to support attributes which are difficult to support in their
operating environments. A server should provide attributes whenever
it does not have to "tell lies" to the client. For example, a file
modification time should be either an accurate time or should not be
supported by the server. This will not always be comfortable to
clients, but the client is better positioned to decide whether and
how to fabricate or construct an attribute or whether to do without
the attribute.

5.3. Named Attributes

These attributes are not supported by direct encoding in the NFS
version 4 protocol but are accessed by string names rather than
numbers and correspond to an uninterpreted stream of bytes which are
stored with the filesystem object. The name space for these
attributes may be accessed by using the OPENATTR operation. The
OPENATTR operation returns a filehandle for a virtual "attribute
directory", and further perusal of the name space may be done using
READDIR and LOOKUP operations on this filehandle. Named attributes
may then be examined or changed by normal READ and WRITE and CREATE
operations on the filehandles returned from READDIR and LOOKUP.
Named attributes may have attributes.

It is recommended that servers support arbitrary named attributes. A
client should not depend on the ability to store any named attributes
in the server's filesystem. If a server does support named
attributes, a client which is also able to handle them should be able
to copy a file's data and meta-data with complete transparency from
one location to another; this would imply that names allowed for
regular directory entries are valid for named attribute names as
well.

Names of attributes will not be controlled by this document or other
IETF standards track documents. See the section "IANA
Considerations" for further discussion.

5.4. Classification of Attributes

Each of the Mandatory and Recommended attributes can be classified in
one of three categories: per server, per filesystem, or per
filesystem object. Note that it is possible that some per filesystem
attributes may vary within the filesystem. See the "homogeneous"
attribute for its definition. Note that the attributes
time_access_set and time_modify_set are not listed in this section
because they are write-only attributes corresponding to time_access
and time_modify, and are used in a special instance of SETATTR.
o  The per server attribute is:

      lease_time

o  The per filesystem attributes are:

      supp_attr, fh_expire_type, link_support, symlink_support,
      unique_handles, aclsupport, cansettime, case_insensitive,
      case_preserving, chown_restricted, files_avail, files_free,
      files_total, fs_locations, homogeneous, maxfilesize, maxname,
      maxread, maxwrite, no_trunc, space_avail, space_free,
      space_total, time_delta

o  The per filesystem object attributes are:

      type, change, size, named_attr, fsid, rdattr_error,
      filehandle, ACL, archive, fileid, hidden, maxlink, mimetype,
      mode, numlinks, owner, owner_group, rawdev, space_used,
      system, time_access, time_backup, time_create, time_metadata,
      time_modify, mounted_on_fileid

For quota_avail_hard, quota_avail_soft, and quota_used, see their
definitions below for the appropriate classification.

5.5. Mandatory Attributes - Definitions

Name                #   DataType    Access   Description
___________________________________________________________________