Distributed file systems do not share block-level access to the same storage; instead, they use a network protocol. These are commonly known as network file systems, even though they are not the only file systems that use the network to send data. Depending on how the protocol is designed, a distributed file system can restrict access using access lists or capabilities on both the servers and the clients.
The difference between a distributed file system and a distributed data store is that a distributed file system allows files to be accessed using the same interfaces and semantics as local files – for example, mounting and unmounting, listing directories, reading and writing at byte boundaries, and the system’s native permission model. Distributed data stores, by contrast, require using a different API or library and have different semantics (most often those of a database).
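To make the distinction concrete, here is a minimal Python sketch: once a distributed file system is mounted (the /mnt/nfs mount point below is hypothetical), the standard local-file API applies to it unchanged, whereas a data store would instead require its own client library and calls.

```python
import os

# A hypothetical NFS export mounted at /mnt/nfs, next to an ordinary
# local file. The point: the code is identical for both paths.
remote_path = "/mnt/nfs/shared/report.txt"  # server-backed mount point
local_path = "/tmp/report.txt"              # ordinary local file

for path in (local_path, remote_path):
    with open(path, "w") as f:              # same open/write/close calls
        f.write("hello\n")
    print(path, os.path.getsize(path))      # same metadata interface
```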
A distributed file system may also be created by software implementing IBM’s Distributed Data Management Architecture (DDM), in which programs running on one computer use local interfaces and semantics to create, manage and access files located on other networked computers. All such client requests are trapped and converted to equivalent messages defined by the DDM. Using protocols also defined by the DDM, these messages are transmitted to the specified remote computer on which a DDM server program interprets the messages and uses the file system interfaces of that computer to locate and interact with the specified file.
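The request-trapping pattern described above can be sketched schematically. The Python below is a hypothetical illustration of the general idea – trap a local-looking call, encode it as a message, and let the server replay it against its local file system – and does not reflect DDM’s actual message or protocol formats.

```python
import json

def client_create_file(path: str, data: bytes) -> dict:
    """Client side: a local-looking call is trapped and converted
    into a protocol message (hypothetical format, not DDM's)."""
    return {"op": "create", "path": path, "data": data.decode("utf-8")}

def server_handle(message: dict) -> dict:
    """Server side: interpret the message and use the local file
    system interfaces to carry out the request."""
    if message["op"] == "create":
        with open(message["path"], "w") as f:
            f.write(message["data"])
        return {"status": "ok"}
    return {"status": "unsupported"}

# In a real system the message would cross the network; here the
# "transmission" is just a JSON round trip between the two halves.
wire = json.dumps(client_create_file("/tmp/ddm_demo.txt", b"hello\n"))
print(server_handle(json.loads(wire)))
```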
Design Goals
Distributed file systems may aim for “transparency” in a number of aspects. That is, they aim to be “invisible” to client programs, which “see” a system similar to a local file system. Behind the scenes, the distributed file system handles locating files, transporting data, and potentially provides the other features listed below.
- Access transparency: clients are unaware that files are distributed and can access them in the same way as local files.
- Location transparency: a consistent namespace exists encompassing local as well as remote files. The name of a file does not give its location.
- Concurrency transparency: all clients have the same view of the state of the file system. This means that if one process is modifying a file, any other processes on the same system or remote systems that are accessing the file will see the modifications in a coherent manner (a locking sketch follows this list).
- Failure transparency: the client and client programs should operate correctly after a server failure.
- Heterogeneity: file service should be provided across different hardware and operating system platforms.
- Scalability: the file system should work well in small environments (one machine, a dozen machines) and also scale gracefully to bigger ones (hundreds to tens of thousands of systems).
- Replication transparency: clients should be unaware of the file replication performed across multiple servers to support scalability.
- Migration transparency: files should be able to move between different servers without the client’s knowledge.
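As an illustration of concurrency transparency, the following Python sketch uses advisory POSIX byte-range locks, which network file systems such as NFSv4 forward to the server so that processes on different client machines coordinate their access; the /mnt/nfs path is hypothetical.

```python
import fcntl
import os

# A shared counter file on a hypothetical NFS mount. POSIX record
# locks (fcntl/lockf) are honored across clients on NFSv4, so the
# read-modify-write below is safe against concurrent writers.
path = "/mnt/nfs/shared/counter.txt"

fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o644)
try:
    fcntl.lockf(fd, fcntl.LOCK_EX)         # blocks until the lock is granted
    raw = os.read(fd, 64).decode() or "0"
    value = int(raw) + 1                   # read-modify-write under the lock
    os.lseek(fd, 0, os.SEEK_SET)
    os.ftruncate(fd, 0)
    os.write(fd, str(value).encode())
finally:
    fcntl.lockf(fd, fcntl.LOCK_UN)         # release before closing
    os.close(fd)
```

Because the locks are advisory, all cooperating processes must take the lock; a process that ignores it can still observe or overwrite intermediate state.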