Authentication and Naming facilities:
NFS is simplistic in its notion of 'naming' and permissions. By
default (and in many implementations) it requires a flat "uid" space
across the set of machines sharing the file systems (except for
root, whose ownerships vanish across an NFS mount). This means simply
that two different users could have read/write permissions over each
other's files on different servers if they mounted each other's file
stores. [Note that this can be stopped by controlling who may mount a
file store with the "exports" file.]
This causes problems when adding existing systems to the consortium
filestore (without some gateway/uid-mapping service). For instance,
the CS department uses uids 0-7999 already (most systems support 2^15
or 2^31 uids...).
NIS provides a way of distributing the user names/uids without having
multiple stored copies, but does not provide a uid-mapping service.
Hesiod provides a similar service to NIS.
Kerberos provides the appropriate mapping, but requires a lot of
integration work. Until it is provided by the manufacturer, it is
probably not appropriate to the consortium.
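In essence, a gateway/uid-mapping service would have to translate
(server, remote uid) pairs into uids in a single consortium-wide
space. A minimal sketch in C of that idea follows; the table
contents, hostnames and numbers are invented purely for illustration.

    /* Illustrative sketch only: a trivial (server, remote uid) -> local uid
     * table of the kind a gateway/uid-mapping service would need.  The
     * entries and names here are invented for the example. */
    #include <stdio.h>
    #include <string.h>

    struct uidmap {
        const char *server;   /* exporting host */
        int remote_uid;       /* uid as known on that host */
        int local_uid;        /* uid in the consortium-wide space */
    };

    static struct uidmap map[] = {
        { "cs-server", 1042,  9042 },   /* CS already uses 0-7999 locally */
        { "ee-server", 1042, 12042 },   /* same number, different user */
    };

    static int map_uid(const char *server, int remote_uid)
    {
        int i;
        for (i = 0; i < (int)(sizeof map / sizeof map[0]); i++)
            if (strcmp(map[i].server, server) == 0 &&
                map[i].remote_uid == remote_uid)
                return map[i].local_uid;
        return -1;            /* unknown: treat like "nobody" */
    }

    int main(void)
    {
        printf("cs-server uid 1042 -> %d\n", map_uid("cs-server", 1042));
        return 0;
    }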
-
Quotas:
BSD quotas on file systems work independently of NFS. As mentioned
above, separating user communities onto separate file systems has the
benefit of providing coarse-grained quota control over each community
(with the drawback that free blocks are not shared - the usual
engineering trade-off).
-
Distributed Execution:
Some views or architectures may impose certain hierarchies or
structures of their own on the servers (e.g. TCF works in clusters
of up to 31 workstations). Hopefully, both schemes work across
symbolic links.
-
Performance/Reliability Considerations:
When a filestore is not mounted, NFS fails exactly as required: the
application gets an open/read/write error.
If a filestore is already mounted but then fails, NFS fails in
different ways depending on the mount options: if hard mounted, it
retries forever (unless mounted -intr, in which case an interrupt to
the application terminates the NFS call); if soft mounted, it can fail
after some number of retries (and it can fail silently, as in "write
behinds").
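From the application's point of view the difference shows up in the
return values of the ordinary system calls. A rough sketch follows,
assuming an interruptible hard mount hands the application EINTR and
a soft mount eventually returns an I/O error after its retries; the
path and the exact errno values are illustrative, and behaviour
varies between implementations.

    /* Sketch: how an application might see NFS failures.  Error values and
     * behaviour vary between NFS implementations; treat this as illustrative. */
    #include <stdio.h>
    #include <errno.h>
    #include <string.h>
    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        char buf[8192];
        int fd = open("/users/some/nfs/file", O_RDONLY);  /* invented path */

        if (fd < 0) {
            /* unmounted filestore: the open itself fails cleanly */
            fprintf(stderr, "open: %s\n", strerror(errno));
            return 1;
        }

        ssize_t n = read(fd, buf, sizeof buf);
        if (n < 0) {
            if (errno == EINTR)
                /* hard,intr mount: a signal interrupted the stalled NFS call */
                fprintf(stderr, "read interrupted\n");
            else
                /* soft mount: the call gave up after its retries (e.g. EIO) */
                fprintf(stderr, "read: %s\n", strerror(errno));
            close(fd);
            return 1;
        }

        printf("read %ld bytes\n", (long)n);
        close(fd);
        return 0;
    }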
Consider an NFS setup with N file servers, each with some availability
x (assume one filestore per server - the numbers can be redone for
several filestores per server, or the filestores treated together,
since filestore failure often means machine failure).
If the average utilization of a filestore is y, then:
Mean chance of hanging an application = N(1-x),
assuming machines run much longer than the MTBF.
With automount, each filestore has a chance y of being mounted when
the server crashes. Then we get:
N(1-x)y,
assuming all filestores are roughly the same.
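To put some shape on the arithmetic (the figures below are invented
for illustration, not measurements):

    /* Sketch of the availability arithmetic above, with invented figures:
     * N servers, availability x, average utilization (chance mounted) y. */
    #include <stdio.h>

    int main(void)
    {
        double N = 20.0;   /* assumed number of file servers */
        double x = 0.99;   /* assumed availability of each server */
        double y = 0.3;    /* assumed chance a given filestore is mounted */

        /* every filestore mounted everywhere: */
        printf("chance of hanging an application = N(1-x)  = %.3f\n",
               N * (1 - x));

        /* with automount, only mounted filestores matter: */
        printf("with automount                   = N(1-x)y = %.3f\n",
               N * (1 - x) * y);
        return 0;
    }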
But if the filestore mount map is flat, there are some operations
which systematically do badly (e.g. find or ls on /users). The same
effect will also be seen if finding the current working directory uses
the fstab method rather than the stat-".." method.
Luckily, such broad file-tree activities are rare. If they are
sufficiently rarer than fileserver failures, they won't cause broken
filestores to stay mounted.
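For reference, the stat-".." method climbs the tree one level at a
time, matching device and inode numbers against the entries of the
parent directory, and so never needs to consult the mount map (and
hence never touches unrelated filestores). A pared-down sketch,
assuming a POSIX-style <dirent.h>; real getwd() implementations do
rather more:

    /* Sketch of the "stat .." way of finding the current directory:
     * climb via "..", matching device/inode numbers in each parent.
     * Illustrative only: buffer handling is pared down, and the sketch
     * leaves the process chdir'ed to the root when it finishes. */
    #include <stdio.h>
    #include <string.h>
    #include <dirent.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        char path[4096] = "", comp[4400], tmp[4096];
        struct stat here, up, probe;

        for (;;) {
            if (stat(".", &here) < 0 || stat("..", &up) < 0)
                return 1;
            if (here.st_dev == up.st_dev && here.st_ino == up.st_ino)
                break;                           /* "." == ".." only at the root */

            DIR *dp = opendir("..");
            struct dirent *de;
            if (dp == NULL)
                return 1;
            while ((de = readdir(dp)) != NULL) { /* find our name in the parent */
                snprintf(comp, sizeof comp, "../%s", de->d_name);
                if (stat(comp, &probe) == 0 &&
                    probe.st_dev == here.st_dev && probe.st_ino == here.st_ino)
                    break;
            }
            if (de == NULL) {
                closedir(dp);
                return 1;
            }
            snprintf(tmp, sizeof tmp, "/%s%s", de->d_name, path);
            strcpy(path, tmp);                   /* prepend this component */
            closedir(dp);

            if (chdir("..") < 0)                 /* move up one level and repeat */
                return 1;
        }
        printf("%s\n", path[0] ? path : "/");
        return 0;
    }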
By contrast, opening a file in a deeply nested file store carries an
overhead of a readdir per level.
It is known that a great many Unix applications open a file, read a
block, and close the file. For each level of file system the number
of resulting NFS calls goes up by at least one. So if we take the CS
approach, we could increase the NFS calls by 100% (given the extra
levels), but only if all the file traffic is non-local - as proposed,
most file access would be from the central machine, on the central
machine.
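The pattern in question is the familiar one below; each extra
directory level in the pathname costs roughly one more remote call to
resolve, which is where the rough doubling comes from. The path used
is invented for the example.

    /* The common open / read one block / close pattern.  Over NFS,
     * resolving the pathname costs roughly one remote call per directory
     * level, so the deeper the mount point, the more calls this tiny
     * sequence generates.  Path invented for illustration. */
    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        char buf[8192];
        /* e.g. /users/<group>/<user>/file vs a shallower local path */
        int fd = open("/users/cs/alice/.profile", O_RDONLY);
        if (fd < 0)
            return 1;
        ssize_t n = read(fd, buf, sizeof buf);   /* read a single block */
        close(fd);
        return n < 0 ? 1 : 0;
    }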