I had been fighting a problem with mounting volumes from an NFS server in AWS for a few days. With more pressing issues at hand, I could only google for solutions for an hour or so before bed, and nothing I tried was having any effect. Curiously, an Ubuntu-based machine that mounted the drive using NFS3 did not have the problem. Only the Amazon Linux servers using NFS4 were affected, and they showed all files and directories as owned by nobody:nobody:
drwxr-xr-x 2 nobody nobody   22 Jan  9 19:58 installervc
drwxr-xr-x 2 nobody nobody 4.0K Jan  9 19:56 avatar
drwxr-xr-x 2 nobody nobody 4.0K Jan  9 19:56 accessories
I had previously ensured that the user that would be writing files (in my case "apache") existed with the same UID and GID on the NFS server and on the servers mounting the NFS volume.
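Checking this is quick on each host; here is a small sketch (the uid_gid helper name is my own):

```shell
# Print "UID:GID" for a user, so the numeric values can be
# compared across the NFS server and every client.
uid_gid() {
  getent passwd "$1" | cut -d: -f3,4
}

# Shown here for the current user; on the NFS hosts you would
# run it for the writing user, e.g. uid_gid apache.
uid_gid "$(id -un)"
```

If the numbers differ between machines, fix that first; the idmapd domain described below only matters once the numeric IDs already agree.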
As it turned out, the problem was with the configuration (or lack thereof) of the rpcidmapd service. NFS4 relies on this service to map users between machines. idmapd requires that the NFS domain on the client and the server match for the UID/GID mapping to work, and in my case it didn't. Many people with a proper DNS configuration probably never hit this problem, but we did not have a proper DNS setup, as these machines are part of a growing cluster. Compounding the problem, I had set the configuration files to meaningless host names rather than a domain.
You can fix this by editing the /etc/idmapd.conf file and finding the "Domain" variable:
Domain = yourdomain.com
Set this to the same value on the server and all of the clients.
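If you have many hosts to fix, a small helper can rewrite the Domain line everywhere; this is just a sketch (the set_idmapd_domain name and the optional file-path argument are my own additions, and yourdomain.com is a placeholder):

```shell
# Rewrite the Domain line (commented out or not) in an idmapd.conf file.
# $1 = domain to set, $2 = config path (defaults to /etc/idmapd.conf).
set_idmapd_domain() {
  conf="${2:-/etc/idmapd.conf}"
  sed -i "s/^#\{0,1\}[[:space:]]*Domain[[:space:]]*=.*/Domain = $1/" "$conf"
}

# Example (as root, on every server and client):
#   set_idmapd_domain yourdomain.com
```

The pattern also matches the commented-out "#Domain = localdomain" line that ships in the stock file, so it works on an untouched config too.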
The last problem was that I had to restart the idmapd process, whose /etc/init.d control script is named rpcidmapd.
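Concretely, that restart looks like the following; the remount step is my own addition (with /media/nfs as a placeholder for your actual mount point), since clients can keep serving the stale nobody:nobody ownership from cache:

```shell
# Restart the ID-mapping daemon so the new Domain value takes effect
# (script name on Amazon Linux; on Ubuntu the service is called idmapd).
sudo /etc/init.d/rpcidmapd restart

# Remount the share so cached nobody:nobody ownership is refreshed.
# /media/nfs is a placeholder for your NFS mount point.
sudo mount -o remount /media/nfs
```

After this, an ls -l on the mount should show the real owners again instead of nobody:nobody.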