What are the security export settings that everyone is using to secure the NFS for the Component Pack servers? I have noticed that out of the box the current settings are as shown below, which is quite insecure and actually appears on a pen test as a critical security flaw -
/pv-connections/mongo-node-0/data/db 192.168.0.0/255.255.0.0(rw,no_root_squash)
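For context, no_root_squash means that root on any client in the 192.168.0.0/16 range gets full root access to this export, whereas the NFS default, root_squash, maps remote root to an unprivileged user. As a rough sketch of the direction I would like to go (not something I have validated against Component Pack yet, and the narrower subnet is just an example), a tightened entry might look like:
/pv-connections/mongo-node-0/data/db 192.168.1.0/255.255.255.0(rw,root_squash,sync)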
I want to change this, but I don't want to make any changes that are going to break the Component Pack applications I currently have running, so I wondered what everyone else is changing this to when deploying the Component Pack persistent volumes.
Hi Richard,
I suggest opening a case with Connections Support to look into this.
Tony Dezanet
HCL Connections Support
That's fine, and I'm happy to do that, but does that mean no one is changing the NFS settings, so every Component Pack server's NFS security settings are open like this?
I suppose it is an issue, but it was more a question about what everyone else does with the security on the NFS for the Component Pack server.
I will raise a call and keep this post updated with the findings.
I have finally had a reply from Connections support and they tried to close the case with the following -
"As per development, nfsSetup.sh was created as a reference. This means you can implement NFS setup the way you want to, to tailor it to their needs. NFS is just needed for Component Pack as a storage for persistent volumes."
If that is the case then it probably needs updating in the documentation to say that this script is not for production but for reference only. However, this still doesn't answer the question of what the settings should be: I tried changing no_root_squash in a deployed environment and certain pods failed to restart, so you can't just put anything you like, as support is suggesting.
I really need to sort this out, as this is actually stopping a company from getting security accredited.
I got a similar answer for NFS when I asked for additional storage options.
I want to add that the documentation does not contain any information on how the storage needs to be configured securely. The provided nfsSetup.sh is insecure (no_root_squash, for example). So there needs to be information on which user account (uid and gid number) accesses each PV from which pod in order to secure this. Having to get all of this information through reverse engineering is a pain.
I checked the PVs on my demo environment, and except for kudos-minio, no files are owned by root. So it should be safe to remove no_root_squash from all the other PVs once the system is up and running, BUT I'm not 100% sure there aren't init containers that create the folder structure and could need the setting.
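If you want to check your own environment, something like this on the NFS server should list everything under the exports that is still owned by root (a quick sketch, adjust the path to your setup):
find /pv-connections -user root -ls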
I tried removing no_root_squash from the exports file, and most things seemed to be fine apart from Orient Me, if I remember correctly. The pods errored even on delete and recreate, so just removing no_root_squash isn't going to do it, unfortunately.
The last update I have so far is that I should be editing the nfsSetup.sh script myself, and that's how the changes are made, so I am going to take a look. But really, as you said, they need to spell out "if you want to secure it, use this account", etc.
Orient Me talks to Redis (no NFS) and MongoDB.
The MongoDB database is owned by uid 1001 and gid 1001, but the folder path to the db is owned by root with 700 access rights.
ls -al /pv-connections/mongo-node-0:
drwx------. 3 root root 18 May 4 16:01 mongo-node-0
Within the mongodb pod:
10.0.11.40:/pv-connections/mongo-node-0/data/db on /data/db type nfs4 (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.0.11.41,local_lock=none,addr=10.0.11.40)
$ ls -al /data
total 16
drwxr-xr-x 3 root root 16 Apr 26 21:07 .
drwxr-xr-x 1 root root 50 May 25 08:56 ..
drwx------ 4 mongodb mongodb 12288 May 25 09:30 db
So in my case, running
chown 1001:1001 /pv-connections/mongo-node-* -R
on the NFS server worked, and I can remove no_root_squash from the mongo-db NFS shares.
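With the ownership fixed, the export entry can then drop no_root_squash, e.g. something like (my subnet; yours will differ):
/pv-connections/mongo-node-0/data/db 192.168.0.0/255.255.0.0(rw)
followed by exportfs -ra to re-export.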
UPDATE: Sorry, I need to add something: after a reboot, mongo-db seems to use root in the init or sidecar container, so it won't start without no_root_squash.
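You can check what the init containers run as with something like the below (the pod name and namespace are from my environment, so adjust as needed):
kubectl -n connections get pod mongo-0 -o jsonpath='{.spec.initContainers[*].securityContext}'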
Thanks Christoph,
I have posted all your findings to the HCL Support Team as I still have an open call for this.
I am hoping that they find a resolution, as this is stopping a customer from getting security accreditation.
HCL have finally got back to me with a proposed fix, but I am having an issue finding the folder structure for the Helm charts.
This is what they have said I need to do:
Update the PVs on the NFS master to 1001:1001, e.g. sudo chown -R 1001:1001 /pv-connections/kudos-boards-minio.
You also need to update the NFS exports file to remove the no_ from no_root_squash (leaving root_squash) and save the file.
You then need to change the securityContext: runAsUser in the Helm chart templates/deployment.yaml file; as an example, HCL sent me the below:
Current kudos-boards-minio charts need root permission in the container. So update the below template to change securityContext: runAsUser: 1001 in kudos-boards-cp/charts/kudos-minio/templates/deployment.yaml.
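For reference, after the edit the relevant part of deployment.yaml should look something like the below (a sketch based on what HCL sent; the surrounding fields, and whether securityContext sits at pod or container level, depend on the chart):
securityContext:
  runAsUser: 1001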
This is the part I am falling down on, as I can't find the path to my Helm charts. I have been through every folder I can see and can't find it. I know it must be there, so any help on where these would be on a single-server POC build of the Component Pack would be great, as HCL Support also seem to be struggling to find it.
Hi Richard,
The Helm chart for kudos-boards-cp is in the microservices_connections/hybridcloud/helmbuilds/kudos-boards-cp-*.tgz file, or if you are using the newer Docker Hub images (https://docs.huddo.com/boards/cp/dockerhub/) you can get the tgz file from there. The deployment.yaml is in that tgz file.
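Roughly, the sequence would be something like the below (a sketch; the exact tgz name, release name, and namespace are assumptions, so match them to your install):
tar -xzf kudos-boards-cp-*.tgz
# edit kudos-boards-cp/charts/kudos-minio/templates/deployment.yaml and set runAsUser: 1001
helm upgrade kudos-boards-cp ./kudos-boards-cp -n connections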