How can I force Kubernetes to use a different IP address for the Connections nginx server than it would find in DNS?

I'm working on a side-by-side migration to a new environment that includes the Component Pack. I would like to build the Component Pack using connections.example.org as the Connections URL. However, connections.example.org is the URL of the current/old environment, so I need to make sure that when pods try to reach connections.example.org, it resolves to a different IP address. For the WebSphere components I can simply change the /etc/hosts file, but the pods don't use the hosts file of the host machine. What would be the best way to achieve this?

I tried using hostAliases, but that doesn't work with the Bootstrap Helm chart, for example. I also tried running a dnsmasq server on the Kubernetes master machine and changing NetworkManager to point to that DNS server, but somehow that broke the entire Kubernetes networking, and an nslookup of kubernetes.default would fail.
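For reference, hostAliases can only be set on workloads whose pod spec you control, which is why it doesn't help with charts that don't expose the field. A minimal sketch of what it looks like (the pod name and IP address are placeholders):

```yaml
# Hypothetical pod spec fragment. hostAliases injects entries into the
# pod's /etc/hosts, overriding DNS for those names inside that pod only.
# This does not help when the Helm chart doesn't expose the pod spec.
apiVersion: v1
kind: Pod
metadata:
  name: example-pod           # placeholder name
spec:
  hostAliases:
  - ip: "10.0.0.10"           # placeholder: IP of the new environment
    hostnames:
    - "connections.example.org"
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
```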

Hello Martijn,

Side-by-side migration to HCL Connections 6.5 is not supported.

Only the in-place migration strategy (taking the deployment offline to migrate to the next version of Connections) is tested and supported for release 6.5 and later. Although a side-by-side migration should still be possible, it is not supported, and that content has been removed from the 6.5.x version of the product documentation.

Upgrading to HCL Connections 6.5
https://help.hcltechsw.com/connections/v65/admin/migrate/t_upgrading_to_65.html

Regards

The side-by-side migration is the only way to go for a 5.5 to 6.5 migration. HCL will have to update their documentation on this (and I'm sure they will at some point). Currently, they apparently have no understanding of how migrations are done and should be done in the real world.

Anyway, that's a different point and not an answer to my question.

Hi Martijn,

The k8s pods will resolve via CoreDNS. I believe you can add local entries to its ConfigMap, as shown in this article:

https://medium.com/@hjrocha/add-a-custom-host-to-kubernetes-a06472cedccb

I haven't tested it myself, but I hope this works for you!
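For anyone trying this, a sketch of the approach assuming a default kubeadm-style setup, where the Corefile lives in a ConfigMap named coredns in the kube-system namespace (verify the names in your cluster; the IP is a placeholder). You edit the ConfigMap with `kubectl -n kube-system edit configmap coredns` and add a hosts stanza:

```
# Abbreviated Corefile sketch — only the hosts block is new.
# The hosts plugin answers queries for the listed names itself;
# fallthrough passes all other queries on to the remaining plugins.
.:53 {
    errors
    health
    hosts {
        10.0.0.10 connections.example.org   # placeholder IP of the new environment
        fallthrough
    }
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    forward . /etc/resolv.conf
    cache 30
    loop
    reload
    loadbalance
}
```

With the reload plugin enabled, CoreDNS should pick up the change on its own after a short delay; otherwise restart the CoreDNS pods.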

Hi @Martijn de Jong

I tested this today, changing the Corefile ConfigMap works perfectly.

I added the following lines at the bottom, just before the last closing brace:

hosts custom.hosts iamtest.local {
    xxx.xxx.xxx.xxx iamtest.local
    fallthrough
}

Regards
Heidi

Thanks Heidi! I had solved it in the meantime by building a DNS server based on dnsmasq which, with a very simple config, forwards all requests to the normal DNS servers except for connections.example.org, which is resolved by this server itself. I then changed the DNS server in /etc/resolv.conf to this new DNS server on the Kubernetes hosts where the CoreDNS pods were running. That worked, but your solution is a lot simpler and far more elegant. Going to try it. Thanks!
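For completeness, the dnsmasq workaround amounts to a config along these lines (all IPs are placeholders; 192.168.1.1 stands in for the normal upstream DNS server):

```
# /etc/dnsmasq.conf sketch — placeholder IPs.
# Answer connections.example.org with the new environment's address...
address=/connections.example.org/10.0.0.10
# ...and forward every other query to the normal upstream DNS server.
server=192.168.1.1
# Don't read upstream servers from /etc/resolv.conf (avoids a resolution loop
# once resolv.conf points back at this dnsmasq instance).
no-resolv
```

The CoreDNS hosts-plugin approach achieves the same result without running an extra DNS server on the hosts.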