Connections still requires dependencies that have been End of Life for some time, and others will be EOL soon - support promises not fulfilled?

Hello,

The CNX 7 documentation for the Component Pack (CP) says:

Supported Kubernetes versions
Component Pack for Connections was tested on Kubernetes 1.19 and is following the same Kubernetes support pattern that Kubernetes itself is following.

Kubernetes supports the three most recent minor releases, which are currently 1.21, 1.22, and 1.23. Following that policy would be good for CNX. But the mentioned version 1.19 has been EOL since last year, i.e. for several months now.

The CNX 6.5 docs say:

Supported Kubernetes versions
As of HCL Connections version 6.5.0.1, deployments to the latest stable Kubernetes platform (version 1.17) are supported, and that is the version where all development and testing is happening at HCL. However, HCL tested its deployment all the way from Kubernetes 1.11.9 to 1.18.2, and is continuing to test it against any new Kubernetes release.

This seems even more outdated, since 1.17 has not been supported since the end of 2020! The docs also claim that new versions are tested, yet only very old versions are listed. So which statement is true? I opened a ticket, and it seems that not even 1.21 (released in April 2021, so already nearly a year old!) has been tested (let alone supported?); they are currently testing it.

We have the same problem with Helm:

Currently, Component Pack supports Helm v2, and support for Helm v3 is on the roadmap.

Helm v2 has been deprecated since April 2020 and went end of life in August 2020. Even 1.5 years after its end of life, HCL still only supports v2 and forces customers to use outdated software if they want (paid) support from HCL.
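For anyone stuck on a Helm v2 deployment in the meantime, the Helm project ships an official 2to3 plugin that can migrate releases independently of what HCL supports. A minimal sketch, assuming a working Helm v2 setup with Tiller; the release name `connections-env` is a placeholder, not an actual Component Pack release name:

```shell
# Install the official helm-2to3 plugin (requires Helm v3 on the PATH)
helm plugin install https://github.com/helm/helm-2to3

# Copy Helm v2 configuration (repositories, plugins) over to v3
helm 2to3 move config

# Convert a single Tiller-managed release to a Helm v3 release;
# "connections-env" is a placeholder for the actual release name
helm 2to3 convert connections-env

# After all releases are converted: remove v2 config, release data, and Tiller
helm 2to3 cleanup
```

`helm 2to3 convert` also accepts `--dry-run`, which is worth using first on a production cluster.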

Problems

Using outdated software causes compatibility issues and forces Component Pack users to run an entire cluster on outdated software. So not only is the cluster itself outdated; it also affects other software running there, which must be kept compatible with those old versions.

And this is just the beginning; imho the most important issue is security. At least since Log4Shell, it should be widely known how risky third-party dependencies can be, especially when they are out of date or even end of life.

What is HCL doing to make its software sustainably free of obsolete dependencies?

When HCL only seems to start testing newer versions on request, months after the old ones become EOL, this violates basic security concepts. Imho, the only solution is a reliable update policy, as described in some docs, but not yet really followed in practice.

Kubernetes 1.21 reaches EOL on 2022-06-28, so at the time of writing, just four months of support are left. A good and reliable approach would be for HCL to start testing at least 1.22, or better yet 1.23, NOW, so that the version is supported at least 1-2 months before 2022-06-28, because K8s admins also need time to plan their cluster upgrades and test them.
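To give an idea of why that lead time matters: for a kubeadm-managed cluster (an assumption; Component Pack deployments vary), a minor-version upgrade roughly follows the sequence below, and every node has to go through it:

```shell
# Sketch of a kubeadm minor-version upgrade on a RHEL/CentOS control plane
# (the package manager and the exact patch version 1.22.6 are assumptions)
yum install -y kubeadm-1.22.6-0 --disableexcludes=kubernetes
kubeadm upgrade plan            # verify the upgrade path and required actions
kubeadm upgrade apply v1.22.6   # upgrade the control-plane components

# Then, for each node in turn:
kubectl drain <node-name> --ignore-daemonsets
yum install -y kubelet-1.22.6-0 kubectl-1.22.6-0 --disableexcludes=kubernetes
systemctl daemon-reload && systemctl restart kubelet
kubectl uncordon <node-name>
```

Multiply this by testing in a staging cluster first, and a 1-2 month window before EOL is not generous.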

From my view, it is completely unacceptable for a production environment with basic security requirements to have to open a ticket in six months because 1.21 is already EOL, and then the whole process starts again, where we have to run unsupported K8s versions for a while until CNX catches up with the updates.

CentOS will be the next big EOL dependency

This is not only the case for K8s; we have the same issues with Helm and soon with CentOS too: there is still only support for CentOS 7, which reaches EOL in June 2024. For the same reason, I already suggested support for Rocky Linux/Alma Linux last year, which was simply rejected without any comment or further information about which OS should be used for environments that are meant to be kept running longer than just the next ~2.5 years.

Will CNX support CentOS 8 Stream? That seems unrealistic to me. Do they prefer Alma Linux over Rocky Linux? Some other fork/distribution? We don't know what is planned; we don't even know whether there is any plan yet for what will be supported after the EOL of CentOS 7, or when such plans will be made. But it is time, since fresh installs on an OS that will last for just 2.5 years are not reliable. And existing installations cannot be migrated within two days of a solution being announced; larger environments in particular need several months to plan and prepare for this.

Recently it seems that at least RHEL 8.5 was added, but we are still waiting to hear which free & open alternative will be supported as a replacement for CentOS 7.

Thank you, @Daniel, for bringing this up. It's a shame for a professional business like HCL to have current versions of its software depend on other software that is way beyond EOL.

Sure, Kubernetes and other modern software have faster life cycles and more breaking changes compared to IBM software like WebSphere and Db2. But: HCL has over 140,000 employees, quite a few of them delivering services around Kubernetes, Docker, modern software development, etc. Thus I consider this lack of development a conscious decision, which I neither like nor understand.

Totally agree.

The interesting part is: I've also proposed similar ideas before (Support More Recent Operating Systems, Support CRI runtimes for Kubernetes), but unlike yours, they were set to a "planning to implement" status (hopefully with v8).

Anyway, I would also appreciate it if HCL communicated a bit more openly here about what is planned for which period. We also need a replacement for CentOS; Rocky Linux would be the way to go in my opinion. And it works with Connections, I did test it. No issues. Zero.

PS: The Component Pack also seems to run on K8s 1.22 without issues, if you adjust a few Ingress rules that use a deprecated API. But precisely because it is so simple, I would expect HCL to provide regularly updated Helm charts.
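To illustrate the kind of adjustment meant here: Ingress objects served from `networking.k8s.io/v1beta1` (removed in Kubernetes 1.22) must be rewritten to `networking.k8s.io/v1`, which renames the backend fields and requires `pathType`. A hedged sketch with placeholder names, not the actual Component Pack chart contents:

```shell
# Hypothetical Ingress rewritten for the networking.k8s.io/v1 API;
# host, service name, and port are placeholders for illustration only
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1   # was: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: cnx.example.com
    http:
      paths:
      - path: /
        pathType: Prefix           # pathType is required in v1
        backend:
          service:                 # v1 nests the backend under "service"
            name: example-service  # was: serviceName: example-service
            port:
              number: 80           # was: servicePort: 80
EOF
```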

Nothing to add from my side, I totally support everything that was said before and agree that it is a shame!

I fully support this.

About a year ago I tried to deploy standalone Elasticsearch for Connections Metrics, and it turned into about a week of searching for outdated software packages and trying to get them all working together. I was completely frustrated by this.

Hi Daniel,

Thank you for your feedback – we certainly could make some improvements here. We are currently in the process of removing the specific Kubernetes versions from our Connections Component Pack 6.5 and 7.0 documentation. The versions change rapidly and should not be hard-coded into the product documentation as they are today. To replace this, we will create a Connections Component Pack system requirements article, which will be linked from the product documentation instead and can more easily be kept up to date.

Today we have tested and support up to Kubernetes 1.21 for both Connections Component Pack 6.5 and 7.0. If you are a user of the Connections Ansible automation in HCL-TECH-SOFTWARE/connections-automation, we recently published the Feb 2022 release, which was updated to support the following:

• Docker v20.10.12
• Kubernetes v1.21.7
• Helm v3.7.2
• HAProxy v2.5.1
• Containerd v1.4.12
• IBM DB2 v11.5.6
• WebSphere Application Server 8.5.5 FixPack 20
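On an existing deployment, a quick way to compare the installed stack against this list is each component's own version command (a sketch; exact output formats differ between versions, and the container-stack commands assume the tools are on the PATH):

```shell
# Print the versions of the container/cluster stack on a node
docker --version           # e.g. "Docker version 20.10.12, ..."
containerd --version
kubectl version --short    # client and server versions
helm version --short       # e.g. "v3.7.2+g..."
haproxy -v | head -n 1
```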

We are in the process of testing Kubernetes 1.22 and will update the new system requirements document and the Ansible repo once we have that completed as well. We will be validating Kubernetes 1.23 next, and we already support containerd, which is a requirement for that version.

For Connections Component Pack 6.0, we are not pursuing updates. To get to the latest supported Kubernetes environments, you will need to migrate to a later version of Connections. We recommend moving to Connections 7.0.

As far as the CentOS strategy goes, we want to get to the point where we can support the Component Pack running on any Intel-based platform that is supported by Kubernetes. Note that we would not be able to support unlimited platforms for the Ansible automation, but as that is community-based, we would gladly merge in contributions to support other platforms as they become available. For the WebSphere side, we are limited to supporting the environments that WebSphere itself supports.

Hope this helps some?

-Bill