
Unix & Linux Stack Exchange

Q&A for users of Linux, FreeBSD and other Unix-like operating systems

Latest Questions

0 votes
2 answers
3178 views
multiple schedules for a single task in a k8s cronjob
**Warning**: k8s greenhorn on this side. I need to run a task that will be set up in a k8s CronJob, and I need it to run every 45 minutes. Having this in the `schedule` does not work: `0/45 * * * *`, because it would run at `X:00`, then `X:45`, then `X+1:00` instead of `X+1:30`. So I might need to set up multiple schedule rules instead:
0,45 0/3 * * *
30 1/3 * * *
15 2/3 * * *
I am wondering if it's possible to set up multiple schedules in a _single_ CronJob definition, or if I will have to set up multiple CronJobs so that each CronJob takes care of one line. https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/cron-job-v1/ **Update**: I just read that it's possible to have more than a single manifest written in a single yaml file, so it might work with 3 manifests... but knowing whether it's possible with a single manifest would be awesome.
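A CronJob's `spec.schedule` is a single cron expression, so a minimal sketch of the multi-manifest route mentioned in the update would be three CronJob objects in one YAML file separated by `---`, one per schedule line (the names, image and command below are placeholders, not taken from the question):
```
apiVersion: batch/v1
kind: CronJob
metadata:
  name: task-every-45-a        # hypothetical name
spec:
  schedule: "0,45 0/3 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: task
            image: busybox                              # placeholder image
            command: ["sh", "-c", "echo run the task"]  # placeholder command
---
apiVersion: batch/v1
kind: CronJob
metadata:
  name: task-every-45-b        # hypothetical name
spec:
  schedule: "30 1/3 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: task
            image: busybox
            command: ["sh", "-c", "echo run the task"]
---
apiVersion: batch/v1
kind: CronJob
metadata:
  name: task-every-45-c        # hypothetical name
spec:
  schedule: "15 2/3 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: task
            image: busybox
            command: ["sh", "-c", "echo run the task"]
```
All three can then be applied together with a single `kubectl apply -f` on that one file.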
eftshift0 (707 rep)
Oct 28, 2021, 03:12 PM • Last activity: Jul 21, 2025, 06:07 PM
0 votes
0 answers
16 views
Upgraded k8 worker node from ubuntu 20.04 to 22.04. DNS resolution inside pods doesn’t work & pods keep crashing/restarting. openebs also crashed
We are facing an issue. Here is our setup: we have 5 clusters (application, ingestion, MySQL, file server, etc.), each running with 2 masters and the rest as workers. The OS upgrade is being done cluster by cluster. The Ubuntu OS upgrade itself went smoothly, but when we bring the pods back up we run into problems, especially with StatefulSet pods. For example, the ingestion cluster has 11 instances (2 masters, the rest workers) running pods such as Kafka (TLS), Cassandra, WSO2 and ProxySQL. On some instances we can bring the pods up, but when we try to bring up pods on other instances in the same cluster, some dependency pods keep restarting and go into CrashLoopBackOff. Note: before upgrading we changed the iptables settings, manually pinning iptables to legacy mode to prevent the switch to nftables. Kindly guide us on this; if needed I will provide additional information.
Ajmeer Hussain (1 rep)
Jul 9, 2025, 04:51 AM • Last activity: Jul 9, 2025, 05:44 AM
0 votes
1 answer
92 views
Upgraded k8 worker node from ubuntu 20.04 to 22.04. DNS resolution/networking inside pods doesn’t work & pods keep crashing/restarting
I have a k8s cluster based on Ubuntu 20.04 with 1 master and 3 worker nodes. I drained one of the worker nodes and put the kubectl, iptables, kubeadm, kubelet & containerd packages on hold. The OS upgrade to 22.04 went smoothly, but after the upgrade, pods (kube-system daemon-sets) kept crashing. One of the issues I found is that DNS resolution is not working inside pods residing on the upgraded node. When I revert back to Ubuntu 20.04, everything works fine. Any help/suggestions please?
Muhammad Saeed (31 rep)
Mar 2, 2025, 02:50 PM • Last activity: Jul 8, 2025, 08:13 PM
0 votes
0 answers
9 views
plugin type=portmap failed (add): unable to create chain CNI-HOSTPORT-SETMARK
Trying to run a `kind` cluster on Linux Mint with `nerdctl`. It was working previously; maybe upgrading some packages caused this. I am able to run other containers like postgres. ## Error
$ sudo kind create cluster
Creating cluster "kind" ...
 ✓ Ensuring node image (kindest/node:v1.32.2) 🖼
 ✗ Preparing nodes 📦
ERROR: failed to create cluster: command "nerdctl run --name kind-control-plane --hostname kind-control-plane --label io.x-k8s.kind.role=control-plane --privileged --security-opt seccomp=unconfined --security-opt apparmor=unconfined --tmpfs /tmp --tmpfs /run --volume /var --volume /lib/modules:/lib/modules:ro -e KIND_EXPERIMENTAL_CONTAINERD_SNAPSHOTTER --detach --tty --label io.x-k8s.kind.cluster=kind --net kind --restart=on-failure:1 --init=false --publish=127.0.0.1:33831:6443/TCP -e KUBECONFIG=/etc/kubernetes/admin.conf kindest/node:v1.32.2@sha256:f226345927d7e348497136874b6d207e0b32cc52154ad8323129352923a3142f" failed with error: exit status 1
Command Output: time="2025-06-28T10:30:22+05:30" level=fatal msg="failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error running createRuntime hook #0: exit status 1, stdout: , stderr: time=\"2025-06-28T10:30:22+05:30\" level=fatal msg=\"failed to call cni.Setup: plugin type=\\\"portmap\\\" failed (add): unable to create chain CNI-HOSTPORT-SETMARK: running [/usr/sbin/ip6tables -t nat -C CNI-HOSTPORT-SETMARK -m comment --comment CNI portfwd masquerade mark -j MARK --set-xmark 0x2000/0x2000 --wait]: exit status 2: Warning: Extension MARK revision 0 not supported, missing kernel module?\\nip6tables v1.8.10 (nf_tables): unknown option \\\"--set-xmark\\\"\\nTry `ip6tables -h' or 'ip6tables --help' for more information.\\n\""
## System Info
$ which docker
/usr/local/bin/docker

$ ls -la /usr/local/bin/docker
lrwxrwxrwx 1 root root 22 Feb 21 00:07 /usr/local/bin/docker -> /usr/local/bin/nerdctl

$ sudo uname -a
Linux HP-ZBook-15-G4 6.8.0-57-generic #59-Ubuntu SMP PREEMPT_DYNAMIC Sat Mar 15 17:40:59 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux

$ sudo nerdctl -v
nerdctl version 2.0.3

$ containerd -v
containerd github.com/containerd/containerd/v2 v2.0.2 c507a0257ea6462fbd6f5ba4f5c74facb04021f4

$ /opt/cni/bin/portmap -v
CNI portmap plugin v1.6.2
CNI protocol versions supported: 0.1.0, 0.2.0, 0.3.0, 0.3.1, 0.4.0, 1.0.0, 1.1.0

$ iptables --version
iptables v1.8.10 (nf_tables)

$ sudo lsmod | grep mark
xt_mark                12288  1
x_tables               65536  10 xt_conntrack,nft_compat,xt_multiport,xt_tcpudp,xt_addrtype,xt_nat,xt_comment,ip_tables,xt_MASQUERADE,xt_mark
PSKP (131 rep)
Jun 28, 2025, 05:38 AM
2 votes
1 answer
4333 views
HELM(error) data: Too long: must have at most 1048576 bytes
I have a huge application with multiple components, each with plentiful config files. The directory tree looks like this:
# ls -1 .
Chart.yaml
configs
templates
values.yaml
# ls -1 configs
component1-config
component2-config
component3-config
component4-config
# ls -1 templates
component1.yaml
component2.yaml
component3.yaml
component4.yaml
_helpers.tpl
secrets.yaml
I am creating ConfigMaps for multiple folders of each component like this:
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-component1-folder1
  namespace: {{ .Values.nSpace }}
data:
{{ $currentScope := . }}
{{ range $path, $_ := .Files.Glob "configs/component1/folder1/**" }}
{{- with $currentScope}}
{{ (base $path) }}: |-
{{ .Files.Get $path | indent 6 }}
{{ end }}
{{ end }}
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-component1-folder2
  namespace: {{ .Values.nSpace }}
data:
{{ $currentScope := . }}
{{ range $path, $_ := .Files.Glob "configs/component1/folder2/**" }}
{{- with $currentScope}}
{{ (base $path) }}: |-
{{ .Files.Get $path | indent 6 }}
{{ end }}
{{ end }}
Some standard Deployments & Services are also included. The problem is that when I run helm install myapp . it throws this "data too long" error, which I want to avoid, as my application may grow:
Error: INSTALLATION FAILED: create: failed to create: Secret "sh.helm.release.v1.myapp.v1" is invalid: data: Too long: must have at most 1048576 bytes
Sollosa (1993 rep)
Mar 3, 2023, 02:39 PM • Last activity: Jun 26, 2025, 06:01 PM
0 votes
1 answer
54 views
OpenShift pod tolerations
I have a question related to OpenShift pod tolerations. According to the documentation, tolerations are defined in pod.spec, not in deployments/deploymentconfigs. Here is the scenario: 3 master nodes with the following taints, and no other nodes:
oc adm taint node master01 want=food:NoSchedule
oc adm taint node master02 want=drink:NoSchedule
oc adm taint node master03 want=money:NoSchedule
After deploying a simple hello-world app, it is no surprise that the pod is stuck in the Pending state, because the pod doesn't have any tolerations:
Name                 READY   STATUS    RESTARTS   AGE
hello-world-1-build  0/1     Pending   0          5s
Now add a toleration that tolerates all taints to hello-world-1-build:
tolerations:
- operator: Exists
Now the build is running, but the application pod is still in the Pending state, because I only added the toleration to the build pod. Adding the same toleration to the hello-world pod will change its state to Running, but if I want to scale to 10 pods, I would have to manually add tolerations to all 10 pods. Is there a better way, without removing the node taints?
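A minimal sketch of the usual way around per-pod edits, assuming a plain Deployment is an option: put the toleration in the pod template (`spec.template.spec.tolerations`), so every replica created by the controller inherits it (the name and image below are placeholders, not from the question):
```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world            # hypothetical name
spec:
  replicas: 10
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      tolerations:
      - operator: Exists       # tolerate every taint, as in the question
      containers:
      - name: hello-world
        image: openshift/hello-openshift   # placeholder image
```
The same pod-template field should also be available in an OpenShift DeploymentConfig, since its template is a regular pod spec.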
Ask and Learn (1895 rep)
Oct 18, 2022, 10:57 AM • Last activity: Jun 24, 2025, 11:43 AM
1 vote
1 answer
6354 views
How to allow ServiceAccount list namespaces it has access to within a cluster?
I have a cluster with multiple namespaces. Let's call them ns1 and ns2. I also have multiple service accounts, let's call them sa1 and sa2, all in one namespace, sa-ns. Both users can access all resources within both namespaces; however, they cannot list the namespaces they are part of.
kubectl get ns --as=sa1
returns:
Error from server (Forbidden): namespaces is forbidden: User "sa1" cannot list resource "namespaces" in API group "" at the cluster scope
It works only if I manually specify which namespace I want to list:
kubectl get ns ns1 --as=sa1
NAME           STATUS   AGE
ns1   Active   6d6h
I need both users sa1 and sa2 to be able to list all namespaces within the cluster that they have access to, in this case ns1 and ns2. This behavior also probably won't allow me to list namespaces and their resources in Lens dashboards: from the namespace list I can only list the namespace sa-ns that the users sa1 & sa2 are part of, and the dashboards are empty. I tried to add a namespace the user does in fact have access to via the ACCESSIBLE NAMESPACES feature in Lens, but that doesn't work either; I still don't see anything, only blank dashboards.
ServiceAccount:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: sa1
  namespace: sa-ns
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: sa2
  namespace: sa-ns
Role:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: admin-role
  namespace: ns1
rules:
- apiGroups:
  - "*"
  resources:
  - "*"
  verbs:
  - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: admin-role
  namespace: ns2
rules:
- apiGroups:
  - "*"
  resources:
  - "*"
  verbs:
  - "*"
RoleBinding:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: admin-role-binding
  namespace: ns1
roleRef:
  kind: Role
  name: admin-role
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: sa1
  namespace: sa-ns
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: admin-role-binding
  namespace: ns2
roleRef:
  kind: Role
  name: admin-role
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: sa2
  namespace: sa-ns
I tried to use a ClusterRole instead of a Role but nothing has changed.
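For context, a minimal sketch of the ClusterRole route the question alludes to: namespaces are cluster-scoped, so listing them requires a ClusterRole granted through a ClusterRoleBinding rather than a namespaced Role (the resource names below are placeholders):
```
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: namespace-lister            # hypothetical name
rules:
- apiGroups: [""]
  resources: ["namespaces"]
  verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: namespace-lister-binding    # hypothetical name
roleRef:
  kind: ClusterRole
  name: namespace-lister
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: sa1
  namespace: sa-ns
- kind: ServiceAccount
  name: sa2
  namespace: sa-ns
```
Note that this grants listing of every namespace in the cluster; stock Kubernetes RBAC cannot make `kubectl get ns` return only a filtered subset.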
JohnyFailLab (163 rep)
Apr 18, 2022, 06:52 PM • Last activity: Jun 15, 2025, 04:04 PM
1 vote
1 answer
2042 views
How to remove "timestamp" date from the Fluent-Bit logs?
I'm testing Fluent-bit for my local cluster which has a CRI runtime interface and I'm sending logs to a slack channel. But the problem is that Fluent-Bit is assigning a "timestamp" in the log and I'm not able to remove it. Maybe someone knows a solution? Here is the ConfigMap of my Fluent-Bit:
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluent-bit-config
  namespace: logging1
  labels:
    k8s-app: fluent-bit
data:
  # Configuration files: server, input, filters and output
  # ======================================================
  fluent-bit.conf: |
    [SERVICE]
        Flush         2
        Log_Level     info
        Daemon        off
        Parsers_File  parsers.conf
        HTTP_Server   On
        HTTP_Listen   0.0.0.0
        HTTP_Port     2020
 
    @INCLUDE input-kubernetes.conf
    @INCLUDE filter-kubernetes.conf
    @INCLUDE output-syslog.conf
 
  input-kubernetes.conf: |
    [INPUT]
        Name              tail
        Tag               kube.*
        Path              /var/log/containers/*
        Parser            cri
        DB                /var/log/flb_kube.db
        Mem_Buf_Limit     5MB
        Skip_Long_Lines   On

  filter-kubernetes.conf: |

  output-syslog.conf: |
    [OUTPUT]
        Name               slack
        Match              *
        webhook            [LINK]
        
 
  parsers.conf: |
    [PARSER]
        Name          cc
        Format        regex
        Format        cri
        Regex         ^(?<time>[^ ]+) (?<stream>stdout|stderr) (?<logtag>[^ ]*) (?<message>.*)$
        Time_Key      time
        Time_Format   %Y-%m-%dT%H:%M:%S.%L%z
Also, here is the raw log coming from my app:
2023-04-12T16:09:02.016483996Z stderr F 10.244.0.1 - - [12/Apr/2023 16:09:02] "GET / HTTP/1.1" 200 -
And this is the log that is sent to Slack:
["timestamp": 1681315742.016981904, {"log"=>"2023-04-12T16:09:02.016483996Z stderr F 10.244.0.1 - - [12/Apr/2023 16:09:02] "GET / HTTP/1.1" 200 -"}]
Gevegeve (25 rep)
Apr 12, 2023, 05:27 PM • Last activity: Jun 11, 2025, 09:00 PM
0 votes
0 answers
386 views
invalid capacity 0 on image filesystem
Today, when I added a new Kubernetes (v1.30.0) node into the cluster, the new node showed the following error:
invalid capacity 0 on image filesystem
On the new node I am using containerd. It seems it could not detect the image filesystem size. Here is the output of the lsblk command:
root@iZhp33cq6mvrjo8mzatgrmZ:/etc/containerd# lsblk
NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
vda     254:0    0   40G  0 disk
├─vda1  254:1    0    1M  0 part
├─vda2  254:2    0  191M  0 part /boot/efi
└─vda3  254:3    0 39.8G  0 part /
This is the containerd version info:
root@iZhp33cq6mvrjo8mzatgrmZ:/etc/containerd# containerd --version
containerd github.com/containerd/containerd 1.6.20~ds1 1.6.20~ds1-1+b1
Am I missing some configuration on the new nodes? The new node's pods are always in the Waiting state. What should I do to fix this issue?
Dolphin (791 rep)
Nov 24, 2024, 11:09 AM • Last activity: Jun 9, 2025, 01:06 PM
0 votes
0 answers
29 views
Is there a way to get the PID information for the host network namespace?
I am writing a script that gathers information about containers running within Kubernetes that utilize network namespaces, to write a CSV with the following information: "Network Namespace", "Network IP", "Protocol", "Local address", "Remote address", "Status", "PID", "Program name". The problem I am running into is that I can easily gather the PIDs running within a network namespace using "ip netns pids ", but I can't figure out a way to get the same information for the host itself. Any insight would be helpful!
Mikal.Furnell (1 rep)
May 28, 2025, 03:49 PM
0 votes
1 answer
65 views
microk8s on rocky linux 9 problem
I installed MicroK8s on Rocky Linux 9.5. To do that, I did the following.
# installing snapd
sudo dnf install epel-release -y
sudo dnf install snapd -y
sudo systemctl enable --now snapd.socket
sudo ln -s /var/lib/snapd/snap /snap

snap install microk8s --classic --channel=1.32/stable
And when I run the command below to get the status of my deployment,
microk8s inspect
I get the following:
Inspecting system
Inspecting Certificates
Inspecting services
  Service snap.microk8s.daemon-cluster-agent is running
  Service snap.microk8s.daemon-containerd is running
  Service snap.microk8s.daemon-kubelite is running
  Service snap.microk8s.daemon-k8s-dqlite is running
  Service snap.microk8s.daemon-apiserver-kicker is running
  Copy service arguments to the final report tarball
Inspecting AppArmor configuration
Gathering system information
  Copy processes list to the final report tarball
  Copy disk usage information to the final report tarball
  Copy memory usage information to the final report tarball
  Copy server uptime to the final report tarball
  Copy openSSL information to the final report tarball
  Copy snap list to the final report tarball
  Copy VM name (or none) to the final report tarball
  Copy current linux distribution to the final report tarball
  Copy asnycio usage and limits to the final report tarball
  Copy inotify max_user_instances and max_user_watches to the final report tarball
  Copy network configuration to the final report tarball
Inspecting kubernetes cluster
  Inspect kubernetes cluster
Inspecting dqlite
  Inspect dqlite
cp: cannot stat '/var/snap/microk8s/8148/var/kubernetes/backend/localnode.yaml': No such file or directory

Building the report tarball
  Report tarball is at /var/snap/microk8s/8148/inspection-report-20250519_194506.tar.gz
What is the problem? Is it the OS? I need the OS to be Rocky Linux.
Gmosy Gnaq (101 rep)
May 19, 2025, 04:24 PM • Last activity: May 21, 2025, 01:23 PM
0 votes
0 answers
31 views
WSO2 API Manager 4.4.0 – How to Properly Sync APIs Across Multiple All-in-One Pods in Kubernetes?
I'm deploying **WSO2 API Manager 4.4.0** on Kubernetes with the goal of scaling horizontally. My current setup is: - 5 replicas of **all-in-one pods**, each running the full WSO2 API Manager stack (Control Plane + Gateway). - Shared **MySQL** instances for both WSO2AM_DB and WSO2_SHARED_DB. - Two **Nginx ingresses**: - am.wso2.com → Control Plane (Publisher, Devportal) - gw.wso2.com → Gateway --- ## 🔥 The Problem APIs created in one pod (via the Publisher) are only deployed to that pod’s internal Gateway. The other 4 pods don’t receive the new APIs. This leads to inconsistent behavior, especially since users hit different pods due to load balancing. --- ## 🧪 What I’ve Tried 1. **DBRetriever-based polling** (seems deprecated method): - Initially configured the [apim.sync_runtime_artifacts.gateway] block with enable = true. - This did **not** sync APIs across pods as expected. 2. **Switched to Event Hub-based sync**: - Added a dedicated **Key Manager/Event Hub pod**. - Updated [apim.event_hub] and related sections (e.g., publish.url_group, event_listening_endpoints). - The **real issue** appears to be **connectivity between Control Plane pods and the Event Hub on port 9611** — the port does not respond, and API deployment events aren't published. --- ## ❓ My Question What is the **correct, working method** to synchronize API artifacts between multiple WSO2 APIM 4.4.0 all-in-one pods on Kubernetes? If you have a working pattern or configuration that **reliably syncs APIs across pods**, I’d really appreciate your insights.
Arapa (1 rep)
May 5, 2025, 09:14 AM
2 votes
1 answer
15632 views
kubeadm init missing optional cgroups
It seems that since I've updated to the newest kernel (Ubuntu Server 22), I get this message during kubeadm init. How do I get rid of the following error?
user@kubemaster:~$ sudo kubeadm init
[init] Using Kubernetes version: v1.24.2
[preflight] Running pre-flight checks
[WARNING SystemVerification]...
spaceman117X (492 rep)
Jun 24, 2022, 09:21 AM • Last activity: Apr 25, 2025, 05:08 PM
0 votes
0 answers
29 views
Unexpected network namespace inode when accessing /var/run/netns/ from pod in host network namespace
I'm running a Kubernetes cluster with RKE2 v1.30.5+rke2r1 on Linux nixos 6.6.56 amd64, using Cilium CNI. Here's the setup: I have two pods (yaml manifests at the bottom): Pod A (xfrm-pod) is running in the default network namespace. Pod B (charon-pod) is running in the host network namespace (hostNetwork: true). On Pod A, I check the inode of its network namespace using:
readlink /proc/$$/ns/net
This gives the expected value, e.g. net:. Then I mount /var/run/netns on Pod B (e.g. to /netns) and run ls -li /netns; the inode for Pod A's network namespace is a strange value, like 53587. Permissions show this is the only file there is write access to (I can delete it). However, when I run ls -li /var/run/netns directly on the host, the inode and file name are what I expect: the correct namespace symlink and inode number. Why is the inode different inside the host-network pod? And why does it appear writable, unlike other netns files? Any idea why this happens, and how I can get consistent behavior inside host network pods? Pod yaml manifests (fetched with kubectl get pod -o yaml since I create them in a controller in Go): Pod A:
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2025-04-24T14:57:55Z"
  name: xfrm-pod
  namespace: ims
  resourceVersion: "7200524"
  uid: dd08aa88-460f-4bdd-8019-82a433682825
spec:
  containers:
  - command:
    - bash
    - -c
    - while true; do sleep 1000; done
    image: ubuntu:latest
    imagePullPolicy: Always
    name: xfrm-container
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /netns
      name: netns-dir
      readOnly: true
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-cszxx
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  nodeName: nixos
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext:
    sysctls:
    - name: net.ipv4.ip_forward
      value: "1"
    - name: net.ipv4.conf.all.rp_filter
      value: "0"
    - name: net.ipv4.conf.default.rp_filter
      value: "0"
    - name: net.ipv4.conf.all.arp_filter
      value: "1"
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - hostPath:
      path: /var/run/netns/
      type: Directory
    name: netns-dir
  - name: kube-api-access-cszxx
    projected:
      defaultMode: 420
      sources:
      - serviceAccountToken:
          expirationSeconds: 3607
          path: token
      - configMap:
          items:
          - key: ca.crt
            path: ca.crt
          name: kube-root-ca.crt
      - downwardAPI:
          items:
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
            path: namespace
Pod B:
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2025-04-24T14:57:45Z"
  labels:
    ipserviced: "true"
  name: charon-pod
  namespace: ims
  resourceVersion: "7200483"
  uid: 1c5542ba-16c8-4105-9556-7519ea50edef
spec:
  containers:
  - image: someimagewithstrongswan
    imagePullPolicy: IfNotPresent
    name: charondaemon
    resources: {}
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        add:
        - NET_ADMIN
        - NET_RAW
        - NET_BIND_SERVICE
        drop:
        - ALL
      seccompProfile:
        type: RuntimeDefault
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/
      name: charon-volume
    - mountPath: /etc/swanctl
      name: charon-conf
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-jjkpm
      readOnly: true
  - image: someimagewithswanctl
    imagePullPolicy: Always
    name: restctl
    resources: {}
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        add:
        - NET_ADMIN
        drop:
        - ALL
      seccompProfile:
        type: RuntimeDefault
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/
      name: charon-volume
    - mountPath: /etc/swanctl
      name: charon-conf
    - mountPath: /netns
      name: netns-dir
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-jjkpm
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  hostIPC: true
  hostNetwork: true
  hostPID: true
  initContainers:
  - command:
    - sh
    - -c
    - "echo 'someconfig'
      > /etc/swanctl/swanctl.conf"
    image: busybox:latest
    imagePullPolicy: Always
    name: create-conf
    resources: {}
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
        - ALL
      seccompProfile:
        type: RuntimeDefault
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /etc/swanctl
      name: charon-conf
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-jjkpm
      readOnly: true
  nodeName: nixos
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - emptyDir: {}
    name: charon-volume
  - emptyDir: {}
    name: charon-conf
  - hostPath:
      path: /var/run/netns/
      type: Directory
    name: netns-dir
  - name: kube-api-access-jjkpm
    projected:
      defaultMode: 420
      sources:
      - serviceAccountToken:
          expirationSeconds: 3607
          path: token
      - configMap:
          items:
          - key: ca.crt
            path: ca.crt
          name: kube-root-ca.crt
      - downwardAPI:
          items:
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
            path: namespace
rrekaF (1 rep)
Apr 25, 2025, 07:07 AM
-1 votes
1 answer
51 views
Is a multi-machine Vagrant a good choice to simulate a Kubernetes cluster?
I am in front of a book for teaching myself Kubernetes. It has many chapters about handling a Kubernetes cluster, and it urges the reader to create an account with a cloud provider if they can, or else to attempt creating a cluster from a few Raspberry Pis. But I don't want, or cannot afford, these choices. I have my own home computer, that's all. Is there anything that prevents me from creating a multi-machine Vagrant setup to create all the computers my book could talk about? I believe this could work... My question is simple and naive, but if there's an obstacle or a big difficulty that I will surely face, I'd like to know it immediately, before choosing the wrong way. Thanks!
Marc Le Bihan (2353 rep)
Apr 17, 2025, 04:16 PM • Last activity: Apr 17, 2025, 04:48 PM
2 votes
1 answer
97 views
How does one run cron jobs in one container that does stuff in another?
I am on Kubernetes. I need to be able to write and run cron jobs in a pod. I can't use the CronJob workload. The solution I found is to run cron jobs from a *cron* sidecar container. I write cron jobs to /etc/crontab and run them with chroot. Although I can run chroot commands from the command line, they don't seem to work as cron jobs. Example: This works when I run it in the shell.
# chroot /proc/151/root bash -c 'echo $(date) this is a command executed in cron sidecar >> /proc/151/root/var/log/cronjob.log'
cronjob.log:
Mon Apr 14 15:44:34 UTC 2025 this is a command executed in cron sidecar
It doesn't work when I put this line in /etc/crontab in the cron sidecar container.
* * * * * root chroot /proc/151/root bash -c 'echo $(date) this is a cron job executed by cron sidecar >> /proc/151/root/var/log/cronjob.log'
crond is running.
# pgrep cron
347
What's wrong with this cron job? How can I make this work?
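For readers who haven't seen this pattern: reaching another container's root filesystem through /proc/<PID>/root only works when the containers can see each other's processes, i.e. when the pod shares a PID namespace. A minimal sketch of such a pod, assuming `shareProcessNamespace` is how that is done here (image names and the crond invocation are placeholders, not from the question):
```
apiVersion: v1
kind: Pod
metadata:
  name: app-with-cron-sidecar        # hypothetical name
spec:
  shareProcessNamespace: true        # lets the sidecar reach /proc/<PID>/root of the app container
  containers:
  - name: app
    image: my-app:latest             # placeholder image
  - name: cron-sidecar
    image: alpine:3.19               # placeholder; assumes the image ships busybox crond
    command: ["crond", "-f", "-l", "2"]   # keep crond in the foreground
```
With the PID namespace shared, the PID used in the chroot (151 in the question) is visible from the sidecar; the other usual difference between an interactive shell and cron is cron's minimal environment, so jobs typically need absolute paths or an explicit PATH/SHELL set in the crontab.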
Mooch (41 rep)
Apr 14, 2025, 06:09 PM • Last activity: Apr 14, 2025, 06:58 PM
2 votes
1 answer
3430 views
E: Package 'python-minimal' has no installation candidate
I am trying to deploy a k8s cluster on AWS using Kubespray. Step 1: I have installed the following dependencies:
apt-get update  
apt-get install software-properties-common  
apt-add-repository ppa:ansible/ansible  
apt-get update  
apt-get install ansible  
apt-get update  
apt-get -y upgrade  
apt-get install python-pip  
pip install jinja2  
pip install netaddr
Step 2: cloned the kubespray git repo:
git clone https://github.com/xenonstack/kubespray.git
Step 3: customized the inventory file. While deploying the cluster using Ansible it throws the following error:
E: Package 'python-minimal' has no installation candidate
How can I fix this?
swetha panchakatla (21 rep)
Jul 18, 2023, 06:53 AM • Last activity: Apr 12, 2025, 10:03 AM
0 votes
0 answers
49 views
creating VM snapshot using Virsh inside Kubevirt
I have a running virtual machine inside KubeVirt. Inside the virt-launcher pod of this VM I ran virsh to create a snapshot:
virsh snapshot-create-as \
  --domain default_my-test-vm \
  --diskspec vda,file=/tmp,snapshot=external \
  --memspec file=/tmp,snapshot=external \
  --atomic
error: internal error: missing storage backend for 'file' storage
I think the error is due to the fact that I'm not using virsh as root. My goal is to create a snapshot of the VM running inside KubeVirt. I also tried the QEMU monitor approach from the worker node but couldn't connect. I would appreciate any help with this.
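As an aside, KubeVirt also exposes snapshots through the Kubernetes API, which avoids running virsh inside the virt-launcher pod at all. A hedged sketch of a VirtualMachineSnapshot object, assuming the snapshot.kubevirt.io API is available in the cluster (the API version, object name and VM name below are assumptions to verify against the installed CRDs):
```
apiVersion: snapshot.kubevirt.io/v1beta1   # may be v1alpha1 on older KubeVirt releases
kind: VirtualMachineSnapshot
metadata:
  name: my-test-vm-snapshot                # hypothetical name
  namespace: default
spec:
  source:
    apiGroup: kubevirt.io
    kind: VirtualMachine
    name: my-test-vm                       # assumed VM name behind the libvirt domain "default_my-test-vm"
```
This path typically requires CSI VolumeSnapshot support for the storage backing the VM's disks.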
reda bouzad (1 rep)
Mar 18, 2025, 12:58 PM
0 votes
2 answers
90 views
Why are my network connections being rejected and why does the ping command between servers not work?
Cluster information:
kubectl version
Client Version: v1.29.14
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.29.14
Cloud being used: bare-metal
Installation method:
Host OS: AlmaLinux 8
CNI and version: Flannel ver: 0.26.4
CRI and version: cri-dockerd ver: 0.3.16
I have a master node and created my first worker node. Before executing the kubeadm join command on the worker I could ping from the worker to the master and vice versa without trouble; now that I have executed the kubeadm join ... command I cannot ping between them anymore, and I get this error:
[root@worker-1 ~]# kubectl get nodes -o wide
E0308 19:38:31.027307   59324 memcache.go:265] couldn't get current server API group list: Get "https://198.58.126.88:6443/api?timeout=32s ": dial tcp 198.58.126.88:6443: connect: connection refused
E0308 19:38:32.051145   59324 memcache.go:265] couldn't get current server API group list: Get "https://198.58.126.88:6443/api?timeout=32s ": dial tcp 198.58.126.88:6443: connect: connection refused
E0308 19:38:33.075350   59324 memcache.go:265] couldn't get current server API group list: Get "https://198.58.126.88:6443/api?timeout=32s ": dial tcp 198.58.126.88:6443: connect: connection refused
E0308 19:38:34.099160   59324 memcache.go:265] couldn't get current server API group list: Get "https://198.58.126.88:6443/api?timeout=32s ": dial tcp 198.58.126.88:6443: connect: connection refused
E0308 19:38:35.123011   59324 memcache.go:265] couldn't get current server API group list: Get "https://198.58.126.88:6443/api?timeout=32s ": dial tcp 198.58.126.88:6443: connect: connection refused
The connection to the server 198.58.126.88:6443 was refused - did you specify the right host or port?
Ping from the worker node to the master node:
[root@worker-1 ~]# ping 198.58.126.88
PING 198.58.126.88 (198.58.126.88) 56(84) bytes of data.
From 198.58.126.88 icmp_seq=1 Destination Port Unreachable
From 198.58.126.88 icmp_seq=2 Destination Port Unreachable
From 198.58.126.88 icmp_seq=3 Destination Port Unreachable
If I run this:
[root@worker-1 ~]# iptables -F && sudo iptables -t nat -F && sudo iptables -t mangle -F && sudo iptables -X
The ping command starts to work:
[root@worker-1 ~]# ping 198.58.126.88
PING 198.58.126.88 (198.58.126.88) 56(84) bytes of data.
64 bytes from 198.58.126.88: icmp_seq=1 ttl=64 time=0.030 ms
64 bytes from 198.58.126.88: icmp_seq=2 ttl=64 time=0.025 ms
(The ping command works with the IPv6 address, it just fails with the IPv4 address) But after about one minute it gets blocked again:
[root@worker-1 ~]# ping 198.58.126.88
PING 198.58.126.88 (198.58.126.88) 56(84) bytes of data.
From 198.58.126.88 icmp_seq=1 Destination Port Unreachable
From 198.58.126.88 icmp_seq=2 Destination Port Unreachable
[root@worker-1 ~]# cat /etc/sysctl.conf
# sysctl settings are defined through files in
# /usr/lib/sysctl.d/, /run/sysctl.d/, and /etc/sysctl.d/.
#
# Vendors settings live in /usr/lib/sysctl.d/.
# To override a whole file, create a new file with the same in
# /etc/sysctl.d/ and put new settings there. To override
# only specific settings, add a file with a lexically later
# name in /etc/sysctl.d/ and put new settings there.
#
# For more information, see sysctl.conf(5) and sysctl.d(5).
net.ipv6.conf.default.forwarding=1
net.ipv6.conf.all.forwarding=1
[root@worker-1 ~]# cd /etc/systctl.d/
-bash: cd: /etc/systctl.d/: No such file or directory
The port 6443/TCP is closed in the worker node and I have tried to open it without success:
nmap 172.235.135.144 -p 6443                                                                                            ✔  2.7.4   06:19:47
Starting Nmap 7.95 ( https://nmap.org  ) at 2025-03-11 16:22 -05
Nmap scan report for 172-235-135-144.ip.linodeusercontent.com (172.235.135.144)
Host is up (0.072s latency).

PORT     STATE  SERVICE
6443/tcp closed sun-sr-https

Nmap done: 1 IP address (1 host up) scanned in 0.26 seconds
master node:
[root@master ~]# iptables -nvL
Chain INPUT (policy ACCEPT 1312K packets, 202M bytes)
 pkts bytes target     prot opt in     out     source               destination
1301K  201M KUBE-FIREWALL  all  --  *      *       0.0.0.0/0            0.0.0.0/0
1311K  202M KUBE-IPVS-FILTER  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kubernetes ipvs access filter */
1311K  202M KUBE-PROXY-FIREWALL  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kube-proxy firewall rules */
1311K  202M KUBE-NODE-PORT  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kubernetes health check rules */
   40  3520 ACCEPT     icmp --  *      *       198.58.126.88        0.0.0.0/0
    0     0 ACCEPT     icmp --  *      *       172.233.172.101      0.0.0.0/0

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
  950  181K KUBE-PROXY-FIREWALL  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kube-proxy firewall rules */
  950  181K KUBE-FORWARD  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kubernetes forwarding rules */
  212 12626 DOCKER-ISOLATION-STAGE-1  all  --  *      *       0.0.0.0/0            0.0.0.0/0
  212 12626 DOCKER-USER  all  --  *      *       0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  *      docker0  0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED
    0     0 DOCKER     all  --  *      docker0  0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  docker0 !docker0  0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  docker0 docker0  0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  *      br-09363fc9af47  0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED
   20  1068 DOCKER     all  --  *      br-09363fc9af47  0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  br-09363fc9af47 !br-09363fc9af47  0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  br-09363fc9af47 br-09363fc9af47  0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  *      br-05a2ea8c281b  0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED
    4   184 DOCKER     all  --  *      br-05a2ea8c281b  0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  br-05a2ea8c281b !br-05a2ea8c281b  0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  br-05a2ea8c281b br-05a2ea8c281b  0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  *      br-032fd1b78367  0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED
    0     0 DOCKER     all  --  *      br-032fd1b78367  0.0.0.0/0            0.0.0.0/0
    9   504 ACCEPT     all  --  br-032fd1b78367 !br-032fd1b78367  0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  br-032fd1b78367 br-032fd1b78367  0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  *      br-ae1997e801f3  0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED
    0     0 DOCKER     all  --  *      br-ae1997e801f3  0.0.0.0/0            0.0.0.0/0
  132  7920 ACCEPT     all  --  br-ae1997e801f3 !br-ae1997e801f3  0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  br-ae1997e801f3 br-ae1997e801f3  0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  *      br-9f6d34f7e48a  0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED
   14   824 DOCKER     all  --  *      br-9f6d34f7e48a  0.0.0.0/0            0.0.0.0/0
    4   240 ACCEPT     all  --  br-9f6d34f7e48a !br-9f6d34f7e48a  0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  br-9f6d34f7e48a br-9f6d34f7e48a  0.0.0.0/0            0.0.0.0/0
   29  1886 FLANNEL-FWD  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* flanneld forward */

Chain OUTPUT (policy ACCEPT 1309K packets, 288M bytes)
 pkts bytes target     prot opt in     out     source               destination
1298K  286M KUBE-FIREWALL  all  --  *      *       0.0.0.0/0            0.0.0.0/0
1308K  288M KUBE-IPVS-OUT-FILTER  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kubernetes ipvs access filter */

Chain DOCKER (6 references)
 pkts bytes target     prot opt in     out     source               destination
   14   824 ACCEPT     tcp  --  !br-9f6d34f7e48a br-9f6d34f7e48a  0.0.0.0/0            172.24.0.2           tcp dpt:3001
    0     0 ACCEPT     tcp  --  !br-ae1997e801f3 br-ae1997e801f3  0.0.0.0/0            172.21.0.2           tcp dpt:3000
    4   184 ACCEPT     tcp  --  !br-05a2ea8c281b br-05a2ea8c281b  0.0.0.0/0            172.22.0.2           tcp dpt:4443
   12   700 ACCEPT     tcp  --  !br-09363fc9af47 br-09363fc9af47  0.0.0.0/0            172.19.0.2           tcp dpt:4443
    8   368 ACCEPT     tcp  --  !br-09363fc9af47 br-09363fc9af47  0.0.0.0/0            172.19.0.3           tcp dpt:443

Chain DOCKER-ISOLATION-STAGE-1 (1 references)
 pkts bytes target     prot opt in     out     source               destination
  212 12626 RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0

Chain DOCKER-ISOLATION-STAGE-2 (0 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0

Chain FLANNEL-FWD (1 references)
 pkts bytes target     prot opt in     out     source               destination
   29  1886 ACCEPT     all  --  *      *       10.244.0.0/16        0.0.0.0/0            /* flanneld forward */
    0     0 ACCEPT     all  --  *      *       0.0.0.0/0            10.244.0.0/16        /* flanneld forward */

Chain DOCKER-USER (1 references)
 pkts bytes target     prot opt in     out     source               destination
  212 12626 RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0

Chain KUBE-FORWARD (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 ACCEPT     all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kubernetes forwarding rules */
    0     0 ACCEPT     all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kubernetes forwarding conntrack rule */ ctstate RELATED,ESTABLISHED

Chain KUBE-NODE-PORT (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 ACCEPT     all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* Kubernetes health check node port */ match-set KUBE-HEALTH-CHECK-NODE-PORT dst

Chain KUBE-PROXY-FIREWALL (2 references)
 pkts bytes target     prot opt in     out     source               destination

Chain KUBE-SOURCE-RANGES-FIREWALL (0 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 DROP       all  --  *      *       0.0.0.0/0            0.0.0.0/0

Chain KUBE-IPVS-FILTER (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0            match-set KUBE-LOAD-BALANCER dst,dst
    2   104 RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0            match-set KUBE-CLUSTER-IP dst,dst
    0     0 RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0            match-set KUBE-EXTERNAL-IP dst,dst
    0     0 RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0            match-set KUBE-EXTERNAL-IP-LOCAL dst,dst
    0     0 RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0            match-set KUBE-HEALTH-CHECK-NODE-PORT dst
    0     0 REJECT     all  --  *      *       0.0.0.0/0            0.0.0.0/0            ctstate NEW match-set KUBE-IPVS-IPS dst reject-with icmp-port-unreachable

Chain KUBE-IPVS-OUT-FILTER (1 references)
 pkts bytes target     prot opt in     out     source               destination

Chain KUBE-FIREWALL (2 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 DROP       all  --  *      *      !127.0.0.0/8          127.0.0.0/8          /* block incoming localnet connections */ ! ctstate RELATED,ESTABLISHED,DNAT

Chain KUBE-KUBELET-CANARY (0 references)
 pkts bytes target     prot opt in     out     source               destination
worker node:
[root@worker-1 ~]# iptables -nvL
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
18469 1430K KUBE-IPVS-FILTER  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kubernetes ipvs access filter */
10534  954K KUBE-PROXY-FIREWALL  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kube-proxy firewall rules */
10534  954K KUBE-NODE-PORT  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kubernetes health check rules */
10767 1115K KUBE-FIREWALL  all  --  *      *       0.0.0.0/0            0.0.0.0/0

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
    0     0 KUBE-PROXY-FIREWALL  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kube-proxy firewall rules */
    0     0 KUBE-FORWARD  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kubernetes forwarding rules */
    0     0 DOCKER-USER  all  --  *      *       0.0.0.0/0            0.0.0.0/0
    0     0 DOCKER-ISOLATION-STAGE-1  all  --  *      *       0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  *      docker0  0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED
    0     0 DOCKER     all  --  *      docker0  0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  docker0 !docker0  0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  docker0 docker0  0.0.0.0/0            0.0.0.0/0

Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
18359 1696K KUBE-IPVS-OUT-FILTER  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kubernetes ipvs access filter */
18605 1739K KUBE-FIREWALL  all  --  *      *       0.0.0.0/0            0.0.0.0/0

Chain DOCKER (1 references)
 pkts bytes target     prot opt in     out     source               destination

Chain DOCKER-ISOLATION-STAGE-1 (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 DOCKER-ISOLATION-STAGE-2  all  --  docker0 !docker0  0.0.0.0/0            0.0.0.0/0
    0     0 RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0

Chain DOCKER-ISOLATION-STAGE-2 (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 DROP       all  --  *      docker0  0.0.0.0/0            0.0.0.0/0
    0     0 RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0

Chain DOCKER-USER (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0

Chain KUBE-FIREWALL (2 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 DROP       all  --  *      *      !127.0.0.0/8          127.0.0.0/8          /* block incoming localnet connections */ ! ctstate RELATED,ESTABLISHED,DNAT

Chain KUBE-KUBELET-CANARY (0 references)
 pkts bytes target     prot opt in     out     source               destination

Chain KUBE-FORWARD (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 ACCEPT     all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kubernetes forwarding rules */
    0     0 ACCEPT     all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kubernetes forwarding conntrack rule */ ctstate RELATED,ESTABLISHED

Chain KUBE-NODE-PORT (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 ACCEPT     all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* Kubernetes health check node port */ match-set KUBE-HEALTH-CHECK-NODE-PORT dst

Chain KUBE-PROXY-FIREWALL (2 references)
 pkts bytes target     prot opt in     out     source               destination

Chain KUBE-SOURCE-RANGES-FIREWALL (0 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 DROP       all  --  *      *       0.0.0.0/0            0.0.0.0/0

Chain KUBE-IPVS-FILTER (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0            match-set KUBE-LOAD-BALANCER dst,dst
    0     0 RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0            match-set KUBE-CLUSTER-IP dst,dst
    0     0 RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0            match-set KUBE-EXTERNAL-IP dst,dst
    0     0 RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0            match-set KUBE-EXTERNAL-IP-LOCAL dst,dst
    0     0 RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0            match-set KUBE-HEALTH-CHECK-NODE-PORT dst
   45  2700 REJECT     all  --  *      *       0.0.0.0/0            0.0.0.0/0            ctstate NEW match-set KUBE-IPVS-IPS dst reject-with icmp-port-unreachable

Chain KUBE-IPVS-OUT-FILTER (1 references)
 pkts bytes target     prot opt in     out     source               destination
If I run iptables -F INPUT on the worker, the ping command starts to work again:
[root@worker-1 ~]# iptables -F INPUT
[root@worker-1 ~]# ping 198.58.126.88
PING 198.58.126.88 (198.58.126.88) 56(84) bytes of data.
64 bytes from 198.58.126.88: icmp_seq=1 ttl=64 time=0.054 ms
64 bytes from 198.58.126.88: icmp_seq=2 ttl=64 time=0.043 ms
64 bytes from 198.58.126.88: icmp_seq=3 ttl=64 time=0.037 ms
64 bytes from 198.58.126.88: icmp_seq=4 ttl=64 time=0.039 ms
64 bytes from 198.58.126.88: icmp_seq=5 ttl=64 time=0.023 ms
64 bytes from 198.58.126.88: icmp_seq=6 ttl=64 time=0.022 ms
64 bytes from 198.58.126.88: icmp_seq=7 ttl=64 time=0.070 ms
64 bytes from 198.58.126.88: icmp_seq=8 ttl=64 time=0.072 ms
^C
--- 198.58.126.88 ping statistics ---
8 packets transmitted, 8 received, 0% packet loss, time 7197ms
rtt min/avg/max/mdev = 0.022/0.045/0.072/0.017 ms
strace command from worker:
[root@worker-1 ~]# iptables -F INPUT
[root@worker-1 ~]# strace -eopenat kubectl version
openat(AT_FDCWD, "/sys/kernel/mm/transparent_hugepage/hpage_pmd_size", O_RDONLY) = 3
--- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=20723, si_uid=0} ---
openat(AT_FDCWD, "/usr/bin/kubectl", O_RDONLY|O_CLOEXEC) = 3
--- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=20723, si_uid=0} ---
--- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=20723, si_uid=0} ---
--- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=20723, si_uid=0} ---
--- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=20723, si_uid=0} ---
--- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=20723, si_uid=0} ---
--- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=20723, si_uid=0} ---
--- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=20723, si_uid=0} ---
--- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=20723, si_uid=0} ---
--- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=20723, si_uid=0} ---
--- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=20723, si_uid=0} ---
--- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=20723, si_uid=0} ---
openat(AT_FDCWD, "/usr/local/sbin", O_RDONLY|O_CLOEXEC) = 3
openat(AT_FDCWD, "/usr/local/bin", O_RDONLY|O_CLOEXEC) = 3
openat(AT_FDCWD, "/usr/sbin", O_RDONLY|O_CLOEXEC) = 3
openat(AT_FDCWD, "/usr/bin", O_RDONLY|O_CLOEXEC) = 3
openat(AT_FDCWD, "/root/bin", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/root/.kube/config", O_RDONLY|O_CLOEXEC) = 3
Client Version: v1.29.14
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
The connection to the server 198.58.126.88:6443 was refused - did you specify the right host or port?
+++ exited with 1 +++
nftables before and after executing the kubeadm join command in the worker:
Chain KUBE-IPVS-FILTER (0 references)
target     prot opt source               destination
RETURN     all  --  anywhere             anywhere             match-set KUBE-LOAD-BALANCER dst,dst
RETURN     all  --  anywhere             anywhere             match-set KUBE-CLUSTER-IP dst,dst
RETURN     all  --  anywhere             anywhere             match-set KUBE-EXTERNAL-IP dst,dst
RETURN     all  --  anywhere             anywhere             match-set KUBE-EXTERNAL-IP-LOCAL dst,dst
RETURN     all  --  anywhere             anywhere             match-set KUBE-HEALTH-CHECK-NODE-PORT dst
REJECT     all  --  anywhere             anywhere             ctstate NEW match-set KUBE-IPVS-IPS dst reject-with icmp-port-unreachable
[root@worker-1 ~]# sudo iptables -S
-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT ACCEPT
-N KUBE-FIREWALL
-N KUBE-KUBELET-CANARY
-N KUBE-FORWARD
-N KUBE-NODE-PORT
-N KUBE-PROXY-FIREWALL
-N KUBE-SOURCE-RANGES-FIREWALL
-N KUBE-IPVS-FILTER
-N KUBE-IPVS-OUT-FILTER
-A INPUT -m comment --comment "kubernetes ipvs access filter" -j KUBE-IPVS-FILTER
-A INPUT -m comment --comment "kube-proxy firewall rules" -j KUBE-PROXY-FIREWALL
-A INPUT -m comment --comment "kubernetes health check rules" -j KUBE-NODE-PORT
-A FORWARD -m comment --comment "kube-proxy firewall rules" -j KUBE-PROXY-FIREWALL
-A FORWARD -m comment --comment "kubernetes forwarding rules" -j KUBE-FORWARD
-A OUTPUT -m comment --comment "kubernetes ipvs access filter" -j KUBE-IPVS-OUT-FILTER
-A OUTPUT -j KUBE-FIREWALL
-A KUBE-FIREWALL ! -s 127.0.0.0/8 -d 127.0.0.0/8 -m comment --comment "block incoming localnet connections" -m conntrack ! --ctstate RELATED,ESTABLISHED,DNAT -j DROP
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding rules" -j ACCEPT
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding conntrack rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A KUBE-NODE-PORT -m comment --comment "Kubernetes health check node port" -m set --match-set KUBE-HEALTH-CHECK-NODE-PORT dst -j ACCEPT
-A KUBE-SOURCE-RANGES-FIREWALL -j DROP
-A KUBE-IPVS-FILTER -m set --match-set KUBE-LOAD-BALANCER dst,dst -j RETURN
-A KUBE-IPVS-FILTER -m set --match-set KUBE-CLUSTER-IP dst,dst -j RETURN
-A KUBE-IPVS-FILTER -m set --match-set KUBE-EXTERNAL-IP dst,dst -j RETURN
-A KUBE-IPVS-FILTER -m set --match-set KUBE-EXTERNAL-IP-LOCAL dst,dst -j RETURN
-A KUBE-IPVS-FILTER -m set --match-set KUBE-HEALTH-CHECK-NODE-PORT dst -j RETURN
-A KUBE-IPVS-FILTER -m conntrack --ctstate NEW -m set --match-set KUBE-IPVS-IPS dst -j REJECT --reject-with icmp-port-unreachable
The blocked connection from the worker to the master starts to happen as soon as the kubelet service is running; if the kubelet service is stopped, then I can ping the master from the worker again. What might be causing this blocking on the worker node? Thanks.
Rafael Mora (101 rep)
Mar 10, 2025, 02:17 AM • Last activity: Mar 15, 2025, 03:20 AM
0 votes
0 answers
27 views
SSL status: PROVISIONING for more than 2 hours
I've created a certificate, but its status has been PROVISIONING for more than 2 hours.
gcloud compute ssl-certificates describe mcrt-94a7195a-ffff-ffff-ffff-fb16fda2bf5f
creationTimestamp: '2025-02-24T03:17:40.212-08:00'
id: '7909269187352193979'
kind: compute#sslCertificate
managed:
  domainStatus:
    app.apis-charlieai-ml.com: FAILED_NOT_VISIBLE
  domains:
  - app.apis-charlieai-ml.com
  status: PROVISIONING
name: mcrt-94a7195a-ffff-ffff-ffff-fb16fda2bf5f
selfLink: https://www.googleapis.com/compute/v1/projects/charlie-ai/global/sslCertificates/mcrt-94a7195a-ffff-ffff-ffff-fb16fda2bf5f 
type: MANAGED
Is it ok? How long will it take to change from PROVISIONING to any other status?
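For reference, a FAILED_NOT_VISIBLE domain status on a Google-managed certificate generally means the domain's DNS does not (yet) resolve to the load balancer's IP, and the certificate stays in PROVISIONING until that is fixed. Assuming the mcrt- certificate was created by GKE's ManagedCertificate controller, a minimal sketch of the underlying object looks like this (the resource name is a placeholder):
```
apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: app-cert                   # hypothetical name
spec:
  domains:
  - app.apis-charlieai-ml.com
```
Provisioning usually only completes once the certificate is referenced from an Ingress (via the networking.gke.io/managed-certificates annotation) and the domain's DNS A record points at that Ingress's external IP; after that it can still take up to roughly an hour.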
Yaroslav Minieiev (1 rep)
Feb 24, 2025, 02:33 PM • Last activity: Feb 24, 2025, 02:58 PM